* [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ
@ 2023-07-06 19:12 Eugenio Pérez
  2023-07-06 19:12 ` [RFC PATCH 1/6] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Eugenio Pérez @ 2023-07-06 19:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Michael S. Tsirkin, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

At this moment the migration of net features that depend on CVQ is not
possible, as there is no reliable way to restore device state such as the
MAC address, the number of enabled queues, etc. to the destination.  This
is mainly because the device must read only CVQ, and process all of its
commands, before resuming the dataplane.

This RFC lifts that requirement by sending the VHOST_VDPA_SET_VRING_ENABLE
ioctl for the dataplane vqs only after the device has processed all CVQ
commands.  Whether this method is valid, or whether it must be signalled
by the parent driver somehow, is still under discussion.  In case it is
valid, this code allows testing a vDPA device for it.
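
For reference, the core of the approach can be summarized as follows.  This
is a simplified sketch of what the last patches of this series do, reusing
the helper exported in patch 1; it is illustrative, not new code:

/*
 * DRIVER_OK is sent with only CVQ enabled.  The net backend restores the
 * device state (MAC, MQ, ...) through CVQ in vhost_vdpa_net_load(), and
 * only at the end of that enables the dataplane vqs:
 */
for (int i = 0; i < v->dev->vq_index; ++i) {
    int r = vhost_vdpa_set_vring_ready(v, i);
    if (unlikely(r)) {
        return r;
    }
}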

Eugenio Pérez (6):
  vdpa: export vhost_vdpa_set_vring_ready
  vdpa: add should_enable op
  vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
  vdpa: add stub vhost_vdpa_should_enable
  vdpa: delay enable of data vqs
  vdpa: remove net cvq migration blocker

 include/hw/virtio/vhost-vdpa.h |  9 +++++++
 hw/virtio/vhost-vdpa.c         | 33 +++++++++++++++++++------
 net/vhost-vdpa.c               | 45 +++++++++++++++++++++++++---------
 hw/virtio/trace-events         |  2 +-
 4 files changed, 68 insertions(+), 21 deletions(-)

-- 
2.39.3





* [RFC PATCH 1/6] vdpa: export vhost_vdpa_set_vring_ready
  2023-07-06 19:12 [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
@ 2023-07-06 19:12 ` Eugenio Pérez
  2023-07-10  3:19   ` Jason Wang
  2023-07-06 19:12 ` [RFC PATCH 2/6] vdpa: add should_enable op Eugenio Pérez
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Eugenio Pérez @ 2023-07-06 19:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Michael S. Tsirkin, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

The vhost-vdpa net backend needs to enable vrings in a different order
than the default, so export this function.

No functional change intended except for tracing, which now includes the
(virtio) index being enabled and the return value of the ioctl.

The return value of this function is still ignored when called from
vhost_vdpa_dev_start, as reorganizing the calling code around it is out
of the scope of this series.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |  1 +
 hw/virtio/vhost-vdpa.c         | 28 ++++++++++++++++++++--------
 hw/virtio/trace-events         |  2 +-
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index e64bfc7f98..5407d54fd7 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -57,6 +57,7 @@ typedef struct vhost_vdpa {
 } VhostVDPA;
 
 int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
+int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx);
 
 int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
                        hwaddr size, void *vaddr, bool readonly);
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 3c575a9a6e..5978d970ee 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -528,6 +528,19 @@ int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range)
     return ret < 0 ? -errno : 0;
 }
 
+int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx)
+{
+    struct vhost_dev *dev = v->dev;
+    struct vhost_vring_state state = {
+        .index = idx,
+        .num = 1,
+    };
+    int r = vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
+
+    trace_vhost_vdpa_set_vring_ready(dev, idx, r);
+    return r;
+}
+
 /*
  * The use of this function is for requests that only need to be
  * applied once. Typically such request occurs at the beginning
@@ -872,16 +885,15 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
     return idx;
 }
 
-static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
+static int vhost_vdpa_set_vrings_ready(struct vhost_dev *dev)
 {
+    struct vhost_vdpa *v = dev->opaque;
     int i;
-    trace_vhost_vdpa_set_vring_ready(dev);
+
+    assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
+
     for (i = 0; i < dev->nvqs; ++i) {
-        struct vhost_vring_state state = {
-            .index = dev->vq_index + i,
-            .num = 1,
-        };
-        vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
+        vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
     }
     return 0;
 }
@@ -1294,7 +1306,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         if (unlikely(!ok)) {
             return -1;
         }
-        vhost_vdpa_set_vring_ready(dev);
+        vhost_vdpa_set_vrings_ready(dev);
     } else {
         vhost_vdpa_suspend(dev);
         vhost_vdpa_svqs_stop(dev);
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 8f8d05cf9b..4f6a6ba428 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -46,7 +46,7 @@ vhost_vdpa_set_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRI
 vhost_vdpa_get_device_id(void *dev, uint32_t device_id) "dev: %p device_id %"PRIu32
 vhost_vdpa_reset_device(void *dev, uint8_t status) "dev: %p status: 0x%"PRIx8
 vhost_vdpa_get_vq_index(void *dev, int idx, int vq_idx) "dev: %p idx: %d vq idx: %d"
-vhost_vdpa_set_vring_ready(void *dev) "dev: %p"
+vhost_vdpa_set_vring_ready(void *dev, unsigned i, int r) "dev: %p, idx: %u, r: %d"
 vhost_vdpa_dump_config(void *dev, const char *line) "dev: %p %s"
 vhost_vdpa_set_config(void *dev, uint32_t offset, uint32_t size, uint32_t flags) "dev: %p offset: %"PRIu32" size: %"PRIu32" flags: 0x%"PRIx32
 vhost_vdpa_get_config(void *dev, void *config, uint32_t config_len) "dev: %p config: %p config_len: %"PRIu32
-- 
2.39.3




* [RFC PATCH 2/6] vdpa: add should_enable op
  2023-07-06 19:12 [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
  2023-07-06 19:12 ` [RFC PATCH 1/6] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
@ 2023-07-06 19:12 ` Eugenio Pérez
  2023-07-10  3:46   ` Jason Wang
  2023-07-06 19:12 ` [RFC PATCH 3/6] vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready Eugenio Pérez
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Eugenio Pérez @ 2023-07-06 19:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Michael S. Tsirkin, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

To restore the device at the destination of a live migration we send the
commands through the control virtqueue.  For a device to read CVQ it must
have received the DRIVER_OK status bit.

However, this opens a window where the device could start receiving
packets in rx queue 0 before it receives the RSS configuration.  To
avoid that, we will not send vring_enable until all the configuration has
been applied by the device.

As a first step, add a new virtio ops struct per vhost_vdpa device to
know whether we should enable a virtqueue or not.  This struct can be
reused in the future to add more actions to vhost_vdpa that depend on
the virtio device kind.
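
For illustration, this is roughly how the op ends up wired and consulted
later in this series (a sketch; see patches 3 and 4 for the actual code):

/* net backend: provide the op and register it at init time */
static bool vhost_vdpa_should_enable(const struct vhost_vdpa *v)
{
    return true;
}

static const VhostVDPAVirtIOOps vhost_vdpa_virtio_net_ops = {
    .should_enable = vhost_vdpa_should_enable,
};

s->vhost_vdpa.virtio_ops = &vhost_vdpa_virtio_net_ops;

/* generic vhost-vdpa: consult it before enabling the vrings */
if (v->virtio_ops && v->virtio_ops->should_enable &&
    !v->virtio_ops->should_enable(v)) {
    return 0;
}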

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 5407d54fd7..3d330d439a 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -30,6 +30,13 @@ typedef struct VhostVDPAHostNotifier {
     void *addr;
 } VhostVDPAHostNotifier;
 
+struct vhost_vdpa;
+typedef bool (*vhost_vdpa_virtio_should_enable_op)(const struct vhost_vdpa *v);
+
+typedef struct VhostVDPAVirtIOOps {
+    vhost_vdpa_virtio_should_enable_op should_enable;
+} VhostVDPAVirtIOOps;
+
 typedef struct vhost_vdpa {
     int device_fd;
     int index;
@@ -48,6 +55,7 @@ typedef struct vhost_vdpa {
     VhostIOVATree *iova_tree;
     GPtrArray *shadow_vqs;
     const VhostShadowVirtqueueOps *shadow_vq_ops;
+    const VhostVDPAVirtIOOps *virtio_ops;
     void *shadow_vq_ops_opaque;
     struct vhost_dev *dev;
     Error *migration_blocker;
-- 
2.39.3




* [RFC PATCH 3/6] vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
  2023-07-06 19:12 [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
  2023-07-06 19:12 ` [RFC PATCH 1/6] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
  2023-07-06 19:12 ` [RFC PATCH 2/6] vdpa: add should_enable op Eugenio Pérez
@ 2023-07-06 19:12 ` Eugenio Pérez
  2023-07-06 19:12 ` [RFC PATCH 4/6] vdpa: add stub vhost_vdpa_should_enable Eugenio Pérez
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Eugenio Pérez @ 2023-07-06 19:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Michael S. Tsirkin, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

This allows skipping the rings that QEMU does not want to enable.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 5978d970ee..f51f5d9969 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -892,6 +892,11 @@ static int vhost_vdpa_set_vrings_ready(struct vhost_dev *dev)
 
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
 
+    if (v->virtio_ops && v->virtio_ops->should_enable
+        && !(v->virtio_ops->should_enable(v))) {
+        return 0;
+    }
+
     for (i = 0; i < dev->nvqs; ++i) {
         vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
     }
-- 
2.39.3




* [RFC PATCH 4/6] vdpa: add stub vhost_vdpa_should_enable
  2023-07-06 19:12 [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (2 preceding siblings ...)
  2023-07-06 19:12 ` [RFC PATCH 3/6] vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready Eugenio Pérez
@ 2023-07-06 19:12 ` Eugenio Pérez
  2023-07-06 19:12 ` [RFC PATCH 5/6] vdpa: delay enable of data vqs Eugenio Pérez
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Eugenio Pérez @ 2023-07-06 19:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Michael S. Tsirkin, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

To restore the device at the destination of a live migration we send the
commands through the control virtqueue.  For a device to read CVQ it must
have received the DRIVER_OK status bit.

However, this opens a window where the device could start receiving
packets in rx queue 0 before it receives the RSS configuration.  To
avoid that, we will not send vring_enable until all the configuration has
been applied by the device.

Add the vhost-vdpa net vhost_vdpa_should_enable stub.  This commit does
not change the behavior; it only introduces the op.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e19ab063fa..536bab8613 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -826,6 +826,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
     .avail_handler = vhost_vdpa_net_handle_ctrl_avail,
 };
 
+static bool vhost_vdpa_should_enable(const struct vhost_vdpa *v)
+{
+    return true;
+}
+
+static const VhostVDPAVirtIOOps vhost_vdpa_virtio_net_ops = {
+    .should_enable = vhost_vdpa_should_enable,
+};
+
 /**
  * Probe if CVQ is isolated
  *
@@ -949,6 +958,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.shadow_vqs_enabled = svq;
     s->vhost_vdpa.iova_range = iova_range;
     s->vhost_vdpa.shadow_data = svq;
+    s->vhost_vdpa.virtio_ops = &vhost_vdpa_virtio_net_ops;
     if (queue_pair_index == 0) {
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
-- 
2.39.3




* [RFC PATCH 5/6] vdpa: delay enable of data vqs
  2023-07-06 19:12 [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (3 preceding siblings ...)
  2023-07-06 19:12 ` [RFC PATCH 4/6] vdpa: add stub vhost_vdpa_should_enable Eugenio Pérez
@ 2023-07-06 19:12 ` Eugenio Pérez
  2023-07-06 19:12 ` [RFC PATCH 6/6] vdpa: remove net cvq migration blocker Eugenio Pérez
  2023-07-06 20:02 ` [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Michael S. Tsirkin
  6 siblings, 0 replies; 16+ messages in thread
From: Eugenio Pérez @ 2023-07-06 19:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Michael S. Tsirkin, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

To restore the device at the destination of a live migration we send
the commands through the control virtqueue.  For a device to read CVQ it
must have received the DRIVER_OK status bit.

However, this opens a window where the device could start receiving
packets in rx queue 0 before it receives the RSS configuration.  To
avoid that, we do not send vring_enable until all the configuration has
been applied by the device.
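
For a virtio-net vhost-vdpa device with N data queue pairs plus CVQ, the
total number of virtqueues (dev->vq_index_end) is 2 * N + 1, so an odd
vq_index_end means the device has CVQ.  As a worked example of the check
added below, assume a hypothetical setup with 4 data queue pairs plus CVQ:

/*
 * vq_index_end = 2 * 4 + 1 = 9           -> odd, the device has CVQ
 * data vhost_devs: vq_index = 0, 2, 4, 6 -> vq_index + 1 != 9, so they are
 *                                           not enabled at DRIVER_OK
 * CVQ vhost_dev:   vq_index = 8          -> vq_index + 1 == 9, so it is
 *                                           enabled to load the device state
 */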

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 536bab8613..d5970e9f06 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -739,6 +739,13 @@ static int vhost_vdpa_net_load(NetClientState *nc)
         return r;
     }
 
+    for (int i = 0; i < v->dev->vq_index; ++i) {
+        r = vhost_vdpa_set_vring_ready(v, i);
+        if (unlikely(r)) {
+            return r;
+        }
+    }
+
     return 0;
 }
 
@@ -826,9 +833,25 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
     .avail_handler = vhost_vdpa_net_handle_ctrl_avail,
 };
 
+/**
+ * Check if a vhost_vdpa device's vrings should be enabled before DRIVER_OK
+ *
+ * CVQ must always start first if we want to restore the state safely. Do not
+ * start data vqs if the device has CVQ.
+ */
 static bool vhost_vdpa_should_enable(const struct vhost_vdpa *v)
 {
-    return true;
+    struct vhost_dev *dev = v->dev;
+
+    if (!(dev->vq_index_end % 2)) {
+        /* vDPA device does not have CVQ */
+        return true;
+    }
+
+    /*
+     * We're evaluating CVQ; enable it to send the control cmds that load device state.
+     */
+    return dev->vq_index + 1 == dev->vq_index_end;
 }
 
 static const VhostVDPAVirtIOOps vhost_vdpa_virtio_net_ops = {
-- 
2.39.3




* [RFC PATCH 6/6] vdpa: remove net cvq migration blocker
  2023-07-06 19:12 [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (4 preceding siblings ...)
  2023-07-06 19:12 ` [RFC PATCH 5/6] vdpa: delay enable of data vqs Eugenio Pérez
@ 2023-07-06 19:12 ` Eugenio Pérez
  2023-07-10  3:53   ` Jason Wang
  2023-07-06 20:02 ` [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Michael S. Tsirkin
  6 siblings, 1 reply; 16+ messages in thread
From: Eugenio Pérez @ 2023-07-06 19:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Michael S. Tsirkin, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

Now that we have added migration blockers for devices that do not support
all the needed features, remove the general blocker applied to all net
devices with CVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index d5970e9f06..5432b50498 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
         s->cvq_isolated = cvq_isolated;
-
-        /*
-         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
-         * there is no way to set the device state (MAC, MQ, etc) before
-         * starting the datapath.
-         *
-         * Migration blocker ownership now belongs to s->vhost_vdpa.
-         */
-        if (!svq) {
-            error_setg(&s->vhost_vdpa.migration_blocker,
-                       "net vdpa cannot migrate with CVQ feature");
-        }
     }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
-- 
2.39.3




* Re: [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ
  2023-07-06 19:12 [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (5 preceding siblings ...)
  2023-07-06 19:12 ` [RFC PATCH 6/6] vdpa: remove net cvq migration blocker Eugenio Pérez
@ 2023-07-06 20:02 ` Michael S. Tsirkin
  2023-07-07  6:21   ` Eugenio Perez Martin
  6 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2023-07-06 20:02 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Laurent Vivier, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Thu, Jul 06, 2023 at 09:12:21PM +0200, Eugenio Pérez wrote:
> At this moment the migration of net features that depends on CVQ is not
> possible, as there is no reliable way to restore the device state like mac
> address, number of enabled queues, etc to the destination.  This is mainly
> caused because the device must only read CVQ, and process all the commands
> before resuming the dataplane.
> 
> This RFC lift that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE ioctl
> for dataplane vqs only after the device has processed all commands.  If this
> method is valid or not, or if it must be signalled by the parent driver
> somehow, is still under discussion.  In case it is valid, this code allows
> testing the vDPA device for it.

And you plan to add the reset trick too in a future version?

> Eugenio Pérez (6):
>   vdpa: export vhost_vdpa_set_vring_ready
>   vdpa: add should_enable op
>   vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
>   vdpa: add stub vhost_vdpa_should_enable
>   vdpa: delay enable of data vqs
>   vdpa: remove net cvq migration blocker
> 
>  include/hw/virtio/vhost-vdpa.h |  9 +++++++
>  hw/virtio/vhost-vdpa.c         | 33 +++++++++++++++++++------
>  net/vhost-vdpa.c               | 45 +++++++++++++++++++++++++---------
>  hw/virtio/trace-events         |  2 +-
>  4 files changed, 68 insertions(+), 21 deletions(-)
> 
> -- 
> 2.39.3
> 




* Re: [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ
  2023-07-06 20:02 ` [RFC PATCH 0/6] Enable vdpa net migration with features depending on CVQ Michael S. Tsirkin
@ 2023-07-07  6:21   ` Eugenio Perez Martin
  0 siblings, 0 replies; 16+ messages in thread
From: Eugenio Perez Martin @ 2023-07-07  6:21 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, Laurent Vivier, Cindy Lu, si-wei.liu,
	Stefano Garzarella, Shannon Nelson, Gautam Dawar, Jason Wang,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Thu, Jul 6, 2023 at 10:02 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Jul 06, 2023 at 09:12:21PM +0200, Eugenio Pérez wrote:
> > At this moment the migration of net features that depends on CVQ is not
> > possible, as there is no reliable way to restore the device state like mac
> > address, number of enabled queues, etc to the destination.  This is mainly
> > caused because the device must only read CVQ, and process all the commands
> > before resuming the dataplane.
> >
> > This RFC lift that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE ioctl
> > for dataplane vqs only after the device has processed all commands.  If this
> > method is valid or not, or if it must be signalled by the parent driver
> > somehow, is still under discussion.  In case it is valid, this code allows
> > testing the vDPA device for it.
>
> And you plan to add the reset trick too in a future version?
>

I'll try to come up with a version today.

At the current state of vDPA devices, it will just serve to enable
vp_vdpa to be the destination of a migration. But vp_vdpa cannot
migrate the guest again, so it is a dead end. I can try to add
_F_RING_RESET to vdpa_sim if that makes sense.

Anyway, as HW implements _F_RING_RESET, there should be no need to make
more changes to QEMU, so we should be good.

> > Eugenio Pérez (6):
> >   vdpa: export vhost_vdpa_set_vring_ready
> >   vdpa: add should_enable op
> >   vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
> >   vdpa: add stub vhost_vdpa_should_enable
> >   vdpa: delay enable of data vqs
> >   vdpa: remove net cvq migration blocker
> >
> >  include/hw/virtio/vhost-vdpa.h |  9 +++++++
> >  hw/virtio/vhost-vdpa.c         | 33 +++++++++++++++++++------
> >  net/vhost-vdpa.c               | 45 +++++++++++++++++++++++++---------
> >  hw/virtio/trace-events         |  2 +-
> >  4 files changed, 68 insertions(+), 21 deletions(-)
> >
> > --
> > 2.39.3
> >
>




* Re: [RFC PATCH 1/6] vdpa: export vhost_vdpa_set_vring_ready
  2023-07-06 19:12 ` [RFC PATCH 1/6] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
@ 2023-07-10  3:19   ` Jason Wang
  2023-07-10  8:43     ` Eugenio Perez Martin
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2023-07-10  3:19 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Laurent Vivier, Michael S. Tsirkin, Cindy Lu,
	si-wei.liu, Stefano Garzarella, Shannon Nelson, Gautam Dawar,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> The vhost-vdpa net backend needs to enable vrings in a different order
> than default, so export it.
>
> No functional change intended except for tracing, that now includes the
> (virtio) index being enabled and the return value of the ioctl.
>
> Still ignoring return value of this function if called from
> vhost_vdpa_dev_start, as reorganize calling code around it is out of
> the scope of this series.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  include/hw/virtio/vhost-vdpa.h |  1 +
>  hw/virtio/vhost-vdpa.c         | 28 ++++++++++++++++++++--------
>  hw/virtio/trace-events         |  2 +-
>  3 files changed, 22 insertions(+), 9 deletions(-)
>
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index e64bfc7f98..5407d54fd7 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -57,6 +57,7 @@ typedef struct vhost_vdpa {
>  } VhostVDPA;
>
>  int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
> +int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx);
>
>  int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
>                         hwaddr size, void *vaddr, bool readonly);
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 3c575a9a6e..5978d970ee 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -528,6 +528,19 @@ int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range)
>      return ret < 0 ? -errno : 0;
>  }
>
> +int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx)
> +{
> +    struct vhost_dev *dev = v->dev;
> +    struct vhost_vring_state state = {
> +        .index = idx,
> +        .num = 1,
> +    };
> +    int r = vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
> +
> +    trace_vhost_vdpa_set_vring_ready(dev, idx, r);
> +    return r;
> +}
> +
>  /*
>   * The use of this function is for requests that only need to be
>   * applied once. Typically such request occurs at the beginning
> @@ -872,16 +885,15 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
>      return idx;
>  }
>
> -static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
> +static int vhost_vdpa_set_vrings_ready(struct vhost_dev *dev)
>  {
> +    struct vhost_vdpa *v = dev->opaque;
>      int i;
> -    trace_vhost_vdpa_set_vring_ready(dev);
> +
> +    assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);

Nit: any reason we need to add this assert in this patch?

Thanks

> +
>      for (i = 0; i < dev->nvqs; ++i) {
> -        struct vhost_vring_state state = {
> -            .index = dev->vq_index + i,
> -            .num = 1,
> -        };
> -        vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
> +        vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
>      }
>      return 0;
>  }
> @@ -1294,7 +1306,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
>          if (unlikely(!ok)) {
>              return -1;
>          }
> -        vhost_vdpa_set_vring_ready(dev);
> +        vhost_vdpa_set_vrings_ready(dev);
>      } else {
>          vhost_vdpa_suspend(dev);
>          vhost_vdpa_svqs_stop(dev);
> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> index 8f8d05cf9b..4f6a6ba428 100644
> --- a/hw/virtio/trace-events
> +++ b/hw/virtio/trace-events
> @@ -46,7 +46,7 @@ vhost_vdpa_set_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRI
>  vhost_vdpa_get_device_id(void *dev, uint32_t device_id) "dev: %p device_id %"PRIu32
>  vhost_vdpa_reset_device(void *dev, uint8_t status) "dev: %p status: 0x%"PRIx8
>  vhost_vdpa_get_vq_index(void *dev, int idx, int vq_idx) "dev: %p idx: %d vq idx: %d"
> -vhost_vdpa_set_vring_ready(void *dev) "dev: %p"
> +vhost_vdpa_set_vring_ready(void *dev, unsigned i, int r) "dev: %p, idx: %u, r: %d"
>  vhost_vdpa_dump_config(void *dev, const char *line) "dev: %p %s"
>  vhost_vdpa_set_config(void *dev, uint32_t offset, uint32_t size, uint32_t flags) "dev: %p offset: %"PRIu32" size: %"PRIu32" flags: 0x%"PRIx32
>  vhost_vdpa_get_config(void *dev, void *config, uint32_t config_len) "dev: %p config: %p config_len: %"PRIu32
> --
> 2.39.3
>




* Re: [RFC PATCH 2/6] vdpa: add should_enable op
  2023-07-06 19:12 ` [RFC PATCH 2/6] vdpa: add should_enable op Eugenio Pérez
@ 2023-07-10  3:46   ` Jason Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2023-07-10  3:46 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Laurent Vivier, Michael S. Tsirkin, Cindy Lu,
	si-wei.liu, Stefano Garzarella, Shannon Nelson, Gautam Dawar,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> To restore the device at the destination of a live migration we send the
> commands through control virtqueue.  For a device to read CVQ it must
> have received the DRIVER_OK status bit.
>
> However this opens a window where the device could start receiving
> packets in rx queue 0 before it receives the RSS configuration.

I think this can also happen in the case of automatic steering.

> To
> avoid that, we will not send vring_enable until all configuration is
> used by the device.
>
> As a first step, enable a new vitio ops per vhost_vdpa device to know if
> we should enable a virtqueue or not.  This srtuct can be reused in the

typo.

Thanks

> future to add more actions to vhost_vdpa that depend on the virtIO kind
> of device.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  include/hw/virtio/vhost-vdpa.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index 5407d54fd7..3d330d439a 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -30,6 +30,13 @@ typedef struct VhostVDPAHostNotifier {
>      void *addr;
>  } VhostVDPAHostNotifier;
>
> +struct vhost_vdpa;
> +typedef bool (*vhost_vdpa_virtio_should_enable_op)(const struct vhost_vdpa *v);
> +
> +typedef struct VhostVDPAVirtIOOps {
> +    vhost_vdpa_virtio_should_enable_op should_enable;
> +} VhostVDPAVirtIOOps;
> +
>  typedef struct vhost_vdpa {
>      int device_fd;
>      int index;
> @@ -48,6 +55,7 @@ typedef struct vhost_vdpa {
>      VhostIOVATree *iova_tree;
>      GPtrArray *shadow_vqs;
>      const VhostShadowVirtqueueOps *shadow_vq_ops;
> +    const VhostVDPAVirtIOOps *virtio_ops;
>      void *shadow_vq_ops_opaque;
>      struct vhost_dev *dev;
>      Error *migration_blocker;
> --
> 2.39.3
>




* Re: [RFC PATCH 6/6] vdpa: remove net cvq migration blocker
  2023-07-06 19:12 ` [RFC PATCH 6/6] vdpa: remove net cvq migration blocker Eugenio Pérez
@ 2023-07-10  3:53   ` Jason Wang
  2023-07-10  7:37     ` Eugenio Perez Martin
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2023-07-10  3:53 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Laurent Vivier, Michael S. Tsirkin, Cindy Lu,
	si-wei.liu, Stefano Garzarella, Shannon Nelson, Gautam Dawar,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Now that we have add migration blockers if the device does not support
> all the needed features, remove the general blocker applied to all net
> devices with CVQ.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

I wonder what the difference is compared to just starting cvq first in
vhost_net_start()?

Thanks

> ---
>  net/vhost-vdpa.c | 12 ------------
>  1 file changed, 12 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index d5970e9f06..5432b50498 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
>          s->vhost_vdpa.shadow_vq_ops_opaque = s;
>          s->cvq_isolated = cvq_isolated;
> -
> -        /*
> -         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> -         * there is no way to set the device state (MAC, MQ, etc) before
> -         * starting the datapath.
> -         *
> -         * Migration blocker ownership now belongs to s->vhost_vdpa.
> -         */
> -        if (!svq) {
> -            error_setg(&s->vhost_vdpa.migration_blocker,
> -                       "net vdpa cannot migrate with CVQ feature");
> -        }
>      }
>      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
>      if (ret) {
> --
> 2.39.3
>




* Re: [RFC PATCH 6/6] vdpa: remove net cvq migration blocker
  2023-07-10  3:53   ` Jason Wang
@ 2023-07-10  7:37     ` Eugenio Perez Martin
  2023-07-20  8:17       ` Eugenio Perez Martin
  0 siblings, 1 reply; 16+ messages in thread
From: Eugenio Perez Martin @ 2023-07-10  7:37 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Laurent Vivier, Michael S. Tsirkin, Cindy Lu,
	si-wei.liu, Stefano Garzarella, Shannon Nelson, Gautam Dawar,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > Now that we have add migration blockers if the device does not support
> > all the needed features, remove the general blocker applied to all net
> > devices with CVQ.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>
> I wonder what's the difference compared if we just start cvq first in
> vhost_net_start()?
>

That's interesting. It would complicate the for loop in
vhost_net_start, but I think it's less complicated than
should_start_op.

It would be something like moving from:

for (i = 0; i < nvhosts; i++) {
    if (i < data_queue_pairs) {
        peer = qemu_get_peer(ncs, i);
    } else {
        peer = qemu_get_peer(ncs, n->max_queue_pairs);
    }

    ...

    r = vhost_net_start_one(get_vhost_net(peer), dev);
    if (r < 0) {
        goto err_start;
    }
}

To:

for (i = 0; i < nvhosts; i++) {
    if (i == 0 && cvq) {
        peer = qemu_get_peer(ncs, n->max_queue_pairs);
    } else {
        peer = qemu_get_peer(ncs, i - cvq);
    }

    ...

    r = vhost_net_start_one(get_vhost_net(peer), dev);
    if (r < 0) {
        goto err_start;
    }
}

Is this what you have in mind?

Thanks!

> Thanks
>
> > ---
> >  net/vhost-vdpa.c | 12 ------------
> >  1 file changed, 12 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index d5970e9f06..5432b50498 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
> >          s->vhost_vdpa.shadow_vq_ops_opaque = s;
> >          s->cvq_isolated = cvq_isolated;
> > -
> > -        /*
> > -         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> > -         * there is no way to set the device state (MAC, MQ, etc) before
> > -         * starting the datapath.
> > -         *
> > -         * Migration blocker ownership now belongs to s->vhost_vdpa.
> > -         */
> > -        if (!svq) {
> > -            error_setg(&s->vhost_vdpa.migration_blocker,
> > -                       "net vdpa cannot migrate with CVQ feature");
> > -        }
> >      }
> >      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
> >      if (ret) {
> > --
> > 2.39.3
> >
>




* Re: [RFC PATCH 1/6] vdpa: export vhost_vdpa_set_vring_ready
  2023-07-10  3:19   ` Jason Wang
@ 2023-07-10  8:43     ` Eugenio Perez Martin
  0 siblings, 0 replies; 16+ messages in thread
From: Eugenio Perez Martin @ 2023-07-10  8:43 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Laurent Vivier, Michael S. Tsirkin, Cindy Lu,
	si-wei.liu, Stefano Garzarella, Shannon Nelson, Gautam Dawar,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Mon, Jul 10, 2023 at 5:19 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > The vhost-vdpa net backend needs to enable vrings in a different order
> > than default, so export it.
> >
> > No functional change intended except for tracing, that now includes the
> > (virtio) index being enabled and the return value of the ioctl.
> >
> > Still ignoring return value of this function if called from
> > vhost_vdpa_dev_start, as reorganize calling code around it is out of
> > the scope of this series.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  include/hw/virtio/vhost-vdpa.h |  1 +
> >  hw/virtio/vhost-vdpa.c         | 28 ++++++++++++++++++++--------
> >  hw/virtio/trace-events         |  2 +-
> >  3 files changed, 22 insertions(+), 9 deletions(-)
> >
> > diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> > index e64bfc7f98..5407d54fd7 100644
> > --- a/include/hw/virtio/vhost-vdpa.h
> > +++ b/include/hw/virtio/vhost-vdpa.h
> > @@ -57,6 +57,7 @@ typedef struct vhost_vdpa {
> >  } VhostVDPA;
> >
> >  int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
> > +int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx);
> >
> >  int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> >                         hwaddr size, void *vaddr, bool readonly);
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index 3c575a9a6e..5978d970ee 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -528,6 +528,19 @@ int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range)
> >      return ret < 0 ? -errno : 0;
> >  }
> >
> > +int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx)
> > +{
> > +    struct vhost_dev *dev = v->dev;
> > +    struct vhost_vring_state state = {
> > +        .index = idx,
> > +        .num = 1,
> > +    };
> > +    int r = vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
> > +
> > +    trace_vhost_vdpa_set_vring_ready(dev, idx, r);
> > +    return r;
> > +}
> > +
> >  /*
> >   * The use of this function is for requests that only need to be
> >   * applied once. Typically such request occurs at the beginning
> > @@ -872,16 +885,15 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
> >      return idx;
> >  }
> >
> > -static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
> > +static int vhost_vdpa_set_vrings_ready(struct vhost_dev *dev)
> >  {
> > +    struct vhost_vdpa *v = dev->opaque;
> >      int i;
> > -    trace_vhost_vdpa_set_vring_ready(dev);
> > +
> > +    assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
>
> Nit: any reason we need to add this assert in this patch?
>

Other vhost_ops callbacks have this assert, but sure, we can leave it out.

Thanks!

> Thanks
>
> > +
> >      for (i = 0; i < dev->nvqs; ++i) {
> > -        struct vhost_vring_state state = {
> > -            .index = dev->vq_index + i,
> > -            .num = 1,
> > -        };
> > -        vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
> > +        vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
> >      }
> >      return 0;
> >  }
> > @@ -1294,7 +1306,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
> >          if (unlikely(!ok)) {
> >              return -1;
> >          }
> > -        vhost_vdpa_set_vring_ready(dev);
> > +        vhost_vdpa_set_vrings_ready(dev);
> >      } else {
> >          vhost_vdpa_suspend(dev);
> >          vhost_vdpa_svqs_stop(dev);
> > diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> > index 8f8d05cf9b..4f6a6ba428 100644
> > --- a/hw/virtio/trace-events
> > +++ b/hw/virtio/trace-events
> > @@ -46,7 +46,7 @@ vhost_vdpa_set_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRI
> >  vhost_vdpa_get_device_id(void *dev, uint32_t device_id) "dev: %p device_id %"PRIu32
> >  vhost_vdpa_reset_device(void *dev, uint8_t status) "dev: %p status: 0x%"PRIx8
> >  vhost_vdpa_get_vq_index(void *dev, int idx, int vq_idx) "dev: %p idx: %d vq idx: %d"
> > -vhost_vdpa_set_vring_ready(void *dev) "dev: %p"
> > +vhost_vdpa_set_vring_ready(void *dev, unsigned i, int r) "dev: %p, idx: %u, r: %d"
> >  vhost_vdpa_dump_config(void *dev, const char *line) "dev: %p %s"
> >  vhost_vdpa_set_config(void *dev, uint32_t offset, uint32_t size, uint32_t flags) "dev: %p offset: %"PRIu32" size: %"PRIu32" flags: 0x%"PRIx32
> >  vhost_vdpa_get_config(void *dev, void *config, uint32_t config_len) "dev: %p config: %p config_len: %"PRIu32
> > --
> > 2.39.3
> >
>




* Re: [RFC PATCH 6/6] vdpa: remove net cvq migration blocker
  2023-07-10  7:37     ` Eugenio Perez Martin
@ 2023-07-20  8:17       ` Eugenio Perez Martin
  2023-07-20  8:23         ` Jason Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Eugenio Perez Martin @ 2023-07-20  8:17 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Laurent Vivier, Michael S. Tsirkin, Cindy Lu,
	si-wei.liu, Stefano Garzarella, Shannon Nelson, Gautam Dawar,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Mon, Jul 10, 2023 at 9:37 AM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > Now that we have add migration blockers if the device does not support
> > > all the needed features, remove the general blocker applied to all net
> > > devices with CVQ.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >
> > I wonder what's the difference compared if we just start cvq first in
> > vhost_net_start()?
> >
>
> That's interesting. It would complicate the for loop in
> vhost_net_start, but I think it's less complicated than
> should_start_op.
>
> It would be something like moving from:
>
> for (i = 0; i < nvhosts; i++) {
>     if (i < data_queue_pairs) {
>         peer = qemu_get_peer(ncs, i);
>     } else {
>         peer = qemu_get_peer(ncs, n->max_queue_pairs);
>     }
>
>     ...
>
>     r = vhost_net_start_one(get_vhost_net(peer), dev);
>     if (r < 0) {
>         goto err_start;
>     }
> }
>
> To:
>
> for (i = 0; i < nvhosts; i++) {
>     if (i == 0 && cvq) {
>         peer = qemu_get_peer(ncs, n->max_queue_pairs);
>     } else {
>         peer = qemu_get_peer(ncs, i - cvq);
>     }
>
>     ...
>
>     r = vhost_net_start_one(get_vhost_net(peer), dev);
>     if (r < 0) {
>         goto err_start;
>     }
> }
>
> Is this what you have in mind?
>

Friendly ping.

Thanks!




* Re: [RFC PATCH 6/6] vdpa: remove net cvq migration blocker
  2023-07-20  8:17       ` Eugenio Perez Martin
@ 2023-07-20  8:23         ` Jason Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2023-07-20  8:23 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, Laurent Vivier, Michael S. Tsirkin, Cindy Lu,
	si-wei.liu, Stefano Garzarella, Shannon Nelson, Gautam Dawar,
	Harpreet Singh Anand, Parav Pandit, Dragos Tatulea, Zhu Lingshan,
	Lei Yang

On Thu, Jul 20, 2023 at 4:18 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Mon, Jul 10, 2023 at 9:37 AM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > >
> > > > Now that we have add migration blockers if the device does not support
> > > > all the needed features, remove the general blocker applied to all net
> > > > devices with CVQ.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > >
> > > I wonder what's the difference compared if we just start cvq first in
> > > vhost_net_start()?
> > >
> >
> > That's interesting. It would complicate the for loop in
> > vhost_net_start, but I think it's less complicated than
> > should_start_op.
> >
> > It would be something like moving from:
> >
> > for (i = 0; i < nvhosts; i++) {
> >     if (i < data_queue_pairs) {
> >         peer = qemu_get_peer(ncs, i);
> >     } else {
> >         peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >     }
> >
> >     ...
> >
> >     r = vhost_net_start_one(get_vhost_net(peer), dev);
> >     if (r < 0) {
> >         goto err_start;
> >     }
> > }
> >
> > To:
> >
> > for (i = 0; i < nvhosts; i++) {
> >     if (i == 0 && cvq) {
> >         peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >     } else {
> >         peer = qemu_get_peer(ncs, i - cvq);
> >     }
> >
> >     ...
> >
> >     r = vhost_net_start_one(get_vhost_net(peer), dev);
> >     if (r < 0) {
> >         goto err_start;
> >     }
> > }
> >
> > Is this what you have in mind?
> >
>

Yes, something like this.

Thanks

> Friendly ping.
>
> Thanks!
>



