* [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net
@ 2022-11-16 15:05 Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 01/10] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop Eugenio Pérez
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

The control virtqueue (CVQ) is the mechanism net devices use to send changes to the device state, like
the number of active queues or the MAC address.

QEMU needs to intercept this queue so it can track these changes and is able to
migrate the device. It has been able to do so since commit 1576dbb5bbc4 ("vdpa:
Add x-svq to NetdevVhostVDPAOptions"). However, enabling x-svq implies shadowing
all of the VirtIO device's virtqueues, which hurts performance.

This series adds address space isolation, so the guest and the device keep
communicating directly through the data virtqueues (passthrough) while CVQ
communication is split in two: the guest communicates with QEMU, and QEMU
forwards the commands to the device.

This series adds new features, so it is targeted for QEMU 8.0.

Comments are welcome. Thanks!

v7:
- Never ask for number of address spaces, just react if isolation is not
  possible.
- Return ASID ioctl errors instead of masking them as if the device has
  no asid.
- Rename listener_shadow_vq to shadow_data
- Move comment on zero initialization of vhost_vdpa_dma_map above the
  functions.
- Add VHOST_VDPA_GUEST_PA_ASID macro.

v6:
- Do not allocate SVQ resources like file descriptors if SVQ cannot be used.
- Disable shadow CVQ if the device does not support it because of net
  features.

v5:
- Move vring state in vhost_vdpa_get_vring_group instead of using a
  parameter.
- Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID

v4:
- Rebased on last CVQ start series, which allocated CVQ cmd bufs at load
- Squash vhost_vdpa_cvq_group_is_independent.
- Do not check for cvq index on vhost_vdpa_net_prepare; we only have that
  one callback registered in that NetClientInfo.
- Add comment specifying behavior if device does not support _F_ASID
- Update headers to a later Linux commit so as not to remove SETUP_RNG_SEED

v3:
- Do not return an error but just print a warning if vdpa device initialization
  returns failure while getting AS num of VQ groups
- Delete extra newline

v2:
- Much as commented on series [1], handle vhost_net backend through
  NetClientInfo callbacks instead of directly.
- Fix not freeing SVQ properly when device does not support CVQ
- Add missed BIT_ULL when checking the device's backend feature for _F_ASID.

Eugenio Pérez (10):
  vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop
  vhost: set SVQ device call handler at SVQ start
  vhost: Allocate SVQ device file descriptors at device start
  vdpa: add vhost_vdpa_net_valid_svq_features
  vdpa: move SVQ vring features check to net/
  vdpa: Allocate SVQ unconditionally
  vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap
  vdpa: Store x-svq parameter in VhostVDPAState
  vdpa: Add shadow_data to vhost_vdpa
  vdpa: Always start CVQ in SVQ mode if possible

 include/hw/virtio/vhost-vdpa.h     |  16 ++-
 hw/virtio/vhost-shadow-virtqueue.c |  35 +------
 hw/virtio/vhost-vdpa.c             | 121 ++++++++++++-----------
 net/vhost-vdpa.c                   | 152 ++++++++++++++++++++++++++---
 hw/virtio/trace-events             |   4 +-
 5 files changed, 217 insertions(+), 111 deletions(-)

-- 
2.31.1




* [PATCH for 8.0 v7 01/10] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-17  5:43   ` Jason Wang
  2022-11-16 15:05 ` [PATCH for 8.0 v7 02/10] vhost: set SVQ device call handler at SVQ start Eugenio Pérez
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

These functions used to rely on v->shadow_vqs != NULL to know whether SVQ
must be started or not.

This will no longer be valid, as QEMU is going to allocate the SVQs
unconditionally (but it will only start them conditionally).

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7468e44b87..7f0ff4df5b 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1029,7 +1029,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
     Error *err = NULL;
     unsigned i;
 
-    if (!v->shadow_vqs) {
+    if (!v->shadow_vqs_enabled) {
         return true;
     }
 
@@ -1082,7 +1082,7 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
 {
     struct vhost_vdpa *v = dev->opaque;
 
-    if (!v->shadow_vqs) {
+    if (!v->shadow_vqs_enabled) {
         return;
     }
 
-- 
2.31.1



* [PATCH for 8.0 v7 02/10] vhost: set SVQ device call handler at SVQ start
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 01/10] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 03/10] vhost: Allocate SVQ device file descriptors at device start Eugenio Pérez
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

By the end of this series, CVQ is shadowed as long as the features
support it.

Since we don't know at QEMU startup whether this is supported, move the
event notifier handler setting to SVQ start instead of QEMU start.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 5bd14cad96..264ddc166d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -648,6 +648,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
 {
     size_t desc_size, driver_size, device_size;
 
+    event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
     svq->next_guest_avail_elem = NULL;
     svq->shadow_avail_idx = 0;
     svq->shadow_used_idx = 0;
@@ -704,6 +705,7 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
     g_free(svq->desc_state);
     qemu_vfree(svq->vring.desc);
     qemu_vfree(svq->vring.used);
+    event_notifier_set_handler(&svq->hdev_call, NULL);
 }
 
 /**
@@ -740,7 +742,6 @@ VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
     }
 
     event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
-    event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
     svq->iova_tree = iova_tree;
     svq->ops = ops;
     svq->ops_opaque = ops_opaque;
@@ -763,7 +764,6 @@ void vhost_svq_free(gpointer pvq)
     VhostShadowVirtqueue *vq = pvq;
     vhost_svq_stop(vq);
     event_notifier_cleanup(&vq->hdev_kick);
-    event_notifier_set_handler(&vq->hdev_call, NULL);
     event_notifier_cleanup(&vq->hdev_call);
     g_free(vq);
 }
-- 
2.31.1



* [PATCH for 8.0 v7 03/10] vhost: Allocate SVQ device file descriptors at device start
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 01/10] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 02/10] vhost: set SVQ device call handler at SVQ start Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 04/10] vdpa: add vhost_vdpa_net_valid_svq_features Eugenio Pérez
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

The next patches will start the control SVQ if possible. However, we no
longer know whether that will be possible at QEMU boot.

Delay the initialization of the device file descriptors until we know it,
at device start.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 31 ++------------------------
 hw/virtio/vhost-vdpa.c             | 35 ++++++++++++++++++++++++------
 2 files changed, 30 insertions(+), 36 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 264ddc166d..3b05bab44d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -715,43 +715,18 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
  * @iova_tree: Tree to perform descriptors translations
  * @ops: SVQ owner callbacks
  * @ops_opaque: ops opaque pointer
- *
- * Returns the new virtqueue or NULL.
- *
- * In case of error, reason is reported through error_report.
  */
 VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
                                     const VhostShadowVirtqueueOps *ops,
                                     void *ops_opaque)
 {
-    g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
-    int r;
-
-    r = event_notifier_init(&svq->hdev_kick, 0);
-    if (r != 0) {
-        error_report("Couldn't create kick event notifier: %s (%d)",
-                     g_strerror(errno), errno);
-        goto err_init_hdev_kick;
-    }
-
-    r = event_notifier_init(&svq->hdev_call, 0);
-    if (r != 0) {
-        error_report("Couldn't create call event notifier: %s (%d)",
-                     g_strerror(errno), errno);
-        goto err_init_hdev_call;
-    }
+    VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
 
     event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
     svq->iova_tree = iova_tree;
     svq->ops = ops;
     svq->ops_opaque = ops_opaque;
-    return g_steal_pointer(&svq);
-
-err_init_hdev_call:
-    event_notifier_cleanup(&svq->hdev_kick);
-
-err_init_hdev_kick:
-    return NULL;
+    return svq;
 }
 
 /**
@@ -763,7 +738,5 @@ void vhost_svq_free(gpointer pvq)
 {
     VhostShadowVirtqueue *vq = pvq;
     vhost_svq_stop(vq);
-    event_notifier_cleanup(&vq->hdev_kick);
-    event_notifier_cleanup(&vq->hdev_call);
     g_free(vq);
 }
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7f0ff4df5b..3df2775760 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -428,15 +428,11 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
 
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
-        g_autoptr(VhostShadowVirtqueue) svq;
+        VhostShadowVirtqueue *svq;
 
         svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
                             v->shadow_vq_ops_opaque);
-        if (unlikely(!svq)) {
-            error_setg(errp, "Cannot create svq %u", n);
-            return -1;
-        }
-        g_ptr_array_add(shadow_vqs, g_steal_pointer(&svq));
+        g_ptr_array_add(shadow_vqs, svq);
     }
 
     v->shadow_vqs = g_steal_pointer(&shadow_vqs);
@@ -864,11 +860,23 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
     const EventNotifier *event_notifier = &svq->hdev_kick;
     int r;
 
+    r = event_notifier_init(&svq->hdev_kick, 0);
+    if (r != 0) {
+        error_setg_errno(errp, -r, "Couldn't create kick event notifier");
+        goto err_init_hdev_kick;
+    }
+
+    r = event_notifier_init(&svq->hdev_call, 0);
+    if (r != 0) {
+        error_setg_errno(errp, -r, "Couldn't create call event notifier");
+        goto err_init_hdev_call;
+    }
+
     file.fd = event_notifier_get_fd(event_notifier);
     r = vhost_vdpa_set_vring_dev_kick(dev, &file);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Can't set device kick fd");
-        return r;
+        goto err_init_set_dev_fd;
     }
 
     event_notifier = &svq->hdev_call;
@@ -876,8 +884,18 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
     r = vhost_vdpa_set_vring_dev_call(dev, &file);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Can't set device call fd");
+        goto err_init_set_dev_fd;
     }
 
+    return 0;
+
+err_init_set_dev_fd:
+    event_notifier_set_handler(&svq->hdev_call, NULL);
+
+err_init_hdev_call:
+    event_notifier_cleanup(&svq->hdev_kick);
+
+err_init_hdev_kick:
     return r;
 }
 
@@ -1089,6 +1107,9 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
     for (unsigned i = 0; i < v->shadow_vqs->len; ++i) {
         VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
         vhost_vdpa_svq_unmap_rings(dev, svq);
+
+        event_notifier_cleanup(&svq->hdev_kick);
+        event_notifier_cleanup(&svq->hdev_call);
     }
 }
 
-- 
2.31.1



* [PATCH for 8.0 v7 04/10] vdpa: add vhost_vdpa_net_valid_svq_features
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
                   ` (2 preceding siblings ...)
  2022-11-16 15:05 ` [PATCH for 8.0 v7 03/10] vhost: Allocate SVQ device file descriptors at device start Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 05/10] vdpa: move SVQ vring features check to net/ Eugenio Pérez
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

It will be reused at vdpa device start, so let's extract it into its own function.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 6811089231..e98d5f5eac 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -106,6 +106,22 @@ VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
     return s->vhost_net;
 }
 
+static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
+{
+    uint64_t invalid_dev_features =
+        features & ~vdpa_svq_device_features &
+        /* Transport are all accepted at this point */
+        ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
+                         VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
+
+    if (invalid_dev_features) {
+        error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
+                   invalid_dev_features);
+    }
+
+    return !invalid_dev_features;
+}
+
 static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
 {
     uint32_t device_id;
@@ -675,15 +691,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     if (opts->x_svq) {
         struct vhost_vdpa_iova_range iova_range;
 
-        uint64_t invalid_dev_features =
-            features & ~vdpa_svq_device_features &
-            /* Transport are all accepted at this point */
-            ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
-                             VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
-
-        if (invalid_dev_features) {
-            error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
-                       invalid_dev_features);
+        if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
             goto err_svq;
         }
 
-- 
2.31.1



* [PATCH for 8.0 v7 05/10] vdpa: move SVQ vring features check to net/
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
                   ` (3 preceding siblings ...)
  2022-11-16 15:05 ` [PATCH for 8.0 v7 04/10] vdpa: add vhost_vdpa_net_valid_svq_features Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-17  5:44   ` Jason Wang
  2022-11-16 15:05 ` [PATCH for 8.0 v7 06/10] vdpa: Allocate SVQ unconditionally Eugenio Pérez
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

The next patches will start the control SVQ if possible. However, we no
longer know whether that will be possible at QEMU boot.

Since the moved checks will already be evaluated at net/ to know whether it
is OK to shadow CVQ, move them there.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 33 ++-------------------------------
 net/vhost-vdpa.c       |  3 ++-
 2 files changed, 4 insertions(+), 32 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 3df2775760..146f0dcb40 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -402,29 +402,9 @@ static int vhost_vdpa_get_dev_features(struct vhost_dev *dev,
     return ret;
 }
 
-static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
-                               Error **errp)
+static void vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v)
 {
     g_autoptr(GPtrArray) shadow_vqs = NULL;
-    uint64_t dev_features, svq_features;
-    int r;
-    bool ok;
-
-    if (!v->shadow_vqs_enabled) {
-        return 0;
-    }
-
-    r = vhost_vdpa_get_dev_features(hdev, &dev_features);
-    if (r != 0) {
-        error_setg_errno(errp, -r, "Can't get vdpa device features");
-        return r;
-    }
-
-    svq_features = dev_features;
-    ok = vhost_svq_valid_features(svq_features, errp);
-    if (unlikely(!ok)) {
-        return -1;
-    }
 
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
@@ -436,7 +416,6 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
     }
 
     v->shadow_vqs = g_steal_pointer(&shadow_vqs);
-    return 0;
 }
 
 static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
@@ -461,11 +440,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
     dev->opaque =  opaque ;
     v->listener = vhost_vdpa_memory_listener;
     v->msg_type = VHOST_IOTLB_MSG_V2;
-    ret = vhost_vdpa_init_svq(dev, v, errp);
-    if (ret) {
-        goto err;
-    }
-
+    vhost_vdpa_init_svq(dev, v);
     vhost_vdpa_get_iova_range(v);
 
     if (!vhost_vdpa_first_dev(dev)) {
@@ -476,10 +451,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
                                VIRTIO_CONFIG_S_DRIVER);
 
     return 0;
-
-err:
-    ram_block_discard_disable(false);
-    return ret;
 }
 
 static void vhost_vdpa_host_notifier_uninit(struct vhost_dev *dev,
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e98d5f5eac..dd9cea42d0 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -117,9 +117,10 @@ static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
     if (invalid_dev_features) {
         error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
                    invalid_dev_features);
+        return false;
     }
 
-    return !invalid_dev_features;
+    return vhost_svq_valid_features(features, errp);
 }
 
 static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
-- 
2.31.1



* [PATCH for 8.0 v7 06/10] vdpa: Allocate SVQ unconditionally
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
                   ` (4 preceding siblings ...)
  2022-11-16 15:05 ` [PATCH for 8.0 v7 05/10] vdpa: move SVQ vring features check to net/ Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-17  5:44   ` Jason Wang
  2022-11-16 15:05 ` [PATCH for 8.0 v7 07/10] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap Eugenio Pérez
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

SVQ may or may not run in a device depending on runtime conditions (for
example, whether the device can move CVQ to its own group).

Allocate the SVQ array unconditionally at startup, since it's hard to
move this allocation elsewhere.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 146f0dcb40..23efb8f49d 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -547,10 +547,6 @@ static void vhost_vdpa_svq_cleanup(struct vhost_dev *dev)
     struct vhost_vdpa *v = dev->opaque;
     size_t idx;
 
-    if (!v->shadow_vqs) {
-        return;
-    }
-
     for (idx = 0; idx < v->shadow_vqs->len; ++idx) {
         vhost_svq_stop(g_ptr_array_index(v->shadow_vqs, idx));
     }
-- 
2.31.1



* [PATCH for 8.0 v7 07/10] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
                   ` (5 preceding siblings ...)
  2022-11-16 15:05 ` [PATCH for 8.0 v7 06/10] vdpa: Allocate SVQ unconditionally Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-17  5:45   ` Jason Wang
  2022-11-16 15:05 ` [PATCH for 8.0 v7 08/10] vdpa: Store x-svq parameter in VhostVDPAState Eugenio Pérez
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

So the caller can choose which ASID the mapping is destined for.

No need to update the batch functions, as they will always be called from
memory listener updates at the moment. Memory listener updates will
always use ASID 0, as it's the passthrough ASID.

All vhost devices' ASIDs are 0 at this moment.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v7:
* Move comment on zero initialization of vhost_vdpa_dma_map above the
  functions.
* Add VHOST_VDPA_GUEST_PA_ASID macro.

v5:
* Solve conflict, now vhost_vdpa_svq_unmap_ring returns void
* Change comment on zero initialization.

v4: Add comment specifying behavior if device does not support _F_ASID

v3: Deleted unneeded space
---
 include/hw/virtio/vhost-vdpa.h | 14 ++++++++++---
 hw/virtio/vhost-vdpa.c         | 36 +++++++++++++++++++++++-----------
 net/vhost-vdpa.c               |  6 +++---
 hw/virtio/trace-events         |  4 ++--
 4 files changed, 41 insertions(+), 19 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 1111d85643..e57dfa1fd1 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -19,6 +19,12 @@
 #include "hw/virtio/virtio.h"
 #include "standard-headers/linux/vhost_types.h"
 
+/*
+ * ASID dedicated to map guest's addresses.  If SVQ is disabled it maps GPA to
+ * qemu's IOVA.  If SVQ is enabled it maps also the SVQ vring here
+ */
+#define VHOST_VDPA_GUEST_PA_ASID 0
+
 typedef struct VhostVDPAHostNotifier {
     MemoryRegion mr;
     void *addr;
@@ -29,6 +35,7 @@ typedef struct vhost_vdpa {
     int index;
     uint32_t msg_type;
     bool iotlb_batch_begin_sent;
+    uint32_t address_space_id;
     MemoryListener listener;
     struct vhost_vdpa_iova_range iova_range;
     uint64_t acked_features;
@@ -42,8 +49,9 @@ typedef struct vhost_vdpa {
     VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
 } VhostVDPA;
 
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
-                       void *vaddr, bool readonly);
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size);
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                       hwaddr size, void *vaddr, bool readonly);
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                         hwaddr size);
 
 #endif
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 23efb8f49d..1e4e1cb523 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -72,22 +72,28 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
     return false;
 }
 
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
-                       void *vaddr, bool readonly)
+/*
+ * The caller must set asid = 0 if the device does not support asid.
+ * This is not an ABI break since it is set to 0 by the initializer anyway.
+ */
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                       hwaddr size, void *vaddr, bool readonly)
 {
     struct vhost_msg_v2 msg = {};
     int fd = v->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
+    msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
     msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
     msg.iotlb.type = VHOST_IOTLB_UPDATE;
 
-   trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.iotlb.iova, msg.iotlb.size,
-                            msg.iotlb.uaddr, msg.iotlb.perm, msg.iotlb.type);
+    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.asid, msg.iotlb.iova,
+                             msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
+                             msg.iotlb.type);
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -98,18 +104,24 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
     return ret;
 }
 
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size)
+/*
+ * The caller must set asid = 0 if the device does not support asid.
+ * This is not an ABI break since it is set to 0 by the initializer anyway.
+ */
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                         hwaddr size)
 {
     struct vhost_msg_v2 msg = {};
     int fd = v->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
+    msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
 
-    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.iotlb.iova,
+    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.asid, msg.iotlb.iova,
                                msg.iotlb.size, msg.iotlb.type);
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
@@ -229,8 +241,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     }
 
     vhost_vdpa_iotlb_batch_begin_once(v);
-    ret = vhost_vdpa_dma_map(v, iova, int128_get64(llsize),
-                             vaddr, section->readonly);
+    ret = vhost_vdpa_dma_map(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+                             int128_get64(llsize), vaddr, section->readonly);
     if (ret) {
         error_report("vhost vdpa map fail!");
         goto fail_map;
@@ -303,7 +315,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
         vhost_iova_tree_remove(v->iova_tree, *result);
     }
     vhost_vdpa_iotlb_batch_begin_once(v);
-    ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
+    ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+                               int128_get64(llsize));
     if (ret) {
         error_report("vhost_vdpa dma unmap error!");
     }
@@ -884,7 +897,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
     }
 
     size = ROUND_UP(result->size, qemu_real_host_page_size());
-    r = vhost_vdpa_dma_unmap(v, result->iova, size);
+    r = vhost_vdpa_dma_unmap(v, v->address_space_id, result->iova, size);
     if (unlikely(r < 0)) {
         error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
         return;
@@ -924,7 +937,8 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
         return false;
     }
 
-    r = vhost_vdpa_dma_map(v, needle->iova, needle->size + 1,
+    r = vhost_vdpa_dma_map(v, v->address_space_id, needle->iova,
+                           needle->size + 1,
                            (void *)(uintptr_t)needle->translated_addr,
                            needle->perm == IOMMU_RO);
     if (unlikely(r != 0)) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index dd9cea42d0..89b01fcaec 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -258,7 +258,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
         return;
     }
 
-    r = vhost_vdpa_dma_unmap(v, map->iova, map->size + 1);
+    r = vhost_vdpa_dma_unmap(v, v->address_space_id, map->iova, map->size + 1);
     if (unlikely(r != 0)) {
         error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
     }
@@ -298,8 +298,8 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
         return r;
     }
 
-    r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
-                           !write);
+    r = vhost_vdpa_dma_map(v, v->address_space_id, map.iova,
+                           vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
     if (unlikely(r < 0)) {
         goto dma_map_err;
     }
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 820dadc26c..0ad9390307 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -30,8 +30,8 @@ vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
 vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"
 
 # vhost-vdpa.c
-vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
-vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
+vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
+vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
 vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_region_add(void *vdpa, uint64_t iova, uint64_t llend, void *vaddr, bool readonly) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64" vaddr: %p read-only: %d"
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH for 8.0 v7 08/10] vdpa: Store x-svq parameter in VhostVDPAState
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
                   ` (6 preceding siblings ...)
  2022-11-16 15:05 ` [PATCH for 8.0 v7 07/10] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-17  6:15   ` Jason Wang
  2022-11-16 15:05 ` [PATCH for 8.0 v7 09/10] vdpa: Add shadow_data to vhost_vdpa Eugenio Pérez
  2022-11-16 15:05 ` [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible Eugenio Pérez
  9 siblings, 1 reply; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

CVQ can be shadowed in two ways:
- The device has the x-svq=on parameter (current way)
- The device can isolate CVQ in its own vq group

QEMU needs to check for the second condition dynamically, because the
CVQ index is not known at initialization time. Since this is dynamic,
the CVQ isolation could vary under different conditions, making it
possible to go from "not isolated group" to "isolated".

Save the cmdline parameter in an extra field so we never disable CVQ
shadowing in case the device was started with x-svq=on on the cmdline.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 89b01fcaec..5185ac7042 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -38,6 +38,8 @@ typedef struct VhostVDPAState {
     void *cvq_cmd_out_buffer;
     virtio_net_ctrl_ack *status;
 
+    /* The device always has SVQ enabled */
+    bool always_svq;
     bool started;
 } VhostVDPAState;
 
@@ -566,6 +568,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
 
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
+    s->always_svq = svq;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
     s->vhost_vdpa.iova_tree = iova_tree;
     if (!is_datapath) {
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH for 8.0 v7 09/10] vdpa: Add shadow_data to vhost_vdpa
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
                   ` (7 preceding siblings ...)
  2022-11-16 15:05 ` [PATCH for 8.0 v7 08/10] vdpa: Store x-svq parameter in VhostVDPAState Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-17  6:24   ` Jason Wang
  2022-11-16 15:05 ` [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible Eugenio Pérez
  9 siblings, 1 reply; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

The memory listener that tells the device how to convert GPA to QEMU's
VA is registered against the CVQ vhost_vdpa. Memory listener
translations are always in ASID 0; CVQ ones are in ASID 1 if supported.

Let's tell the listener whether it needs to register its translations
on the iova tree or not.
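The ASID split the series converges on can be summarized with a small
sketch. This is illustrative code, not part of the patch: the macro
values come from patches 07 and 10, but `cvq_buf_asid` and its
parameters are hypothetical names for the decision the real code makes
across `net_vhost_vdpa_init` and `vhost_vdpa_net_cvq_start`.

```c
#include <assert.h>
#include <stdint.h>

#define VHOST_VDPA_GUEST_PA_ASID 0  /* guest memory (and SVQ vrings with x-svq=on) */
#define VHOST_VDPA_NET_CVQ_ASID  1  /* isolated CVQ buffers, patch 10 */

/*
 * Pick the ASID used for CVQ buffer mappings.  Only when the device can
 * isolate CVQ in its own vq group *and* the user did not force x-svq=on
 * does CVQ move to its own address space; otherwise everything stays in
 * ASID 0, which maps GPA (or SVQ addresses) to QEMU's VA.
 */
static uint32_t cvq_buf_asid(int always_svq, int cvq_isolated)
{
    if (!always_svq && cvq_isolated) {
        return VHOST_VDPA_NET_CVQ_ASID;   /* shadow only CVQ */
    }
    return VHOST_VDPA_GUEST_PA_ASID;      /* one shared address space */
}
```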

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v7: Rename listener_shadow_vq to shadow_data
v5: Solve conflict about vhost_iova_tree_remove accepting mem_region by
    value.
---
 include/hw/virtio/vhost-vdpa.h | 2 ++
 hw/virtio/vhost-vdpa.c         | 6 +++---
 net/vhost-vdpa.c               | 1 +
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index e57dfa1fd1..45b969a311 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -40,6 +40,8 @@ typedef struct vhost_vdpa {
     struct vhost_vdpa_iova_range iova_range;
     uint64_t acked_features;
     bool shadow_vqs_enabled;
+    /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
+    bool shadow_data;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
     GPtrArray *shadow_vqs;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 1e4e1cb523..852baf8b2c 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -224,7 +224,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                          vaddr, section->readonly);
 
     llsize = int128_sub(llend, int128_make64(iova));
-    if (v->shadow_vqs_enabled) {
+    if (v->shadow_data) {
         int r;
 
         mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
@@ -251,7 +251,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     return;
 
 fail_map:
-    if (v->shadow_vqs_enabled) {
+    if (v->shadow_data) {
         vhost_iova_tree_remove(v->iova_tree, mem_region);
     }
 
@@ -296,7 +296,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
 
     llsize = int128_sub(llend, int128_make64(iova));
 
-    if (v->shadow_vqs_enabled) {
+    if (v->shadow_data) {
         const DMAMap *result;
         const void *vaddr = memory_region_get_ram_ptr(section->mr) +
             section->offset_within_region +
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 5185ac7042..a9c864741a 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -570,6 +570,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.index = queue_pair_index;
     s->always_svq = svq;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
+    s->vhost_vdpa.shadow_data = svq;
     s->vhost_vdpa.iova_tree = iova_tree;
     if (!is_datapath) {
         s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible
  2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
                   ` (8 preceding siblings ...)
  2022-11-16 15:05 ` [PATCH for 8.0 v7 09/10] vdpa: Add shadow_data to vhost_vdpa Eugenio Pérez
@ 2022-11-16 15:05 ` Eugenio Pérez
  2022-11-17  6:51   ` Jason Wang
  9 siblings, 1 reply; 20+ messages in thread
From: Eugenio Pérez @ 2022-11-16 15:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Jason Wang, Parav Pandit, Si-Wei Liu, Zhu Lingshan,
	Laurent Vivier, Michael S. Tsirkin, kvm

Isolate the control virtqueue in its own group, allowing QEMU to
intercept control commands while letting the dataplane run fully
passthrough to the guest.
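The runtime check this patch adds can be reduced to one predicate: the
last virtqueue (CVQ) must sit in a vq group that no data virtqueue
shares. Below is a pure-logic sketch of that check; the function name
and array-based interface are illustrative, while the real patch
queries each group via the VHOST_VDPA_GET_VRING_GROUP ioctl.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * groups[i] is the vq group of virtqueue i; the last entry is CVQ.
 * Returns true iff CVQ's group is not shared with any data vq, i.e.
 * the device can place CVQ in its own address space.
 */
static bool cvq_group_is_isolated(const int64_t *groups, size_t nvqs)
{
    int64_t cvq_group = groups[nvqs - 1];

    for (size_t i = 0; i + 1 < nvqs; ++i) {
        if (groups[i] == cvq_group) {
            return false; /* a data vq shares CVQ's group */
        }
    }
    return true;
}
```

When the predicate holds, the patch assigns the CVQ group to its own
ASID with VHOST_VDPA_SET_GROUP_ASID and enables shadowing for CVQ only;
when it fails, QEMU just warns and leaves everything in ASID 0.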

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v7:
* Never ask for number of address spaces, just react if isolation is not
  possible.
* Return ASID ioctl errors instead of masking them as if the device has
  no asid.
* Simplify net_init_vhost_vdpa logic
* Add "if possible" suffix

v6:
* Disable control SVQ if the device does not support it because of
  features.

v5:
* Fix not adding CVQ buffers when x-svq=on is specified.
* Move vring state in vhost_vdpa_get_vring_group instead of using a
  parameter.
* Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID

v4:
* Squash vhost_vdpa_cvq_group_is_independent.
* Rebased on last CVQ start series, that allocated CVQ cmd bufs at load
* Do not check for cvq index on vhost_vdpa_net_prepare, we only have one
  that callback registered in that NetClientInfo.

v3:
* Make asid related queries print a warning instead of returning an
  error and stop the start of qemu.
---
 hw/virtio/vhost-vdpa.c |   3 +-
 net/vhost-vdpa.c       | 117 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 114 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 852baf8b2c..a29a18a6a9 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -653,7 +653,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
 {
     uint64_t features;
     uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
-        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
+        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
+        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
     int r;
 
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index a9c864741a..dc13a49311 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -101,6 +101,8 @@ static const uint64_t vdpa_svq_device_features =
     BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
     BIT_ULL(VIRTIO_NET_F_STANDBY);
 
+#define VHOST_VDPA_NET_CVQ_ASID 1
+
 VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -242,6 +244,40 @@ static NetClientInfo net_vhost_vdpa_info = {
         .check_peer_type = vhost_vdpa_check_peer_type,
 };
 
+static int64_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
+{
+    struct vhost_vring_state state = {
+        .index = vq_index,
+    };
+    int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
+
+    if (unlikely(r < 0)) {
+        error_report("Cannot get VQ %u group: %s", vq_index,
+                     g_strerror(errno));
+        return r;
+    }
+
+    return state.num;
+}
+
+static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
+                                           unsigned vq_group,
+                                           unsigned asid_num)
+{
+    struct vhost_vring_state asid = {
+        .index = vq_group,
+        .num = asid_num,
+    };
+    int r;
+
+    r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
+    if (unlikely(r < 0)) {
+        error_report("Can't set vq group %u asid %u, errno=%d (%s)",
+                     asid.index, asid.num, errno, g_strerror(errno));
+    }
+    return r;
+}
+
 static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
 {
     VhostIOVATree *tree = v->iova_tree;
@@ -316,11 +352,69 @@ dma_map_err:
 static int vhost_vdpa_net_cvq_start(NetClientState *nc)
 {
     VhostVDPAState *s;
-    int r;
+    struct vhost_vdpa *v;
+    uint64_t backend_features;
+    int64_t cvq_group;
+    int cvq_index, r;
 
     assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
 
     s = DO_UPCAST(VhostVDPAState, nc, nc);
+    v = &s->vhost_vdpa;
+
+    v->shadow_data = s->always_svq;
+    v->shadow_vqs_enabled = s->always_svq;
+    s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
+
+    if (s->always_svq) {
+        goto out;
+    }
+
+    /* Backend features are not available in v->dev yet. */
+    r = ioctl(v->device_fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
+    if (unlikely(r < 0)) {
+        error_report("Cannot get vdpa backend_features: %s(%d)",
+            g_strerror(errno), errno);
+        return -1;
+    }
+    if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
+        !vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {
+        return 0;
+    }
+
+    /**
+     * Check if all the data virtqueues of the virtio device are in a
+     * different vq group than the last vq (CVQ), whose group is stored
+     * in cvq_group.
+     */
+    cvq_index = v->dev->vq_index_end - 1;
+    cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
+    if (unlikely(cvq_group < 0)) {
+        return cvq_group;
+    }
+    for (int i = 0; i < cvq_index; ++i) {
+        int64_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
+
+        if (unlikely(group < 0)) {
+            return group;
+        }
+
+        if (unlikely(group == cvq_group)) {
+            warn_report(
+                "CVQ %"PRId64" group is the same as VQ %d one (%"PRId64")",
+                cvq_group, i, group);
+            return 0;
+        }
+    }
+
+    r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
+    if (unlikely(r < 0)) {
+        return r;
+    } else {
+        v->shadow_vqs_enabled = true;
+        s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
+    }
+
+out:
     if (!s->vhost_vdpa.shadow_vqs_enabled) {
         return 0;
     }
@@ -652,6 +746,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     g_autoptr(VhostIOVATree) iova_tree = NULL;
     NetClientState *nc;
     int queue_pairs, r, i = 0, has_cvq = 0;
+    bool svq_cvq;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -693,12 +788,24 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return queue_pairs;
     }
 
-    if (opts->x_svq) {
-        struct vhost_vdpa_iova_range iova_range;
+    svq_cvq = opts->x_svq || has_cvq;
+    if (svq_cvq) {
+        Error *warn = NULL;
 
-        if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
-            goto err_svq;
+        svq_cvq = vhost_vdpa_net_valid_svq_features(features,
+                                                   opts->x_svq ? errp : &warn);
+        if (!svq_cvq) {
+            if (opts->x_svq) {
+                goto err_svq;
+            } else {
+                warn_reportf_err(warn, "Cannot shadow CVQ: ");
+            }
         }
+    }
+
+    if (svq_cvq) {
+        /* Allocate a common iova tree if there is a possibility of SVQ */
+        struct vhost_vdpa_iova_range iova_range;
 
         vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
         iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 01/10] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop
  2022-11-16 15:05 ` [PATCH for 8.0 v7 01/10] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop Eugenio Pérez
@ 2022-11-17  5:43   ` Jason Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Jason Wang @ 2022-11-17  5:43 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cornelia Huck, Gautam Dawar, Eli Cohen,
	Stefano Garzarella, Harpreet Singh Anand, Paolo Bonzini,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm

On Wed, Nov 16, 2022 at 11:06 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> This function used to trust in v->shadow_vqs != NULL to know if it must
> start svq or not.
>
> This is not going to be valid anymore, as qemu is going to allocate svq
> unconditionally (but it will only start them conditionally).
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  hw/virtio/vhost-vdpa.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 7468e44b87..7f0ff4df5b 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -1029,7 +1029,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
>      Error *err = NULL;
>      unsigned i;
>
> -    if (!v->shadow_vqs) {
> +    if (!v->shadow_vqs_enabled) {
>          return true;
>      }
>
> @@ -1082,7 +1082,7 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
>  {
>      struct vhost_vdpa *v = dev->opaque;
>
> -    if (!v->shadow_vqs) {
> +    if (!v->shadow_vqs_enabled) {
>          return;
>      }
>
> --
> 2.31.1
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 05/10] vdpa: move SVQ vring features check to net/
  2022-11-16 15:05 ` [PATCH for 8.0 v7 05/10] vdpa: move SVQ vring features check to net/ Eugenio Pérez
@ 2022-11-17  5:44   ` Jason Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Jason Wang @ 2022-11-17  5:44 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cornelia Huck, Gautam Dawar, Eli Cohen,
	Stefano Garzarella, Harpreet Singh Anand, Paolo Bonzini,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm

On Wed, Nov 16, 2022 at 11:06 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> The next patches will start control SVQ if possible. However, we don't
> know if that will be possible at qemu boot anymore.
>
> Since the moved checks will be already evaluated at net/ to know if it
> is ok to shadow CVQ, move them.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  hw/virtio/vhost-vdpa.c | 33 ++-------------------------------
>  net/vhost-vdpa.c       |  3 ++-
>  2 files changed, 4 insertions(+), 32 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 3df2775760..146f0dcb40 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -402,29 +402,9 @@ static int vhost_vdpa_get_dev_features(struct vhost_dev *dev,
>      return ret;
>  }
>
> -static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
> -                               Error **errp)
> +static void vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v)
>  {
>      g_autoptr(GPtrArray) shadow_vqs = NULL;
> -    uint64_t dev_features, svq_features;
> -    int r;
> -    bool ok;
> -
> -    if (!v->shadow_vqs_enabled) {
> -        return 0;
> -    }
> -
> -    r = vhost_vdpa_get_dev_features(hdev, &dev_features);
> -    if (r != 0) {
> -        error_setg_errno(errp, -r, "Can't get vdpa device features");
> -        return r;
> -    }
> -
> -    svq_features = dev_features;
> -    ok = vhost_svq_valid_features(svq_features, errp);
> -    if (unlikely(!ok)) {
> -        return -1;
> -    }
>
>      shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
>      for (unsigned n = 0; n < hdev->nvqs; ++n) {
> @@ -436,7 +416,6 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
>      }
>
>      v->shadow_vqs = g_steal_pointer(&shadow_vqs);
> -    return 0;
>  }
>
>  static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> @@ -461,11 +440,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
>      dev->opaque =  opaque ;
>      v->listener = vhost_vdpa_memory_listener;
>      v->msg_type = VHOST_IOTLB_MSG_V2;
> -    ret = vhost_vdpa_init_svq(dev, v, errp);
> -    if (ret) {
> -        goto err;
> -    }
> -
> +    vhost_vdpa_init_svq(dev, v);
>      vhost_vdpa_get_iova_range(v);
>
>      if (!vhost_vdpa_first_dev(dev)) {
> @@ -476,10 +451,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
>                                 VIRTIO_CONFIG_S_DRIVER);
>
>      return 0;
> -
> -err:
> -    ram_block_discard_disable(false);
> -    return ret;
>  }
>
>  static void vhost_vdpa_host_notifier_uninit(struct vhost_dev *dev,
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index e98d5f5eac..dd9cea42d0 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -117,9 +117,10 @@ static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
>      if (invalid_dev_features) {
>          error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
>                     invalid_dev_features);
> +        return false;
>      }
>
> -    return !invalid_dev_features;
> +    return vhost_svq_valid_features(features, errp);
>  }
>
>  static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
> --
> 2.31.1
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 06/10] vdpa: Allocate SVQ unconditionally
  2022-11-16 15:05 ` [PATCH for 8.0 v7 06/10] vdpa: Allocate SVQ unconditionally Eugenio Pérez
@ 2022-11-17  5:44   ` Jason Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Jason Wang @ 2022-11-17  5:44 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cornelia Huck, Gautam Dawar, Eli Cohen,
	Stefano Garzarella, Harpreet Singh Anand, Paolo Bonzini,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm

On Wed, Nov 16, 2022 at 11:06 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> SVQ may run or not in a device depending on runtime conditions (for
> example, if the device can move CVQ to its own group or not).
>
> Allocate the SVQ array unconditionally at startup, since it's hard to
> move this allocation elsewhere.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  hw/virtio/vhost-vdpa.c | 4 ----
>  1 file changed, 4 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 146f0dcb40..23efb8f49d 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -547,10 +547,6 @@ static void vhost_vdpa_svq_cleanup(struct vhost_dev *dev)
>      struct vhost_vdpa *v = dev->opaque;
>      size_t idx;
>
> -    if (!v->shadow_vqs) {
> -        return;
> -    }
> -
>      for (idx = 0; idx < v->shadow_vqs->len; ++idx) {
>          vhost_svq_stop(g_ptr_array_index(v->shadow_vqs, idx));
>      }
> --
> 2.31.1
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 07/10] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap
  2022-11-16 15:05 ` [PATCH for 8.0 v7 07/10] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap Eugenio Pérez
@ 2022-11-17  5:45   ` Jason Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Jason Wang @ 2022-11-17  5:45 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cornelia Huck, Gautam Dawar, Eli Cohen,
	Stefano Garzarella, Harpreet Singh Anand, Paolo Bonzini,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm

On Wed, Nov 16, 2022 at 11:06 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> So the caller can choose the destination ASID.
>
> No need to update the batch functions as they will always be called from
> memory listener updates at the moment. Memory listener updates will
> always update ASID 0, as it's the passthrough ASID.
>
> All vhost devices's ASID are 0 at this moment.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
> v7:
> * Move comment on zero initialization of vhost_vdpa_dma_map above the
>   functions.
> * Add VHOST_VDPA_GUEST_PA_ASID macro.
>
> v5:
> * Solve conflict, now vhost_vdpa_svq_unmap_ring returns void
> * Change comment on zero initialization.
>
> v4: Add comment specifying behavior if device does not support _F_ASID
>
> v3: Deleted unneeded space
> ---
>  include/hw/virtio/vhost-vdpa.h | 14 ++++++++++---
>  hw/virtio/vhost-vdpa.c         | 36 +++++++++++++++++++++++-----------
>  net/vhost-vdpa.c               |  6 +++---
>  hw/virtio/trace-events         |  4 ++--
>  4 files changed, 41 insertions(+), 19 deletions(-)
>
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index 1111d85643..e57dfa1fd1 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -19,6 +19,12 @@
>  #include "hw/virtio/virtio.h"
>  #include "standard-headers/linux/vhost_types.h"
>
> +/*
> + * ASID dedicated to map guest's addresses.  If SVQ is disabled it maps GPA to
> + * qemu's IOVA.  If SVQ is enabled it maps also the SVQ vring here
> + */
> +#define VHOST_VDPA_GUEST_PA_ASID 0
> +
>  typedef struct VhostVDPAHostNotifier {
>      MemoryRegion mr;
>      void *addr;
> @@ -29,6 +35,7 @@ typedef struct vhost_vdpa {
>      int index;
>      uint32_t msg_type;
>      bool iotlb_batch_begin_sent;
> +    uint32_t address_space_id;
>      MemoryListener listener;
>      struct vhost_vdpa_iova_range iova_range;
>      uint64_t acked_features;
> @@ -42,8 +49,9 @@ typedef struct vhost_vdpa {
>      VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
>  } VhostVDPA;
>
> -int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> -                       void *vaddr, bool readonly);
> -int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size);
> +int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> +                       hwaddr size, void *vaddr, bool readonly);
> +int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> +                         hwaddr size);
>
>  #endif
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 23efb8f49d..1e4e1cb523 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -72,22 +72,28 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
>      return false;
>  }
>
> -int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> -                       void *vaddr, bool readonly)
> +/*
> + * The caller must set asid = 0 if the device does not support asid.
> + * This is not an ABI break since it is set to 0 by the initializer anyway.
> + */
> +int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> +                       hwaddr size, void *vaddr, bool readonly)
>  {
>      struct vhost_msg_v2 msg = {};
>      int fd = v->device_fd;
>      int ret = 0;
>
>      msg.type = v->msg_type;
> +    msg.asid = asid;
>      msg.iotlb.iova = iova;
>      msg.iotlb.size = size;
>      msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
>      msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
>      msg.iotlb.type = VHOST_IOTLB_UPDATE;
>
> -   trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.iotlb.iova, msg.iotlb.size,
> -                            msg.iotlb.uaddr, msg.iotlb.perm, msg.iotlb.type);
> +    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.asid, msg.iotlb.iova,
> +                             msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
> +                             msg.iotlb.type);
>
>      if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
>          error_report("failed to write, fd=%d, errno=%d (%s)",
> @@ -98,18 +104,24 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
>      return ret;
>  }
>
> -int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size)
> +/*
> + * The caller must set asid = 0 if the device does not support asid.
> + * This is not an ABI break since it is set to 0 by the initializer anyway.
> + */
> +int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
> +                         hwaddr size)
>  {
>      struct vhost_msg_v2 msg = {};
>      int fd = v->device_fd;
>      int ret = 0;
>
>      msg.type = v->msg_type;
> +    msg.asid = asid;
>      msg.iotlb.iova = iova;
>      msg.iotlb.size = size;
>      msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
>
> -    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.iotlb.iova,
> +    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.asid, msg.iotlb.iova,
>                                 msg.iotlb.size, msg.iotlb.type);
>
>      if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
> @@ -229,8 +241,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
>      }
>
>      vhost_vdpa_iotlb_batch_begin_once(v);
> -    ret = vhost_vdpa_dma_map(v, iova, int128_get64(llsize),
> -                             vaddr, section->readonly);
> +    ret = vhost_vdpa_dma_map(v, VHOST_VDPA_GUEST_PA_ASID, iova,
> +                             int128_get64(llsize), vaddr, section->readonly);
>      if (ret) {
>          error_report("vhost vdpa map fail!");
>          goto fail_map;
> @@ -303,7 +315,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
>          vhost_iova_tree_remove(v->iova_tree, *result);
>      }
>      vhost_vdpa_iotlb_batch_begin_once(v);
> -    ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
> +    ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
> +                               int128_get64(llsize));
>      if (ret) {
>          error_report("vhost_vdpa dma unmap error!");
>      }
> @@ -884,7 +897,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
>      }
>
>      size = ROUND_UP(result->size, qemu_real_host_page_size());
> -    r = vhost_vdpa_dma_unmap(v, result->iova, size);
> +    r = vhost_vdpa_dma_unmap(v, v->address_space_id, result->iova, size);
>      if (unlikely(r < 0)) {
>          error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
>          return;
> @@ -924,7 +937,8 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
>          return false;
>      }
>
> -    r = vhost_vdpa_dma_map(v, needle->iova, needle->size + 1,
> +    r = vhost_vdpa_dma_map(v, v->address_space_id, needle->iova,
> +                           needle->size + 1,
>                             (void *)(uintptr_t)needle->translated_addr,
>                             needle->perm == IOMMU_RO);
>      if (unlikely(r != 0)) {
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index dd9cea42d0..89b01fcaec 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -258,7 +258,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
>          return;
>      }
>
> -    r = vhost_vdpa_dma_unmap(v, map->iova, map->size + 1);
> +    r = vhost_vdpa_dma_unmap(v, v->address_space_id, map->iova, map->size + 1);
>      if (unlikely(r != 0)) {
>          error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
>      }
> @@ -298,8 +298,8 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
>          return r;
>      }
>
> -    r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
> -                           !write);
> +    r = vhost_vdpa_dma_map(v, v->address_space_id, map.iova,
> +                           vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
>      if (unlikely(r < 0)) {
>          goto dma_map_err;
>      }
> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> index 820dadc26c..0ad9390307 100644
> --- a/hw/virtio/trace-events
> +++ b/hw/virtio/trace-events
> @@ -30,8 +30,8 @@ vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
>  vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"
>
>  # vhost-vdpa.c
> -vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
> -vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
> +vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
> +vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
>  vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
>  vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
>  vhost_vdpa_listener_region_add(void *vdpa, uint64_t iova, uint64_t llend, void *vaddr, bool readonly) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64" vaddr: %p read-only: %d"
> --
> 2.31.1
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 08/10] vdpa: Store x-svq parameter in VhostVDPAState
  2022-11-16 15:05 ` [PATCH for 8.0 v7 08/10] vdpa: Store x-svq parameter in VhostVDPAState Eugenio Pérez
@ 2022-11-17  6:15   ` Jason Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Jason Wang @ 2022-11-17  6:15 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm


在 2022/11/16 23:05, Eugenio Pérez 写道:
> CVQ can be shadowed two ways:
> - Device has x-svq=on parameter (current way)
> - The device can isolate CVQ in its own vq group
>
> QEMU needs to check for the second condition dynamically, because CVQ
> index is not known at initialization time. Since this is dynamic, the
> CVQ isolation could vary with different conditions, making it possible
> to go from "not isolated group" to "isolated".
>
> Save the cmdline parameter in an extra field so we never disable CVQ
> SVQ in case the device was started with it on the cmdline.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>


Acked-by: Jason Wang <jasowang@redhat.com>

Thanks


> ---
>   net/vhost-vdpa.c | 3 +++
>   1 file changed, 3 insertions(+)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 89b01fcaec..5185ac7042 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -38,6 +38,8 @@ typedef struct VhostVDPAState {
>       void *cvq_cmd_out_buffer;
>       virtio_net_ctrl_ack *status;
>   
> +    /* The device always has SVQ enabled */
> +    bool always_svq;
>       bool started;
>   } VhostVDPAState;
>   
> @@ -566,6 +568,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>   
>       s->vhost_vdpa.device_fd = vdpa_device_fd;
>       s->vhost_vdpa.index = queue_pair_index;
> +    s->always_svq = svq;
>       s->vhost_vdpa.shadow_vqs_enabled = svq;
>       s->vhost_vdpa.iova_tree = iova_tree;
>       if (!is_datapath) {


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 09/10] vdpa: Add shadow_data to vhost_vdpa
  2022-11-16 15:05 ` [PATCH for 8.0 v7 09/10] vdpa: Add shadow_data to vhost_vdpa Eugenio Pérez
@ 2022-11-17  6:24   ` Jason Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Jason Wang @ 2022-11-17  6:24 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm


在 2022/11/16 23:05, Eugenio Pérez 写道:
> The memory listener that tells the device how to convert GPA to qemu's
> va is registered against CVQ vhost_vdpa. Memory listener translations
> are always ASID 0, CVQ ones are ASID 1 if supported.
>
> Let's tell the listener whether it needs to register them on the iova
> tree or not.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>


Acked-by: Jason Wang <jasowang@redhat.com>

Thanks


> ---
> v7: Rename listener_shadow_vq to shadow_data
> v5: Solve conflict about vhost_iova_tree_remove accepting mem_region by
>      value.
> ---
>   include/hw/virtio/vhost-vdpa.h | 2 ++
>   hw/virtio/vhost-vdpa.c         | 6 +++---
>   net/vhost-vdpa.c               | 1 +
>   3 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index e57dfa1fd1..45b969a311 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -40,6 +40,8 @@ typedef struct vhost_vdpa {
>       struct vhost_vdpa_iova_range iova_range;
>       uint64_t acked_features;
>       bool shadow_vqs_enabled;
> +    /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
> +    bool shadow_data;
>       /* IOVA mapping used by the Shadow Virtqueue */
>       VhostIOVATree *iova_tree;
>       GPtrArray *shadow_vqs;
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 1e4e1cb523..852baf8b2c 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -224,7 +224,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
>                                            vaddr, section->readonly);
>   
>       llsize = int128_sub(llend, int128_make64(iova));
> -    if (v->shadow_vqs_enabled) {
> +    if (v->shadow_data) {
>           int r;
>   
>           mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
> @@ -251,7 +251,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
>       return;
>   
>   fail_map:
> -    if (v->shadow_vqs_enabled) {
> +    if (v->shadow_data) {
>           vhost_iova_tree_remove(v->iova_tree, mem_region);
>       }
>   
> @@ -296,7 +296,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
>   
>       llsize = int128_sub(llend, int128_make64(iova));
>   
> -    if (v->shadow_vqs_enabled) {
> +    if (v->shadow_data) {
>           const DMAMap *result;
>           const void *vaddr = memory_region_get_ram_ptr(section->mr) +
>               section->offset_within_region +
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 5185ac7042..a9c864741a 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -570,6 +570,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>       s->vhost_vdpa.index = queue_pair_index;
>       s->always_svq = svq;
>       s->vhost_vdpa.shadow_vqs_enabled = svq;
> +    s->vhost_vdpa.shadow_data = svq;
>       s->vhost_vdpa.iova_tree = iova_tree;
>       if (!is_datapath) {
>           s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible
  2022-11-16 15:05 ` [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible Eugenio Pérez
@ 2022-11-17  6:51   ` Jason Wang
  2022-11-17  7:43     ` Eugenio Perez Martin
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Wang @ 2022-11-17  6:51 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Cornelia Huck, Gautam Dawar, Eli Cohen, Stefano Garzarella,
	Harpreet Singh Anand, Paolo Bonzini, Stefan Hajnoczi, Cindy Lu,
	Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm


在 2022/11/16 23:05, Eugenio Pérez 写道:
> Isolate the control virtqueue in its own group, allowing QEMU to intercept
> control commands while letting the dataplane run fully passthrough to the
> guest.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
> v7:
> * Never ask for number of address spaces, just react if isolation is not
>    possible.
> * Return ASID ioctl errors instead of masking them as if the device has
>    no asid.
> * Simplify net_init_vhost_vdpa logic
> * Add "if possible" suffix
>
> v6:
> * Disable control SVQ if the device does not support it because of
> features.
>
> v5:
> * Fixing the not adding cvq buffers when x-svq=on is specified.
> * Move vring state in vhost_vdpa_get_vring_group instead of using a
>    parameter.
> * Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID
>
> v4:
> * Squash vhost_vdpa_cvq_group_is_independent.
> * Rebased on last CVQ start series, that allocated CVQ cmd bufs at load
> * Do not check for cvq index on vhost_vdpa_net_prepare, we only have one
>    that callback registered in that NetClientInfo.
>
> v3:
> * Make asid related queries print a warning instead of returning an
>    error and stop the start of qemu.
> ---
>   hw/virtio/vhost-vdpa.c |   3 +-
>   net/vhost-vdpa.c       | 117 +++++++++++++++++++++++++++++++++++++++--
>   2 files changed, 114 insertions(+), 6 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 852baf8b2c..a29a18a6a9 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -653,7 +653,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
>   {
>       uint64_t features;
>       uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
> -        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
> +        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
> +        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
>       int r;
>   
>       if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index a9c864741a..dc13a49311 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -101,6 +101,8 @@ static const uint64_t vdpa_svq_device_features =
>       BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
>       BIT_ULL(VIRTIO_NET_F_STANDBY);
>   
> +#define VHOST_VDPA_NET_CVQ_ASID 1
> +
>   VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
>   {
>       VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> @@ -242,6 +244,40 @@ static NetClientInfo net_vhost_vdpa_info = {
>           .check_peer_type = vhost_vdpa_check_peer_type,
>   };
>   
> +static int64_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
> +{
> +    struct vhost_vring_state state = {
> +        .index = vq_index,
> +    };
> +    int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
> +
> +    if (unlikely(r < 0)) {
> +        error_report("Cannot get VQ %u group: %s", vq_index,
> +                     g_strerror(errno));
> +        return r;
> +    }
> +
> +    return state.num;
> +}
> +
> +static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
> +                                           unsigned vq_group,
> +                                           unsigned asid_num)
> +{
> +    struct vhost_vring_state asid = {
> +        .index = vq_group,
> +        .num = asid_num,
> +    };
> +    int r;
> +
> +    r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
> +    if (unlikely(r < 0)) {
> +        error_report("Can't set vq group %u asid %u, errno=%d (%s)",
> +                     asid.index, asid.num, errno, g_strerror(errno));
> +    }
> +    return r;
> +}
> +
>   static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
>   {
>       VhostIOVATree *tree = v->iova_tree;
> @@ -316,11 +352,69 @@ dma_map_err:
>   static int vhost_vdpa_net_cvq_start(NetClientState *nc)
>   {
>       VhostVDPAState *s;
> -    int r;
> +    struct vhost_vdpa *v;
> +    uint64_t backend_features;
> +    int64_t cvq_group;
> +    int cvq_index, r;
>   
>       assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>   
>       s = DO_UPCAST(VhostVDPAState, nc, nc);
> +    v = &s->vhost_vdpa;
> +
> +    v->shadow_data = s->always_svq;
> +    v->shadow_vqs_enabled = s->always_svq;
> +    s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
> +
> +    if (s->always_svq) {
> +        goto out;
> +    }
> +
> +    /* Backend features are not available in v->dev yet. */
> +    r = ioctl(v->device_fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
> +    if (unlikely(r < 0)) {
> +        error_report("Cannot get vdpa backend_features: %s(%d)",
> +            g_strerror(errno), errno);
> +        return -1;
> +    }
> +    if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
> +        !vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {


I think there should be some logic to block migration in this case?


> +        return 0;
> +    }
> +
> +    /**
> +     * Check if all the virtqueues of the virtio device are in a different
> +     * vq group than the last vq. The group of the last vq is passed in
> +     * cvq_group.
> +     */
> +    cvq_index = v->dev->vq_index_end - 1;
> +    cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
> +    if (unlikely(cvq_group < 0)) {
> +        return cvq_group;
> +    }
> +    for (int i = 0; i < cvq_index; ++i) {
> +        int64_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
> +
> +        if (unlikely(group < 0)) {
> +            return group;
> +        }
> +
> +        if (unlikely(group == cvq_group)) {
> +            warn_report(
> +                "CVQ %"PRId64" group is the same as VQ %d one (%"PRId64")",
> +                cvq_group, i, group);
> +            return 0;


Ditto.


> +        }
> +    }
> +
> +    r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
> +    if (unlikely(r < 0)) {
> +        return r;
> +    } else {
> +        v->shadow_vqs_enabled = true;
> +        s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
> +    }
> +
> +out:
>       if (!s->vhost_vdpa.shadow_vqs_enabled) {
>           return 0;
>       }
> @@ -652,6 +746,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>       g_autoptr(VhostIOVATree) iova_tree = NULL;
>       NetClientState *nc;
>       int queue_pairs, r, i = 0, has_cvq = 0;
> +    bool svq_cvq;
>   
>       assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>       opts = &netdev->u.vhost_vdpa;
> @@ -693,12 +788,24 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>           return queue_pairs;
>       }
>   
> -    if (opts->x_svq) {
> -        struct vhost_vdpa_iova_range iova_range;
> +    svq_cvq = opts->x_svq || has_cvq;
> +    if (svq_cvq) {
> +        Error *warn = NULL;
>   
> -        if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
> -            goto err_svq;
> +        svq_cvq = vhost_vdpa_net_valid_svq_features(features,
> +                                                   opts->x_svq ? errp : &warn);
> +        if (!svq_cvq) {
> +            if (opts->x_svq) {
> +                goto err_svq;
> +            } else {
> +                warn_reportf_err(warn, "Cannot shadow CVQ: ");


This seems suspicious; we reach here when we can't just use SVQ for CVQ.



> +            }
>           }


The above logic is not easy to follow. I guess the root cause is the
variable name; it looks to me like svq_cvq would be better renamed to "svq"?

Thanks


> +    }
> +
> +    if (svq_cvq) {
> +        /* Allocate a common iova tree if there is a possibility of SVQ */
> +        struct vhost_vdpa_iova_range iova_range;
>   
>           vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
>           iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible
  2022-11-17  6:51   ` Jason Wang
@ 2022-11-17  7:43     ` Eugenio Perez Martin
  2022-11-17  8:12       ` Eugenio Perez Martin
  0 siblings, 1 reply; 20+ messages in thread
From: Eugenio Perez Martin @ 2022-11-17  7:43 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Cornelia Huck, Gautam Dawar, Eli Cohen,
	Stefano Garzarella, Harpreet Singh Anand, Paolo Bonzini,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm

On Thu, Nov 17, 2022 at 7:52 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> 在 2022/11/16 23:05, Eugenio Pérez 写道:
> > Isolate the control virtqueue in its own group, allowing QEMU to intercept
> > control commands while letting the dataplane run fully passthrough to the
> > guest.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> > v7:
> > * Never ask for number of address spaces, just react if isolation is not
> >    possible.
> > * Return ASID ioctl errors instead of masking them as if the device has
> >    no asid.
> > * Simplify net_init_vhost_vdpa logic
> > * Add "if possible" suffix
> >
> > v6:
> > * Disable control SVQ if the device does not support it because of
> > features.
> >
> > v5:
> > * Fixing the not adding cvq buffers when x-svq=on is specified.
> > * Move vring state in vhost_vdpa_get_vring_group instead of using a
> >    parameter.
> > * Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID
> >
> > v4:
> > * Squash vhost_vdpa_cvq_group_is_independent.
> > * Rebased on last CVQ start series, that allocated CVQ cmd bufs at load
> > * Do not check for cvq index on vhost_vdpa_net_prepare, we only have one
> >    that callback registered in that NetClientInfo.
> >
> > v3:
> > * Make asid related queries print a warning instead of returning an
> >    error and stop the start of qemu.
> > ---
> >   hw/virtio/vhost-vdpa.c |   3 +-
> >   net/vhost-vdpa.c       | 117 +++++++++++++++++++++++++++++++++++++++--
> >   2 files changed, 114 insertions(+), 6 deletions(-)
> >
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index 852baf8b2c..a29a18a6a9 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -653,7 +653,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
> >   {
> >       uint64_t features;
> >       uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
> > -        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
> > +        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
> > +        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
> >       int r;
> >
> >       if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index a9c864741a..dc13a49311 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -101,6 +101,8 @@ static const uint64_t vdpa_svq_device_features =
> >       BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
> >       BIT_ULL(VIRTIO_NET_F_STANDBY);
> >
> > +#define VHOST_VDPA_NET_CVQ_ASID 1
> > +
> >   VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
> >   {
> >       VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > @@ -242,6 +244,40 @@ static NetClientInfo net_vhost_vdpa_info = {
> >           .check_peer_type = vhost_vdpa_check_peer_type,
> >   };
> >
> > +static int64_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
> > +{
> > +    struct vhost_vring_state state = {
> > +        .index = vq_index,
> > +    };
> > +    int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
> > +
> > +    if (unlikely(r < 0)) {
> > +        error_report("Cannot get VQ %u group: %s", vq_index,
> > +                     g_strerror(errno));
> > +        return r;
> > +    }
> > +
> > +    return state.num;
> > +}
> > +
> > +static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
> > +                                           unsigned vq_group,
> > +                                           unsigned asid_num)
> > +{
> > +    struct vhost_vring_state asid = {
> > +        .index = vq_group,
> > +        .num = asid_num,
> > +    };
> > +    int r;
> > +
> > +    r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
> > +    if (unlikely(r < 0)) {
> > +        error_report("Can't set vq group %u asid %u, errno=%d (%s)",
> > +                     asid.index, asid.num, errno, g_strerror(errno));
> > +    }
> > +    return r;
> > +}
> > +
> >   static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
> >   {
> >       VhostIOVATree *tree = v->iova_tree;
> > @@ -316,11 +352,69 @@ dma_map_err:
> >   static int vhost_vdpa_net_cvq_start(NetClientState *nc)
> >   {
> >       VhostVDPAState *s;
> > -    int r;
> > +    struct vhost_vdpa *v;
> > +    uint64_t backend_features;
> > +    int64_t cvq_group;
> > +    int cvq_index, r;
> >
> >       assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> >
> >       s = DO_UPCAST(VhostVDPAState, nc, nc);
> > +    v = &s->vhost_vdpa;
> > +
> > +    v->shadow_data = s->always_svq;
> > +    v->shadow_vqs_enabled = s->always_svq;
> > +    s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
> > +
> > +    if (s->always_svq) {
> > +        goto out;
> > +    }
> > +
> > +    /* Backend features are not available in v->dev yet. */
> > +    r = ioctl(v->device_fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
> > +    if (unlikely(r < 0)) {
> > +        error_report("Cannot get vdpa backend_features: %s(%d)",
> > +            g_strerror(errno), errno);
> > +        return -1;
> > +    }
> > +    if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
> > +        !vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {
>
>
> I think there should be some logic to block migration in this case?
>

Since the vhost_vdpa virtqueues are not shadowed, they don't expose
_F_LOG, so migration is blocked that way.

We can override it to make its message a little bit clearer.

>
> > +        return 0;
> > +    }
> > +
> > +    /**
> > +     * Check if all the virtqueues of the virtio device are in a different
> > +     * vq group than the last vq. The group of the last vq is passed in
> > +     * cvq_group.
> > +     */
> > +    cvq_index = v->dev->vq_index_end - 1;
> > +    cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
> > +    if (unlikely(cvq_group < 0)) {
> > +        return cvq_group;
> > +    }
> > +    for (int i = 0; i < cvq_index; ++i) {
> > +        int64_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
> > +
> > +        if (unlikely(group < 0)) {
> > +            return group;
> > +        }
> > +
> > +        if (unlikely(group == cvq_group)) {
> > +            warn_report(
> > +                "CVQ %"PRId64" group is the same as VQ %d one (%"PRId64")",
> > +                cvq_group, i, group);
> > +            return 0;
>
>
> Ditto.
>
>
> > +        }
> > +    }
> > +
> > +    r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
> > +    if (unlikely(r < 0)) {
> > +        return r;
> > +    } else {
> > +        v->shadow_vqs_enabled = true;
> > +        s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
> > +    }
> > +
> > +out:
> >       if (!s->vhost_vdpa.shadow_vqs_enabled) {
> >           return 0;
> >       }
> > @@ -652,6 +746,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >       g_autoptr(VhostIOVATree) iova_tree = NULL;
> >       NetClientState *nc;
> >       int queue_pairs, r, i = 0, has_cvq = 0;
> > +    bool svq_cvq;
> >
> >       assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> >       opts = &netdev->u.vhost_vdpa;
> > @@ -693,12 +788,24 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >           return queue_pairs;
> >       }
> >
> > -    if (opts->x_svq) {
> > -        struct vhost_vdpa_iova_range iova_range;
> > +    svq_cvq = opts->x_svq || has_cvq;
> > +    if (svq_cvq) {
> > +        Error *warn = NULL;
> >
> > -        if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
> > -            goto err_svq;
> > +        svq_cvq = vhost_vdpa_net_valid_svq_features(features,
> > +                                                   opts->x_svq ? errp : &warn);
> > +        if (!svq_cvq) {
> > +            if (opts->x_svq) {
> > +                goto err_svq;
> > +            } else {
> > +                warn_reportf_err(warn, "Cannot shadow CVQ: ");
>
>
> This seems suspicious; we reach here when we can't just use SVQ for CVQ.
>

Right, but what is the suspicious part?

>
>
> > +            }
> >           }
>
>
> The above logic is not easy to follow. I guess the root cause is the
> variable name; it looks to me like svq_cvq would be better renamed to "svq"?
>

Yes, we can rename it. I'll send a newer version with the rename.

Thanks!

> Thanks
>
>
> > +    }
> > +
> > +    if (svq_cvq) {
> > +        /* Allocate a common iova tree if there is a possibility of SVQ */
> > +        struct vhost_vdpa_iova_range iova_range;
> >
> >           vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> >           iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible
  2022-11-17  7:43     ` Eugenio Perez Martin
@ 2022-11-17  8:12       ` Eugenio Perez Martin
  0 siblings, 0 replies; 20+ messages in thread
From: Eugenio Perez Martin @ 2022-11-17  8:12 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Cornelia Huck, Gautam Dawar, Eli Cohen,
	Stefano Garzarella, Harpreet Singh Anand, Paolo Bonzini,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong, Gonglei (Arei),
	Parav Pandit, Si-Wei Liu, Zhu Lingshan, Laurent Vivier,
	Michael S. Tsirkin, kvm

On Thu, Nov 17, 2022 at 8:43 AM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Thu, Nov 17, 2022 at 7:52 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > 在 2022/11/16 23:05, Eugenio Pérez 写道:
> > > Isolate the control virtqueue in its own group, allowing QEMU to intercept
> > > control commands while letting the dataplane run fully passthrough to the
> > > guest.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > > v7:
> > > * Never ask for number of address spaces, just react if isolation is not
> > >    possible.
> > > * Return ASID ioctl errors instead of masking them as if the device has
> > >    no asid.
> > > * Simplify net_init_vhost_vdpa logic
> > > * Add "if possible" suffix
> > >
> > > v6:
> > > * Disable control SVQ if the device does not support it because of
> > > features.
> > >
> > > v5:
> > > * Fixing the not adding cvq buffers when x-svq=on is specified.
> > > * Move vring state in vhost_vdpa_get_vring_group instead of using a
> > >    parameter.
> > > * Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID
> > >
> > > v4:
> > > * Squash vhost_vdpa_cvq_group_is_independent.
> > > * Rebased on last CVQ start series, that allocated CVQ cmd bufs at load
> > > * Do not check for cvq index on vhost_vdpa_net_prepare, we only have one
> > >    that callback registered in that NetClientInfo.
> > >
> > > v3:
> > > * Make asid related queries print a warning instead of returning an
> > >    error and stop the start of qemu.
> > > ---
> > >   hw/virtio/vhost-vdpa.c |   3 +-
> > >   net/vhost-vdpa.c       | 117 +++++++++++++++++++++++++++++++++++++++--
> > >   2 files changed, 114 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > > index 852baf8b2c..a29a18a6a9 100644
> > > --- a/hw/virtio/vhost-vdpa.c
> > > +++ b/hw/virtio/vhost-vdpa.c
> > > @@ -653,7 +653,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
> > >   {
> > >       uint64_t features;
> > >       uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
> > > -        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
> > > +        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
> > > +        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
> > >       int r;
> > >
> > >       if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
> > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > index a9c864741a..dc13a49311 100644
> > > --- a/net/vhost-vdpa.c
> > > +++ b/net/vhost-vdpa.c
> > > @@ -101,6 +101,8 @@ static const uint64_t vdpa_svq_device_features =
> > >       BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
> > >       BIT_ULL(VIRTIO_NET_F_STANDBY);
> > >
> > > +#define VHOST_VDPA_NET_CVQ_ASID 1
> > > +
> > >   VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
> > >   {
> > >       VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > @@ -242,6 +244,40 @@ static NetClientInfo net_vhost_vdpa_info = {
> > >           .check_peer_type = vhost_vdpa_check_peer_type,
> > >   };
> > >
> > > +static int64_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
> > > +{
> > > +    struct vhost_vring_state state = {
> > > +        .index = vq_index,
> > > +    };
> > > +    int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
> > > +
> > > +    if (unlikely(r < 0)) {
> > > +        error_report("Cannot get VQ %u group: %s", vq_index,
> > > +                     g_strerror(errno));
> > > +        return r;
> > > +    }
> > > +
> > > +    return state.num;
> > > +}
> > > +
> > > +static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
> > > +                                           unsigned vq_group,
> > > +                                           unsigned asid_num)
> > > +{
> > > +    struct vhost_vring_state asid = {
> > > +        .index = vq_group,
> > > +        .num = asid_num,
> > > +    };
> > > +    int r;
> > > +
> > > +    r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
> > > +    if (unlikely(r < 0)) {
> > > +        error_report("Can't set vq group %u asid %u, errno=%d (%s)",
> > > +                     asid.index, asid.num, errno, g_strerror(errno));
> > > +    }
> > > +    return r;
> > > +}
> > > +
> > >   static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
> > >   {
> > >       VhostIOVATree *tree = v->iova_tree;
> > > @@ -316,11 +352,69 @@ dma_map_err:
> > >   static int vhost_vdpa_net_cvq_start(NetClientState *nc)
> > >   {
> > >       VhostVDPAState *s;
> > > -    int r;
> > > +    struct vhost_vdpa *v;
> > > +    uint64_t backend_features;
> > > +    int64_t cvq_group;
> > > +    int cvq_index, r;
> > >
> > >       assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > >
> > >       s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > +    v = &s->vhost_vdpa;
> > > +
> > > +    v->shadow_data = s->always_svq;
> > > +    v->shadow_vqs_enabled = s->always_svq;
> > > +    s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
> > > +
> > > +    if (s->always_svq) {
> > > +        goto out;
> > > +    }
> > > +
> > > +    /* Backend features are not available in v->dev yet. */
> > > +    r = ioctl(v->device_fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
> > > +    if (unlikely(r < 0)) {
> > > +        error_report("Cannot get vdpa backend_features: %s(%d)",
> > > +            g_strerror(errno), errno);
> > > +        return -1;
> > > +    }
> > > +    if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
> > > +        !vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {
> >
> >
> > I think there should be some logic to block migration in this case?
> >
>
> Since the vhost_vdpa virtqueues are not shadowed, they don't expose
> _F_LOG, so migration is blocked that way.
>
> We can override it to make its message a little bit clearer.
>
> >
> > > +        return 0;
> > > +    }
> > > +
> > > +    /**
> > > +     * Check if all the virtqueues of the virtio device are in a different
> > > +     * vq group than the last vq. The group of the last vq is passed in
> > > +     * cvq_group.
> > > +     */
> > > +    cvq_index = v->dev->vq_index_end - 1;
> > > +    cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
> > > +    if (unlikely(cvq_group < 0)) {
> > > +        return cvq_group;
> > > +    }
> > > +    for (int i = 0; i < cvq_index; ++i) {
> > > +        int64_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
> > > +
> > > +        if (unlikely(group < 0)) {
> > > +            return group;
> > > +        }
> > > +
> > > +        if (unlikely(group == cvq_group)) {
> > > +            warn_report(
> > > +                "CVQ %"PRId64" group is the same as VQ %d one (%"PRId64")",
> > > +                cvq_group, i, group);
> > > +            return 0;
> >
> >
> > Ditto.
> >
> >
> > > +        }
> > > +    }
> > > +
> > > +    r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
> > > +    if (unlikely(r < 0)) {
> > > +        return r;
> > > +    } else {
> > > +        v->shadow_vqs_enabled = true;
> > > +        s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
> > > +    }
> > > +
> > > +out:
> > >       if (!s->vhost_vdpa.shadow_vqs_enabled) {
> > >           return 0;
> > >       }
> > > @@ -652,6 +746,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > >       g_autoptr(VhostIOVATree) iova_tree = NULL;
> > >       NetClientState *nc;
> > >       int queue_pairs, r, i = 0, has_cvq = 0;
> > > +    bool svq_cvq;
> > >
> > >       assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > >       opts = &netdev->u.vhost_vdpa;
> > > @@ -693,12 +788,24 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > >           return queue_pairs;
> > >       }
> > >
> > > -    if (opts->x_svq) {
> > > -        struct vhost_vdpa_iova_range iova_range;
> > > +    svq_cvq = opts->x_svq || has_cvq;
> > > +    if (svq_cvq) {
> > > +        Error *warn = NULL;
> > >
> > > -        if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
> > > -            goto err_svq;
> > > +        svq_cvq = vhost_vdpa_net_valid_svq_features(features,
> > > +                                                   opts->x_svq ? errp : &warn);
> > > +        if (!svq_cvq) {
> > > +            if (opts->x_svq) {
> > > +                goto err_svq;
> > > +            } else {
> > > +                warn_reportf_err(warn, "Cannot shadow CVQ: ");
> >
> >
> > This seems suspicious, we reach here when we can't just use svq for cvq.
> >
>
> Right, but what is the suspicious part?
>
> >
> >
> > > +            }
> > >           }
> >
> >
> > The above logic is not easy to follow. I guess the root cause is the
> > variable name. It looks to me svq_cvq is better to be renamed as "svq"?
> >
>
> Yes, we can rename it. I'll send a newer version with the rename.
>

Another possibility is to get rid of the variable and allocate the
iova_tree unconditionally. We could emit the warning in
vhost_vdpa_net_cvq_start, or make that message the vhost migration
blocker.

Thanks!

> Thanks!
>
> > Thanks
> >
> >
> > > +    }
> > > +
> > > +    if (svq_cvq) {
> > > +        /* Allocate a common iova tree if there is a possibility of SVQ */
> > > +        struct vhost_vdpa_iova_range iova_range;
> > >
> > >           vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> > >           iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
> >


Thread overview: 20+ messages
2022-11-16 15:05 [PATCH for 8.0 v7 00/10] ASID support in vhost-vdpa net Eugenio Pérez
2022-11-16 15:05 ` [PATCH for 8.0 v7 01/10] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop Eugenio Pérez
2022-11-17  5:43   ` Jason Wang
2022-11-16 15:05 ` [PATCH for 8.0 v7 02/10] vhost: set SVQ device call handler at SVQ start Eugenio Pérez
2022-11-16 15:05 ` [PATCH for 8.0 v7 03/10] vhost: Allocate SVQ device file descriptors at device start Eugenio Pérez
2022-11-16 15:05 ` [PATCH for 8.0 v7 04/10] vdpa: add vhost_vdpa_net_valid_svq_features Eugenio Pérez
2022-11-16 15:05 ` [PATCH for 8.0 v7 05/10] vdpa: move SVQ vring features check to net/ Eugenio Pérez
2022-11-17  5:44   ` Jason Wang
2022-11-16 15:05 ` [PATCH for 8.0 v7 06/10] vdpa: Allocate SVQ unconditionally Eugenio Pérez
2022-11-17  5:44   ` Jason Wang
2022-11-16 15:05 ` [PATCH for 8.0 v7 07/10] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap Eugenio Pérez
2022-11-17  5:45   ` Jason Wang
2022-11-16 15:05 ` [PATCH for 8.0 v7 08/10] vdpa: Store x-svq parameter in VhostVDPAState Eugenio Pérez
2022-11-17  6:15   ` Jason Wang
2022-11-16 15:05 ` [PATCH for 8.0 v7 09/10] vdpa: Add shadow_data to vhost_vdpa Eugenio Pérez
2022-11-17  6:24   ` Jason Wang
2022-11-16 15:05 ` [PATCH for 8.0 v7 10/10] vdpa: Always start CVQ in SVQ mode if possible Eugenio Pérez
2022-11-17  6:51   ` Jason Wang
2022-11-17  7:43     ` Eugenio Perez Martin
2022-11-17  8:12       ` Eugenio Perez Martin
