* [PATCH v9 00/12] ASID support in vhost-vdpa net
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

The control virtqueue (CVQ) is the way net devices send changes to the device
state, such as the number of active queues or the MAC address.

QEMU needs to intercept this queue so it can track these changes and migrate
the device. It has been able to do so since commit 1576dbb5bbc4 ("vdpa: Add
x-svq to NetdevVhostVDPAOptions"). However, enabling x-svq implies shadowing
all of the VirtIO device's virtqueues, which hurts performance.

This series adds address space isolation, so the device and the guest
communicate directly for the data virtqueues (passthrough), while CVQ
communication is split in two: the guest communicates with QEMU, and QEMU
forwards the commands to the device.
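
At the uAPI level this relies on two vhost-vdpa ioctls, used in the last
patch of the series. As a rough userspace sketch (the device path is
hypothetical, the CVQ index assumes a single data queue pair, and error
handling is omitted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);
    struct vhost_vring_state state = { .index = 2 /* CVQ index */ };

    /* Ask which virtqueue group the CVQ belongs to */
    ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &state);
    printf("CVQ is in group %u\n", state.num);

    /* Move that group to its own ASID, so CVQ can be shadowed while the
     * data virtqueues keep running passthrough in ASID 0 */
    state.index = state.num; /* now the group number */
    state.num = 1;           /* the CVQ ASID */
    ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &state);
    return 0;
}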

Comments are welcome. Thanks!

v9:
- Reuse the iova_range fetched from the device at initialization, instead of
  fetching it again at vhost_vdpa_net_cvq_start.
- Add a comment about how migration is blocked in case the ASID does not meet
  our expectations.
- Delete warning about CVQ group not being independent.

v8:
- Do not allocate iova_tree on net_init_vhost_vdpa if only CVQ is
  shadowed. Move the iova_tree allocation to
  vhost_vdpa_net_cvq_start and vhost_vdpa_net_cvq_stop in this case.

v7:
- Never ask for number of address spaces, just react if isolation is not
  possible.
- Return ASID ioctl errors instead of masking them as if the device has
  no asid.
- Rename listener_shadow_vq to shadow_data
- Move comment on zero initialization of vhost_vdpa_dma_map above the
  functions.
- Add VHOST_VDPA_GUEST_PA_ASID macro.

v6:
- Do not allocate SVQ resources like file descriptors if SVQ cannot be used.
- Disable shadow CVQ if the device does not support it because of net
  features.

v5:
- Move vring state in vhost_vdpa_get_vring_group instead of using a
  parameter.
- Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID

v4:
- Rebased on last CVQ start series, that allocated CVQ cmd bufs at load
- Squash vhost_vdpa_cvq_group_is_independent.
- Do not check for cvq index on vhost_vdpa_net_prepare; we only have one
  callback registered in that NetClientInfo.
- Add comment specifying behavior if device does not support _F_ASID
- Update headers to a later Linux commit so as not to remove SETUP_RNG_SEED

v3:
- Do not return an error but just print a warning if vdpa device initialization
  fails while getting the AS num of VQ groups
- Delete extra newline

v2:
- As commented on series [1], handle the vhost_net backend through
  NetClientInfo callbacks instead of directly.
- Fix not freeing SVQ properly when device does not support CVQ
- Add missing BIT_ULL when checking the device's backend feature for _F_ASID.

Eugenio Pérez (12):
  vdpa: use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop
  vhost: set SVQ device call handler at SVQ start
  vhost: allocate SVQ device file descriptors at device start
  vhost: move iova_tree set to vhost_svq_start
  vdpa: add vhost_vdpa_net_valid_svq_features
  vdpa: request iova_range only once
  vdpa: move SVQ vring features check to net/
  vdpa: allocate SVQ array unconditionally
  vdpa: add asid parameter to vhost_vdpa_dma_map/unmap
  vdpa: store x-svq parameter in VhostVDPAState
  vdpa: add shadow_data to vhost_vdpa
  vdpa: always start CVQ in SVQ mode if possible

 hw/virtio/vhost-shadow-virtqueue.h |   5 +-
 include/hw/virtio/vhost-vdpa.h     |  16 ++-
 hw/virtio/vhost-shadow-virtqueue.c |  44 ++------
 hw/virtio/vhost-vdpa.c             | 140 +++++++++++------------
 net/vhost-vdpa.c                   | 174 ++++++++++++++++++++++++-----
 hw/virtio/trace-events             |   4 +-
 6 files changed, 237 insertions(+), 146 deletions(-)

-- 
2.31.1




* [PATCH v9 01/12] vdpa: use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

These functions used to rely on v->shadow_vqs != NULL to know whether they
must start SVQ or not.

That check is not going to be valid anymore, as QEMU is going to allocate the
SVQ array unconditionally (but it will only start the queues conditionally).

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7468e44b87..7f0ff4df5b 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1029,7 +1029,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
     Error *err = NULL;
     unsigned i;
 
-    if (!v->shadow_vqs) {
+    if (!v->shadow_vqs_enabled) {
         return true;
     }
 
@@ -1082,7 +1082,7 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
 {
     struct vhost_vdpa *v = dev->opaque;
 
-    if (!v->shadow_vqs) {
+    if (!v->shadow_vqs_enabled) {
         return;
     }
 
-- 
2.31.1



* [PATCH v9 02/12] vhost: set SVQ device call handler at SVQ start
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

By the end of this series, CVQ is shadowed as long as the features support
it.

Since we don't know at QEMU startup whether this is supported, move the
event notifier handler setup to SVQ start instead of QEMU initialization.
This avoids installing the handler if the device does not support SVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 5bd14cad96..264ddc166d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -648,6 +648,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
 {
     size_t desc_size, driver_size, device_size;
 
+    event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
     svq->next_guest_avail_elem = NULL;
     svq->shadow_avail_idx = 0;
     svq->shadow_used_idx = 0;
@@ -704,6 +705,7 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
     g_free(svq->desc_state);
     qemu_vfree(svq->vring.desc);
     qemu_vfree(svq->vring.used);
+    event_notifier_set_handler(&svq->hdev_call, NULL);
 }
 
 /**
@@ -740,7 +742,6 @@ VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
     }
 
     event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
-    event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
     svq->iova_tree = iova_tree;
     svq->ops = ops;
     svq->ops_opaque = ops_opaque;
@@ -763,7 +764,6 @@ void vhost_svq_free(gpointer pvq)
     VhostShadowVirtqueue *vq = pvq;
     vhost_svq_stop(vq);
     event_notifier_cleanup(&vq->hdev_kick);
-    event_notifier_set_handler(&vq->hdev_call, NULL);
     event_notifier_cleanup(&vq->hdev_call);
     g_free(vq);
 }
-- 
2.31.1



* [PATCH v9 03/12] vhost: allocate SVQ device file descriptors at device start
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

The next patches will start the control SVQ if possible. However, we no
longer know at QEMU boot whether that will be possible.

Delay the creation of the device file descriptors until device start, when
we do know it. This avoids creating them if the device does not support SVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 31 ++------------------------
 hw/virtio/vhost-vdpa.c             | 35 ++++++++++++++++++++++++------
 2 files changed, 30 insertions(+), 36 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 264ddc166d..3b05bab44d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -715,43 +715,18 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
  * @iova_tree: Tree to perform descriptors translations
  * @ops: SVQ owner callbacks
  * @ops_opaque: ops opaque pointer
- *
- * Returns the new virtqueue or NULL.
- *
- * In case of error, reason is reported through error_report.
  */
 VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
                                     const VhostShadowVirtqueueOps *ops,
                                     void *ops_opaque)
 {
-    g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
-    int r;
-
-    r = event_notifier_init(&svq->hdev_kick, 0);
-    if (r != 0) {
-        error_report("Couldn't create kick event notifier: %s (%d)",
-                     g_strerror(errno), errno);
-        goto err_init_hdev_kick;
-    }
-
-    r = event_notifier_init(&svq->hdev_call, 0);
-    if (r != 0) {
-        error_report("Couldn't create call event notifier: %s (%d)",
-                     g_strerror(errno), errno);
-        goto err_init_hdev_call;
-    }
+    VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
 
     event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
     svq->iova_tree = iova_tree;
     svq->ops = ops;
     svq->ops_opaque = ops_opaque;
-    return g_steal_pointer(&svq);
-
-err_init_hdev_call:
-    event_notifier_cleanup(&svq->hdev_kick);
-
-err_init_hdev_kick:
-    return NULL;
+    return svq;
 }
 
 /**
@@ -763,7 +738,5 @@ void vhost_svq_free(gpointer pvq)
 {
     VhostShadowVirtqueue *vq = pvq;
     vhost_svq_stop(vq);
-    event_notifier_cleanup(&vq->hdev_kick);
-    event_notifier_cleanup(&vq->hdev_call);
     g_free(vq);
 }
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7f0ff4df5b..3df2775760 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -428,15 +428,11 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
 
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
-        g_autoptr(VhostShadowVirtqueue) svq;
+        VhostShadowVirtqueue *svq;
 
         svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
                             v->shadow_vq_ops_opaque);
-        if (unlikely(!svq)) {
-            error_setg(errp, "Cannot create svq %u", n);
-            return -1;
-        }
-        g_ptr_array_add(shadow_vqs, g_steal_pointer(&svq));
+        g_ptr_array_add(shadow_vqs, svq);
     }
 
     v->shadow_vqs = g_steal_pointer(&shadow_vqs);
@@ -864,11 +860,23 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
     const EventNotifier *event_notifier = &svq->hdev_kick;
     int r;
 
+    r = event_notifier_init(&svq->hdev_kick, 0);
+    if (r != 0) {
+        error_setg_errno(errp, -r, "Couldn't create kick event notifier");
+        goto err_init_hdev_kick;
+    }
+
+    r = event_notifier_init(&svq->hdev_call, 0);
+    if (r != 0) {
+        error_setg_errno(errp, -r, "Couldn't create call event notifier");
+        goto err_init_hdev_call;
+    }
+
     file.fd = event_notifier_get_fd(event_notifier);
     r = vhost_vdpa_set_vring_dev_kick(dev, &file);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Can't set device kick fd");
-        return r;
+        goto err_init_set_dev_fd;
     }
 
     event_notifier = &svq->hdev_call;
@@ -876,8 +884,18 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
     r = vhost_vdpa_set_vring_dev_call(dev, &file);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Can't set device call fd");
+        goto err_init_set_dev_fd;
     }
 
+    return 0;
+
+err_init_set_dev_fd:
+    event_notifier_set_handler(&svq->hdev_call, NULL);
+
+err_init_hdev_call:
+    event_notifier_cleanup(&svq->hdev_kick);
+
+err_init_hdev_kick:
     return r;
 }
 
@@ -1089,6 +1107,9 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
     for (unsigned i = 0; i < v->shadow_vqs->len; ++i) {
         VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
         vhost_vdpa_svq_unmap_rings(dev, svq);
+
+        event_notifier_cleanup(&svq->hdev_kick);
+        event_notifier_cleanup(&svq->hdev_call);
     }
 }
 
-- 
2.31.1



* [PATCH v9 04/12] vhost: move iova_tree set to vhost_svq_start
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

Since we don't know at QEMU initialization whether we will use SVQ, let's
allocate the iova_tree only if needed. To do so, accept it at SVQ start
instead of at initialization.

This avoids creating it if the device does not support SVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.h | 5 ++---
 hw/virtio/vhost-shadow-virtqueue.c | 9 ++++-----
 hw/virtio/vhost-vdpa.c             | 5 ++---
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index d04c34a589..926a4897b1 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -126,11 +126,10 @@ size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq);
 size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq);
 
 void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
-                     VirtQueue *vq);
+                     VirtQueue *vq, VhostIOVATree *iova_tree);
 void vhost_svq_stop(VhostShadowVirtqueue *svq);
 
-VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
-                                    const VhostShadowVirtqueueOps *ops,
+VhostShadowVirtqueue *vhost_svq_new(const VhostShadowVirtqueueOps *ops,
                                     void *ops_opaque);
 
 void vhost_svq_free(gpointer vq);
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 3b05bab44d..4307296358 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -642,9 +642,10 @@ void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd)
  * @svq: Shadow Virtqueue
  * @vdev: VirtIO device
  * @vq: Virtqueue to shadow
+ * @iova_tree: Tree to perform descriptors translations
  */
 void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
-                     VirtQueue *vq)
+                     VirtQueue *vq, VhostIOVATree *iova_tree)
 {
     size_t desc_size, driver_size, device_size;
 
@@ -655,6 +656,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
     svq->last_used_idx = 0;
     svq->vdev = vdev;
     svq->vq = vq;
+    svq->iova_tree = iova_tree;
 
     svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
     driver_size = vhost_svq_driver_area_size(svq);
@@ -712,18 +714,15 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
  * Creates vhost shadow virtqueue, and instructs the vhost device to use the
  * shadow methods and file descriptors.
  *
- * @iova_tree: Tree to perform descriptors translations
  * @ops: SVQ owner callbacks
  * @ops_opaque: ops opaque pointer
  */
-VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
-                                    const VhostShadowVirtqueueOps *ops,
+VhostShadowVirtqueue *vhost_svq_new(const VhostShadowVirtqueueOps *ops,
                                     void *ops_opaque)
 {
     VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
 
     event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
-    svq->iova_tree = iova_tree;
     svq->ops = ops;
     svq->ops_opaque = ops_opaque;
     return svq;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 3df2775760..691bcc811a 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -430,8 +430,7 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
         VhostShadowVirtqueue *svq;
 
-        svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
-                            v->shadow_vq_ops_opaque);
+        svq = vhost_svq_new(v->shadow_vq_ops, v->shadow_vq_ops_opaque);
         g_ptr_array_add(shadow_vqs, svq);
     }
 
@@ -1063,7 +1062,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
             goto err;
         }
 
-        vhost_svq_start(svq, dev->vdev, vq);
+        vhost_svq_start(svq, dev->vdev, vq, v->iova_tree);
         ok = vhost_vdpa_svq_map_rings(dev, svq, &addr, &err);
         if (unlikely(!ok)) {
             goto err_map;
-- 
2.31.1



* [PATCH v9 05/12] vdpa: add vhost_vdpa_net_valid_svq_features
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

It will be reused at vdpa device start, so let's extract it into its own
function.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 260e474863..2c0ff6d7b0 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -107,6 +107,22 @@ VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
     return s->vhost_net;
 }
 
+static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
+{
+    uint64_t invalid_dev_features =
+        features & ~vdpa_svq_device_features &
+        /* Transport are all accepted at this point */
+        ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
+                         VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
+
+    if (invalid_dev_features) {
+        error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
+                   invalid_dev_features);
+    }
+
+    return !invalid_dev_features;
+}
+
 static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
 {
     uint32_t device_id;
@@ -676,15 +692,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     if (opts->x_svq) {
         struct vhost_vdpa_iova_range iova_range;
 
-        uint64_t invalid_dev_features =
-            features & ~vdpa_svq_device_features &
-            /* Transport are all accepted at this point */
-            ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
-                             VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
-
-        if (invalid_dev_features) {
-            error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
-                       invalid_dev_features);
+        if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
             goto err_svq;
         }
 
-- 
2.31.1



* [PATCH v9 06/12] vdpa: request iova_range only once
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

Currently the iova range is requested once per queue pair in the net case.
Reduce the number of ioctls by asking for it once at initialization and
reusing that value for each vhost_vdpa.
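
As an illustrative sketch of the single query that now happens once per
device (the fd is a placeholder, errors are ignored, and the helper name is
made up):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Print the usable IOVA window of a vdpa device */
static void print_iova_range(int device_fd)
{
    struct vhost_vdpa_iova_range range;

    if (ioctl(device_fd, VHOST_VDPA_GET_IOVA_RANGE, &range) == 0) {
        printf("usable IOVA range: [0x%llx, 0x%llx]\n",
               (unsigned long long)range.first,
               (unsigned long long)range.last);
    }
}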

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 15 ---------------
 net/vhost-vdpa.c       | 27 ++++++++++++++-------------
 2 files changed, 14 insertions(+), 28 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 691bcc811a..9b7f4ef083 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -365,19 +365,6 @@ static int vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
     return 0;
 }
 
-static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
-{
-    int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
-                              &v->iova_range);
-    if (ret != 0) {
-        v->iova_range.first = 0;
-        v->iova_range.last = UINT64_MAX;
-    }
-
-    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
-                                    v->iova_range.last);
-}
-
 /*
  * The use of this function is for requests that only need to be
  * applied once. Typically such request occurs at the beginning
@@ -465,8 +452,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
         goto err;
     }
 
-    vhost_vdpa_get_iova_range(v);
-
     if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 2c0ff6d7b0..b6462f0192 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -541,14 +541,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
 };
 
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
-                                           const char *device,
-                                           const char *name,
-                                           int vdpa_device_fd,
-                                           int queue_pair_index,
-                                           int nvqs,
-                                           bool is_datapath,
-                                           bool svq,
-                                           VhostIOVATree *iova_tree)
+                                       const char *device,
+                                       const char *name,
+                                       int vdpa_device_fd,
+                                       int queue_pair_index,
+                                       int nvqs,
+                                       bool is_datapath,
+                                       bool svq,
+                                       struct vhost_vdpa_iova_range iova_range,
+                                       VhostIOVATree *iova_tree)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
@@ -567,6 +568,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
+    s->vhost_vdpa.iova_range = iova_range;
     s->vhost_vdpa.iova_tree = iova_tree;
     if (!is_datapath) {
         s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
@@ -646,6 +648,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     int vdpa_device_fd;
     g_autofree NetClientState **ncs = NULL;
     g_autoptr(VhostIOVATree) iova_tree = NULL;
+    struct vhost_vdpa_iova_range iova_range;
     NetClientState *nc;
     int queue_pairs, r, i = 0, has_cvq = 0;
 
@@ -689,14 +692,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return queue_pairs;
     }
 
+    vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
     if (opts->x_svq) {
-        struct vhost_vdpa_iova_range iova_range;
-
         if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
             goto err_svq;
         }
 
-        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
         iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
     }
 
@@ -705,7 +706,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     for (i = 0; i < queue_pairs; i++) {
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                      vdpa_device_fd, i, 2, true, opts->x_svq,
-                                     iova_tree);
+                                     iova_range, iova_tree);
         if (!ncs[i])
             goto err;
     }
@@ -713,7 +714,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     if (has_cvq) {
         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                  vdpa_device_fd, i, 1, false,
-                                 opts->x_svq, iova_tree);
+                                 opts->x_svq, iova_range, iova_tree);
         if (!nc)
             goto err;
     }
-- 
2.31.1



* [PATCH v9 07/12] vdpa: move SVQ vring features check to net/
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

The next patches will start the control SVQ if possible. However, we no
longer know at QEMU boot whether that will be possible.

Since the moved checks are already evaluated at net/ to know whether it is
OK to shadow CVQ, move them there.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 32 ++------------------------------
 net/vhost-vdpa.c       |  3 ++-
 2 files changed, 4 insertions(+), 31 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 9b7f4ef083..5039d9bb2f 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -389,29 +389,9 @@ static int vhost_vdpa_get_dev_features(struct vhost_dev *dev,
     return ret;
 }
 
-static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
-                               Error **errp)
+static void vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v)
 {
     g_autoptr(GPtrArray) shadow_vqs = NULL;
-    uint64_t dev_features, svq_features;
-    int r;
-    bool ok;
-
-    if (!v->shadow_vqs_enabled) {
-        return 0;
-    }
-
-    r = vhost_vdpa_get_dev_features(hdev, &dev_features);
-    if (r != 0) {
-        error_setg_errno(errp, -r, "Can't get vdpa device features");
-        return r;
-    }
-
-    svq_features = dev_features;
-    ok = vhost_svq_valid_features(svq_features, errp);
-    if (unlikely(!ok)) {
-        return -1;
-    }
 
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
@@ -422,7 +402,6 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
     }
 
     v->shadow_vqs = g_steal_pointer(&shadow_vqs);
-    return 0;
 }
 
 static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
@@ -447,10 +426,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
     dev->opaque =  opaque ;
     v->listener = vhost_vdpa_memory_listener;
     v->msg_type = VHOST_IOTLB_MSG_V2;
-    ret = vhost_vdpa_init_svq(dev, v, errp);
-    if (ret) {
-        goto err;
-    }
+    vhost_vdpa_init_svq(dev, v);
 
     if (!vhost_vdpa_first_dev(dev)) {
         return 0;
@@ -460,10 +436,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
                                VIRTIO_CONFIG_S_DRIVER);
 
     return 0;
-
-err:
-    ram_block_discard_disable(false);
-    return ret;
 }
 
 static void vhost_vdpa_host_notifier_uninit(struct vhost_dev *dev,
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index b6462f0192..e829ef1f43 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -118,9 +118,10 @@ static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
     if (invalid_dev_features) {
         error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
                    invalid_dev_features);
+        return false;
     }
 
-    return !invalid_dev_features;
+    return vhost_svq_valid_features(features, errp);
 }
 
 static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
-- 
2.31.1



* [PATCH v9 08/12] vdpa: allocate SVQ array unconditionally
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

SVQ may or may not run in a device depending on runtime conditions (for
example, whether the device can move CVQ to its own group or not).

Allocate the SVQ array unconditionally at startup, since it's hard to move
this allocation elsewhere.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 5039d9bb2f..86e1fa8e9e 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -532,10 +532,6 @@ static void vhost_vdpa_svq_cleanup(struct vhost_dev *dev)
     struct vhost_vdpa *v = dev->opaque;
     size_t idx;
 
-    if (!v->shadow_vqs) {
-        return;
-    }
-
     for (idx = 0; idx < v->shadow_vqs->len; ++idx) {
         vhost_svq_stop(g_ptr_array_index(v->shadow_vqs, idx));
     }
-- 
2.31.1



* [PATCH v9 09/12] vdpa: add asid parameter to vhost_vdpa_dma_map/unmap
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

Add an asid parameter so the caller can choose which ASID the mapping is
destined to.

No need to update the batch functions, as they will always be called from
memory listener updates at the moment. Memory listener updates will always
update ASID 0, as it's the passthrough ASID.

All vhost devices' ASIDs are 0 at this moment.
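
For illustration, a minimal sketch of the IOTLB message that
vhost_vdpa_dma_map() below ends up writing to the device fd (the helper name
is made up, the arguments are placeholders, and error handling is omitted):

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <linux/vhost_types.h>

/* Map [iova, iova + size) to vaddr in the given address space id */
static void dma_map_sketch(int device_fd, uint32_t asid, uint64_t iova,
                           uint64_t size, void *vaddr)
{
    struct vhost_msg_v2 msg;

    memset(&msg, 0, sizeof(msg));
    msg.type = VHOST_IOTLB_MSG_V2;
    msg.asid = asid;             /* 0 is the passthrough (guest PA) ASID */
    msg.iotlb.iova = iova;
    msg.iotlb.size = size;
    msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
    msg.iotlb.perm = VHOST_ACCESS_RW;
    msg.iotlb.type = VHOST_IOTLB_UPDATE;

    write(device_fd, &msg, sizeof(msg));
}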

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
v7:
* Move comment on zero initialization of vhost_vdpa_dma_map above the
  functions.
* Add VHOST_VDPA_GUEST_PA_ASID macro.

v5:
* Solve conflict, now vhost_vdpa_svq_unmap_ring returns void
* Change comment on zero initialization.

v4: Add comment specifying behavior if device does not support _F_ASID

v3: Deleted unneeded space
---
 include/hw/virtio/vhost-vdpa.h | 14 ++++++++++---
 hw/virtio/vhost-vdpa.c         | 36 +++++++++++++++++++++++-----------
 net/vhost-vdpa.c               |  6 +++---
 hw/virtio/trace-events         |  4 ++--
 4 files changed, 41 insertions(+), 19 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 1111d85643..e57dfa1fd1 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -19,6 +19,12 @@
 #include "hw/virtio/virtio.h"
 #include "standard-headers/linux/vhost_types.h"
 
+/*
+ * ASID dedicated to map guest's addresses.  If SVQ is disabled it maps GPA to
+ * qemu's IOVA.  If SVQ is enabled it maps also the SVQ vring here
+ */
+#define VHOST_VDPA_GUEST_PA_ASID 0
+
 typedef struct VhostVDPAHostNotifier {
     MemoryRegion mr;
     void *addr;
@@ -29,6 +35,7 @@ typedef struct vhost_vdpa {
     int index;
     uint32_t msg_type;
     bool iotlb_batch_begin_sent;
+    uint32_t address_space_id;
     MemoryListener listener;
     struct vhost_vdpa_iova_range iova_range;
     uint64_t acked_features;
@@ -42,8 +49,9 @@ typedef struct vhost_vdpa {
     VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
 } VhostVDPA;
 
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
-                       void *vaddr, bool readonly);
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size);
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                       hwaddr size, void *vaddr, bool readonly);
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                         hwaddr size);
 
 #endif
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 86e1fa8e9e..5e591a8fda 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -72,22 +72,28 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
     return false;
 }
 
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
-                       void *vaddr, bool readonly)
+/*
+ * The caller must set asid = 0 if the device does not support asid.
+ * This is not an ABI break since it is set to 0 by the initializer anyway.
+ */
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                       hwaddr size, void *vaddr, bool readonly)
 {
     struct vhost_msg_v2 msg = {};
     int fd = v->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
+    msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
     msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
     msg.iotlb.type = VHOST_IOTLB_UPDATE;
 
-   trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.iotlb.iova, msg.iotlb.size,
-                            msg.iotlb.uaddr, msg.iotlb.perm, msg.iotlb.type);
+    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.asid, msg.iotlb.iova,
+                             msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
+                             msg.iotlb.type);
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -98,18 +104,24 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
     return ret;
 }
 
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size)
+/*
+ * The caller must set asid = 0 if the device does not support asid.
+ * This is not an ABI break since it is set to 0 by the initializer anyway.
+ */
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                         hwaddr size)
 {
     struct vhost_msg_v2 msg = {};
     int fd = v->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
+    msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
 
-    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.iotlb.iova,
+    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.asid, msg.iotlb.iova,
                                msg.iotlb.size, msg.iotlb.type);
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
@@ -229,8 +241,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     }
 
     vhost_vdpa_iotlb_batch_begin_once(v);
-    ret = vhost_vdpa_dma_map(v, iova, int128_get64(llsize),
-                             vaddr, section->readonly);
+    ret = vhost_vdpa_dma_map(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+                             int128_get64(llsize), vaddr, section->readonly);
     if (ret) {
         error_report("vhost vdpa map fail!");
         goto fail_map;
@@ -303,7 +315,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
         vhost_iova_tree_remove(v->iova_tree, *result);
     }
     vhost_vdpa_iotlb_batch_begin_once(v);
-    ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
+    ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+                               int128_get64(llsize));
     if (ret) {
         error_report("vhost_vdpa dma unmap error!");
     }
@@ -869,7 +882,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
     }
 
     size = ROUND_UP(result->size, qemu_real_host_page_size());
-    r = vhost_vdpa_dma_unmap(v, result->iova, size);
+    r = vhost_vdpa_dma_unmap(v, v->address_space_id, result->iova, size);
     if (unlikely(r < 0)) {
         error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
         return;
@@ -909,7 +922,8 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
         return false;
     }
 
-    r = vhost_vdpa_dma_map(v, needle->iova, needle->size + 1,
+    r = vhost_vdpa_dma_map(v, v->address_space_id, needle->iova,
+                           needle->size + 1,
                            (void *)(uintptr_t)needle->translated_addr,
                            needle->perm == IOMMU_RO);
     if (unlikely(r != 0)) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e829ef1f43..a592ee07ec 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -259,7 +259,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
         return;
     }
 
-    r = vhost_vdpa_dma_unmap(v, map->iova, map->size + 1);
+    r = vhost_vdpa_dma_unmap(v, v->address_space_id, map->iova, map->size + 1);
     if (unlikely(r != 0)) {
         error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
     }
@@ -299,8 +299,8 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
         return r;
     }
 
-    r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
-                           !write);
+    r = vhost_vdpa_dma_map(v, v->address_space_id, map.iova,
+                           vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
     if (unlikely(r < 0)) {
         goto dma_map_err;
     }
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 14fc5b9bb2..96da58a41f 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -30,8 +30,8 @@ vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
 vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"
 
 # vhost-vdpa.c
-vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
-vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
+vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
+vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
 vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_region_add(void *vdpa, uint64_t iova, uint64_t llend, void *vaddr, bool readonly) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64" vaddr: %p read-only: %d"
-- 
2.31.1



* [PATCH v9 10/12] vdpa: store x-svq parameter in VhostVDPAState
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

CVQ can be shadowed in two ways:
- The device has the x-svq=on parameter (current way)
- The device can isolate CVQ in its own vq group

QEMU needs to check for the second condition dynamically, because the CVQ
index is not known before the driver acks the features. Since this is
dynamic, the CVQ isolation could vary under different conditions, making it
possible to go from "not isolated group" to "isolated".

Save the cmdline parameter in an extra field so we never disable CVQ SVQ in
case the device was started with the x-svq cmdline option.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index a592ee07ec..bff72717d0 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -38,6 +38,8 @@ typedef struct VhostVDPAState {
     void *cvq_cmd_out_buffer;
     virtio_net_ctrl_ack *status;
 
+    /* The device always has SVQ enabled */
+    bool always_svq;
     bool started;
 } VhostVDPAState;
 
@@ -568,6 +570,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
 
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
+    s->always_svq = svq;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
     s->vhost_vdpa.iova_range = iova_range;
     s->vhost_vdpa.iova_tree = iova_tree;
-- 
2.31.1



* [PATCH v9 11/12] vdpa: add shadow_data to vhost_vdpa
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

The memory listener that tells the device how to convert GPA to QEMU's VA
is registered against the CVQ vhost_vdpa. Memory listener translations are
always in ASID 0; CVQ ones are in ASID 1 if supported.

Let's tell the listener whether it needs to register its translations on the
iova tree or not.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
v7: Rename listener_shadow_vq to shadow_data
v5: Solve conflict about vhost_iova_tree_remove accepting mem_region by
    value.
---
 include/hw/virtio/vhost-vdpa.h | 2 ++
 hw/virtio/vhost-vdpa.c         | 6 +++---
 net/vhost-vdpa.c               | 1 +
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index e57dfa1fd1..45b969a311 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -40,6 +40,8 @@ typedef struct vhost_vdpa {
     struct vhost_vdpa_iova_range iova_range;
     uint64_t acked_features;
     bool shadow_vqs_enabled;
+    /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
+    bool shadow_data;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
     GPtrArray *shadow_vqs;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 5e591a8fda..48d8c60e76 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -224,7 +224,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                          vaddr, section->readonly);
 
     llsize = int128_sub(llend, int128_make64(iova));
-    if (v->shadow_vqs_enabled) {
+    if (v->shadow_data) {
         int r;
 
         mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
@@ -251,7 +251,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     return;
 
 fail_map:
-    if (v->shadow_vqs_enabled) {
+    if (v->shadow_data) {
         vhost_iova_tree_remove(v->iova_tree, mem_region);
     }
 
@@ -296,7 +296,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
 
     llsize = int128_sub(llend, int128_make64(iova));
 
-    if (v->shadow_vqs_enabled) {
+    if (v->shadow_data) {
         const DMAMap *result;
         const void *vaddr = memory_region_get_ram_ptr(section->mr) +
             section->offset_within_region +
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index bff72717d0..710c5efe96 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -573,6 +573,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->always_svq = svq;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
     s->vhost_vdpa.iova_range = iova_range;
+    s->vhost_vdpa.shadow_data = svq;
     s->vhost_vdpa.iova_tree = iova_tree;
     if (!is_datapath) {
         s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
-- 
2.31.1



* [PATCH v9 12/12] vdpa: always start CVQ in SVQ mode if possible
From: Eugenio Pérez @ 2022-12-15 11:31 UTC
  To: qemu-devel
  Cc: Liuxiangdong, Stefano Garzarella, Zhu Lingshan, Si-Wei Liu,
	Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Jason Wang, Michael S. Tsirkin, Cindy Lu,
	Gautam Dawar, Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

Isolate the control virtqueue in its own vq group, allowing QEMU to
intercept control commands while letting the dataplane run totally
passthrough to the guest.
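
As a rough standalone sketch of the isolation check performed in
vhost_vdpa_net_cvq_start() below (the helper names are made up and error
handling is minimal):

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int64_t get_group(int device_fd, unsigned vq_index)
{
    struct vhost_vring_state state = { .index = vq_index };

    if (ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state) < 0) {
        return -1;
    }
    return state.num;
}

/* CVQ can be isolated only if no data virtqueue shares its group */
static bool cvq_group_is_isolated(int device_fd, unsigned cvq_index)
{
    int64_t cvq_group = get_group(device_fd, cvq_index);

    if (cvq_group < 0) {
        return false;
    }
    for (unsigned i = 0; i < cvq_index; i++) {
        if (get_group(device_fd, i) == cvq_group) {
            return false;
        }
    }
    return true;
}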

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v9:
* Reuse the iova_range fetched from the device at initialization, instead of
  fetching it again at vhost_vdpa_net_cvq_start.
* Add a comment about how migration is blocked in case the ASID does not meet
  our expectations.
* Delete warning about CVQ group not being independent.

v8:
* Do not allocate iova_tree on net_init_vhost_vdpa if only CVQ is
  shadowed. Move the iova_tree handling in this case to
  vhost_vdpa_net_cvq_start and vhost_vdpa_net_cvq_stop.

v7:
* Never ask for number of address spaces, just react if isolation is not
  possible.
* Return ASID ioctl errors instead of masking them as if the device has
  no asid.
* Simplify net_init_vhost_vdpa logic
* Add "if possible" suffix

v6:
* Disable control SVQ if the device does not support it because of
features.

v5:
* Fix not adding the cvq buffers when x-svq=on is specified.
* Move vring state in vhost_vdpa_get_vring_group instead of using a
  parameter.
* Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID

v4:
* Squash vhost_vdpa_cvq_group_is_independent.
* Rebased on last CVQ start series, that allocated CVQ cmd bufs at load
* Do not check for cvq index on vhost_vdpa_net_prepare; we only have one
  callback registered in that NetClientInfo.

v3:
* Make asid related queries print a warning instead of returning an
  error and stopping the start of QEMU.
---
 hw/virtio/vhost-vdpa.c |   3 +-
 net/vhost-vdpa.c       | 110 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 111 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 48d8c60e76..8cd00f5a96 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -638,7 +638,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
 {
     uint64_t features;
     uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
-        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
+        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
+        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
     int r;
 
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 710c5efe96..d36664f33a 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -102,6 +102,8 @@ static const uint64_t vdpa_svq_device_features =
     BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
     BIT_ULL(VIRTIO_NET_F_STANDBY);
 
+#define VHOST_VDPA_NET_CVQ_ASID 1
+
 VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -243,6 +245,40 @@ static NetClientInfo net_vhost_vdpa_info = {
         .check_peer_type = vhost_vdpa_check_peer_type,
 };
 
+static int64_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
+{
+    struct vhost_vring_state state = {
+        .index = vq_index,
+    };
+    int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
+
+    if (unlikely(r < 0)) {
+        error_report("Cannot get VQ %u group: %s", vq_index,
+                     g_strerror(errno));
+        return r;
+    }
+
+    return state.num;
+}
+
+static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
+                                           unsigned vq_group,
+                                           unsigned asid_num)
+{
+    struct vhost_vring_state asid = {
+        .index = vq_group,
+        .num = asid_num,
+    };
+    int r;
+
+    r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
+    if (unlikely(r < 0)) {
+        error_report("Can't set vq group %u asid %u, errno=%d (%s)",
+                     asid.index, asid.num, errno, g_strerror(errno));
+    }
+    return r;
+}
+
 static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
 {
     VhostIOVATree *tree = v->iova_tree;
@@ -317,11 +353,75 @@ dma_map_err:
 static int vhost_vdpa_net_cvq_start(NetClientState *nc)
 {
     VhostVDPAState *s;
-    int r;
+    struct vhost_vdpa *v;
+    uint64_t backend_features;
+    int64_t cvq_group;
+    int cvq_index, r;
 
     assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
 
     s = DO_UPCAST(VhostVDPAState, nc, nc);
+    v = &s->vhost_vdpa;
+
+    v->shadow_data = s->always_svq;
+    v->shadow_vqs_enabled = s->always_svq;
+    s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
+
+    if (s->always_svq) {
+        /* SVQ is already configured for all virtqueues */
+        goto out;
+    }
+
+    /*
+     * If we return early in these cases, SVQ will not be enabled. Migration
+     * will be blocked as long as vhost-vdpa backends do not offer _F_LOG.
+     *
+     * Calling VHOST_GET_BACKEND_FEATURES as they are not available in v->dev
+     * yet.
+     */
+    r = ioctl(v->device_fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
+    if (unlikely(r < 0)) {
+        error_report("Cannot get vdpa backend_features: %s(%d)",
+            g_strerror(errno), errno);
+        return -1;
+    }
+    if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
+        !vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {
+        return 0;
+    }
+
+    /*
+     * Check if all the data virtqueues belong to a different VQ group than
+     * the last one (the CVQ), whose group is stored in cvq_group.
+     */
+    cvq_index = v->dev->vq_index_end - 1;
+    cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
+    if (unlikely(cvq_group < 0)) {
+        return cvq_group;
+    }
+    for (int i = 0; i < cvq_index; ++i) {
+        int64_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
+
+        if (unlikely(group < 0)) {
+            return group;
+        }
+
+        if (group == cvq_group) {
+            return 0;
+        }
+    }
+
+    r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
+    if (unlikely(r < 0)) {
+        return r;
+    }
+
+    v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
+                                       v->iova_range.last);
+    v->shadow_vqs_enabled = true;
+    s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
+
+out:
     if (!s->vhost_vdpa.shadow_vqs_enabled) {
         return 0;
     }
@@ -350,6 +450,14 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
     if (s->vhost_vdpa.shadow_vqs_enabled) {
         vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
         vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->status);
+        if (!s->always_svq) {
+            /*
+             * If only the CVQ is shadowed we can delete this safely.
+             * If all the VQs are shadowed, this will be needed by the time
+             * the device is started again to register SVQ vrings and similar.
+             */
+            g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
+        }
     }
 }
 
-- 
2.31.1
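As a self-contained illustration of the probing sequence above, the sketch
below runs the same checks against a vhost-vdpa device from plain userspace.
It is not part of the series: the device node path, the hardcoded VQ count
and the error handling are assumptions made for the example; the ioctls,
VHOST_BACKEND_F_IOTLB_ASID and the ASID value 1 mirror the patch.

  /* cvq_isolation_probe.c: mimic the checks of vhost_vdpa_net_cvq_start() */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/vhost.h>

  #define NVQS 3 /* assumption: one data queue pair plus the CVQ, CVQ last */

  int main(void)
  {
      struct vhost_vring_state cvq = { .index = NVQS - 1 }, asid;
      uint64_t features;
      int fd = open("/dev/vhost-vdpa-0", O_RDWR); /* assumed device node */

      if (fd < 0 || ioctl(fd, VHOST_GET_BACKEND_FEATURES, &features) < 0) {
          perror("open / get backend features");
          return 1;
      }
      if (!(features & (0x1ULL << VHOST_BACKEND_F_IOTLB_ASID))) {
          puts("no _F_ASID: the CVQ cannot get its own address space");
          return 0;
      }
      if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &cvq) < 0) {
          perror("get CVQ group");
          return 1;
      }
      for (unsigned i = 0; i < NVQS - 1; i++) {
          struct vhost_vring_state data = { .index = i };

          if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &data) < 0) {
              perror("get data VQ group");
              return 1;
          }
          if (data.num == cvq.num) {
              puts("CVQ shares a group with a data VQ: no isolation");
              return 0;
          }
      }
      /* Isolated: move the CVQ group to its own ASID, as the patch does
       * with VHOST_VDPA_NET_CVQ_ASID (1). */
      asid.index = cvq.num;
      asid.num = 1;
      if (ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &asid) < 0) {
          perror("set group asid");
          return 1;
      }
      puts("CVQ isolated; its group now uses ASID 1");
      return 0;
  }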



* Re: [PATCH v9 06/12] vdpa: request iova_range only once
  2022-12-15 11:31 ` [PATCH v9 06/12] vdpa: request iova_range only once Eugenio Pérez
@ 2022-12-16  7:29     ` Jason Wang
  0 siblings, 0 replies; 22+ messages in thread
From: Jason Wang @ 2022-12-16  7:29 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Laurent Vivier, kvm, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel, Gautam Dawar, virtualization,
	Harpreet Singh Anand, Stefan Hajnoczi, Eli Cohen, Paolo Bonzini,
	Liuxiangdong, Longpeng

On Thu, Dec 15, 2022 at 7:32 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Currently iova range is requested once per queue pair in the case of
> net. Reduce the number of ioctls asking it once at initialization and
> reusing that value for each vhost_vdpa.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  hw/virtio/vhost-vdpa.c | 15 ---------------
>  net/vhost-vdpa.c       | 27 ++++++++++++++-------------
>  2 files changed, 14 insertions(+), 28 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 691bcc811a..9b7f4ef083 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -365,19 +365,6 @@ static int vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
>      return 0;
>  }
>
> -static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> -{
> -    int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
> -                              &v->iova_range);
> -    if (ret != 0) {
> -        v->iova_range.first = 0;
> -        v->iova_range.last = UINT64_MAX;
> -    }
> -
> -    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> -                                    v->iova_range.last);
> -}
> -
>  /*
>   * The use of this function is for requests that only need to be
>   * applied once. Typically such request occurs at the beginning
> @@ -465,8 +452,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
>          goto err;
>      }
>
> -    vhost_vdpa_get_iova_range(v);
> -
>      if (!vhost_vdpa_first_dev(dev)) {
>          return 0;
>      }
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 2c0ff6d7b0..b6462f0192 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -541,14 +541,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
>  };
>
>  static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> -                                           const char *device,
> -                                           const char *name,
> -                                           int vdpa_device_fd,
> -                                           int queue_pair_index,
> -                                           int nvqs,
> -                                           bool is_datapath,
> -                                           bool svq,
> -                                           VhostIOVATree *iova_tree)
> +                                       const char *device,
> +                                       const char *name,
> +                                       int vdpa_device_fd,
> +                                       int queue_pair_index,
> +                                       int nvqs,
> +                                       bool is_datapath,
> +                                       bool svq,
> +                                       struct vhost_vdpa_iova_range iova_range,
> +                                       VhostIOVATree *iova_tree)

Nit: it's better not to mix in style changes.

Other than this:

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

>  {
>      NetClientState *nc = NULL;
>      VhostVDPAState *s;
> @@ -567,6 +568,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>      s->vhost_vdpa.device_fd = vdpa_device_fd;
>      s->vhost_vdpa.index = queue_pair_index;
>      s->vhost_vdpa.shadow_vqs_enabled = svq;
> +    s->vhost_vdpa.iova_range = iova_range;
>      s->vhost_vdpa.iova_tree = iova_tree;
>      if (!is_datapath) {
>          s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
> @@ -646,6 +648,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>      int vdpa_device_fd;
>      g_autofree NetClientState **ncs = NULL;
>      g_autoptr(VhostIOVATree) iova_tree = NULL;
> +    struct vhost_vdpa_iova_range iova_range;
>      NetClientState *nc;
>      int queue_pairs, r, i = 0, has_cvq = 0;
>
> @@ -689,14 +692,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>          return queue_pairs;
>      }
>
> +    vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
>      if (opts->x_svq) {
> -        struct vhost_vdpa_iova_range iova_range;
> -
>          if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
>              goto err_svq;
>          }
>
> -        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
>          iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
>      }
>
> @@ -705,7 +706,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>      for (i = 0; i < queue_pairs; i++) {
>          ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>                                       vdpa_device_fd, i, 2, true, opts->x_svq,
> -                                     iova_tree);
> +                                     iova_range, iova_tree);
>          if (!ncs[i])
>              goto err;
>      }
> @@ -713,7 +714,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>      if (has_cvq) {
>          nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>                                   vdpa_device_fd, i, 1, false,
> -                                 opts->x_svq, iova_tree);
> +                                 opts->x_svq, iova_range, iova_tree);
>          if (!nc)
>              goto err;
>      }
> --
> 2.31.1
>
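The fd-based replacement helper is not visible in the quoted hunks; judging
from the deleted hw/virtio version above and the new call sites, the
net/vhost-vdpa.c helper presumably looks close to this sketch (the
UINT64_MAX fallback is carried over as an assumption):

  static void vhost_vdpa_get_iova_range(int fd,
                                        struct vhost_vdpa_iova_range *iova_range)
  {
      int ret = ioctl(fd, VHOST_VDPA_GET_IOVA_RANGE, iova_range);

      if (ret < 0) {
          /* Assume the whole IOVA space is usable, as the old helper did */
          iova_range->first = 0;
          iova_range->last = UINT64_MAX;
      }
  }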


* Re: [PATCH v9 12/12] vdpa: always start CVQ in SVQ mode if possible
  2022-12-15 11:31 ` [PATCH v9 12/12] vdpa: always start CVQ in SVQ mode if possible Eugenio Pérez
@ 2022-12-16  7:35     ` Jason Wang
  0 siblings, 0 replies; 22+ messages in thread
From: Jason Wang @ 2022-12-16  7:35 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Laurent Vivier, kvm, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel, Gautam Dawar, virtualization,
	Harpreet Singh Anand, Stefan Hajnoczi, Eli Cohen, Paolo Bonzini,
	Liuxiangdong, Longpeng

On Thu, Dec 15, 2022 at 7:32 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Isolate control virtqueue in its own group, allowing to intercept control
> commands but letting dataplane run totally passthrough to the guest.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
> v9:
> * Reuse iova_range fetched from the device at initialization, instead of
>   fetch it again at vhost_vdpa_net_cvq_start.
> * Add comment about how migration is blocked in case ASID does not met
>   our expectations.
> * Delete warning about CVQ group not being independent.
>
> v8:
> * Do not allocate iova_tree on net_init_vhost_vdpa if only CVQ is
>   shadowed. Move the iova_tree handling in this case to
>   vhost_vdpa_net_cvq_start and vhost_vdpa_net_cvq_stop.
>
> v7:
> * Never ask for number of address spaces, just react if isolation is not
>   possible.
> * Return ASID ioctl errors instead of masking them as if the device has
>   no asid.
> * Simplify net_init_vhost_vdpa logic
> * Add "if possible" suffix
>
> v6:
> * Disable control SVQ if the device does not support it because of
> features.
>
> v5:
> * Fixing the not adding cvq buffers when x-svq=on is specified.
> * Move vring state in vhost_vdpa_get_vring_group instead of using a
>   parameter.
> * Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID
>
> v4:
> * Squash vhost_vdpa_cvq_group_is_independent.
> * Rebased on last CVQ start series, that allocated CVQ cmd bufs at load
> * Do not check for the cvq index on vhost_vdpa_net_prepare; we only have
>   that one callback registered in that NetClientInfo.
>
> v3:
> * Make ASID-related queries print a warning instead of returning an
>   error and stopping QEMU's startup.
> ---
>  hw/virtio/vhost-vdpa.c |   3 +-
>  net/vhost-vdpa.c       | 110 ++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 111 insertions(+), 2 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 48d8c60e76..8cd00f5a96 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -638,7 +638,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
>  {
>      uint64_t features;
>      uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
> -        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
> +        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
> +        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
>      int r;
>
>      if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 710c5efe96..d36664f33a 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -102,6 +102,8 @@ static const uint64_t vdpa_svq_device_features =
>      BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
>      BIT_ULL(VIRTIO_NET_F_STANDBY);
>
> +#define VHOST_VDPA_NET_CVQ_ASID 1
> +
>  VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
>  {
>      VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> @@ -243,6 +245,40 @@ static NetClientInfo net_vhost_vdpa_info = {
>          .check_peer_type = vhost_vdpa_check_peer_type,
>  };
>
> +static int64_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
> +{
> +    struct vhost_vring_state state = {
> +        .index = vq_index,
> +    };
> +    int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
> +
> +    if (unlikely(r < 0)) {
> +        error_report("Cannot get VQ %u group: %s", vq_index,
> +                     g_strerror(errno));
> +        return r;
> +    }
> +
> +    return state.num;
> +}
> +
> +static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
> +                                           unsigned vq_group,
> +                                           unsigned asid_num)
> +{
> +    struct vhost_vring_state asid = {
> +        .index = vq_group,
> +        .num = asid_num,
> +    };
> +    int r;
> +
> +    r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
> +    if (unlikely(r < 0)) {
> +        error_report("Can't set vq group %u asid %u, errno=%d (%s)",
> +                     asid.index, asid.num, errno, g_strerror(errno));
> +    }
> +    return r;
> +}
> +
>  static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
>  {
>      VhostIOVATree *tree = v->iova_tree;
> @@ -317,11 +353,75 @@ dma_map_err:
>  static int vhost_vdpa_net_cvq_start(NetClientState *nc)
>  {
>      VhostVDPAState *s;
> -    int r;
> +    struct vhost_vdpa *v;
> +    uint64_t backend_features;
> +    int64_t cvq_group;
> +    int cvq_index, r;
>
>      assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>
>      s = DO_UPCAST(VhostVDPAState, nc, nc);
> +    v = &s->vhost_vdpa;
> +
> +    v->shadow_data = s->always_svq;
> +    v->shadow_vqs_enabled = s->always_svq;
> +    s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
> +
> +    if (s->always_svq) {
> +        /* SVQ is already configured for all virtqueues */
> +        goto out;
> +    }
> +
> +    /*
> +     * If we return early in these cases, SVQ will not be enabled. Migration
> +     * stays blocked as long as the vhost-vdpa backend does not offer _F_LOG.
> +     *
> +     * Call VHOST_GET_BACKEND_FEATURES here; backend features are not
> +     * available in v->dev yet.
> +     */
> +    r = ioctl(v->device_fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
> +    if (unlikely(r < 0)) {
> +        error_report("Cannot get vdpa backend_features: %s(%d)",
> +            g_strerror(errno), errno);
> +        return -1;
> +    }
> +    if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
> +        !vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {
> +        return 0;
> +    }
> +
> +    /*
> +     * Check if all the data virtqueues belong to a different VQ group than
> +     * the last one (the CVQ), whose group is stored in cvq_group.
> +     */
> +    cvq_index = v->dev->vq_index_end - 1;
> +    cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
> +    if (unlikely(cvq_group < 0)) {
> +        return cvq_group;
> +    }
> +    for (int i = 0; i < cvq_index; ++i) {
> +        int64_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
> +
> +        if (unlikely(group < 0)) {
> +            return group;
> +        }
> +
> +        if (group == cvq_group) {
> +            return 0;
> +        }
> +    }
> +
> +    r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
> +    if (unlikely(r < 0)) {
> +        return r;
> +    }
> +
> +    v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
> +                                       v->iova_range.last);
> +    v->shadow_vqs_enabled = true;
> +    s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
> +
> +out:
>      if (!s->vhost_vdpa.shadow_vqs_enabled) {
>          return 0;
>      }
> @@ -350,6 +450,14 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
>      if (s->vhost_vdpa.shadow_vqs_enabled) {
>          vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
>          vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->status);
> +        if (!s->always_svq) {
> +            /*
> +             * If only the CVQ is shadowed we can delete this safely.
> +             * If all the VQs are shadowed, this will be needed by the time
> +             * the device is started again to register SVQ vrings and similar.
> +             */
> +            g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
> +        }
>      }
>  }
>
> --
> 2.31.1
>
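For readers following the control flow, the start-time decision of this
patch condenses into the standalone sketch below. The enum and the boolean
inputs are invented stand-ins for the real state; the branch order mirrors
vhost_vdpa_net_cvq_start():

  #include <stdbool.h>
  #include <stdio.h>

  enum cvq_mode { CVQ_PASSTHROUGH, CVQ_OWN_ASID, ALL_SHADOWED };

  static enum cvq_mode cvq_start_outcome(bool always_svq, bool has_asid,
                                         bool svq_features_ok,
                                         bool cvq_isolated)
  {
      if (always_svq) {
          return ALL_SHADOWED;    /* x-svq=on: every VQ already shadowed */
      }
      if (!has_asid || !svq_features_ok || !cvq_isolated) {
          return CVQ_PASSTHROUGH; /* early return: migration stays blocked */
      }
      return CVQ_OWN_ASID;        /* CVQ shadowed in VHOST_VDPA_NET_CVQ_ASID */
  }

  int main(void)
  {
      printf("%d\n", cvq_start_outcome(false, true, true, true)); /* 1 */
      return 0;
  }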


* Re: [PATCH v9 06/12] vdpa: request iova_range only once
  2022-12-16  7:29     ` Jason Wang
@ 2022-12-16  9:52     ` Eugenio Perez Martin
  2022-12-21  8:21         ` Jason Wang
  0 siblings, 1 reply; 22+ messages in thread
From: Eugenio Perez Martin @ 2022-12-16  9:52 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Liuxiangdong, Stefano Garzarella, Zhu Lingshan,
	Si-Wei Liu, Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Michael S. Tsirkin, Cindy Lu, Gautam Dawar,
	Eli Cohen, Cornelia Huck, Paolo Bonzini, Longpeng,
	Harpreet Singh Anand, Parav Pandit, kvm, virtualization

On Fri, Dec 16, 2022 at 8:29 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Thu, Dec 15, 2022 at 7:32 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > Currently iova range is requested once per queue pair in the case of
> > net. Reduce the number of ioctls asking it once at initialization and
> > reusing that value for each vhost_vdpa.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  hw/virtio/vhost-vdpa.c | 15 ---------------
> >  net/vhost-vdpa.c       | 27 ++++++++++++++-------------
> >  2 files changed, 14 insertions(+), 28 deletions(-)
> >
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index 691bcc811a..9b7f4ef083 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -365,19 +365,6 @@ static int vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
> >      return 0;
> >  }
> >
> > -static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> > -{
> > -    int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
> > -                              &v->iova_range);
> > -    if (ret != 0) {
> > -        v->iova_range.first = 0;
> > -        v->iova_range.last = UINT64_MAX;
> > -    }
> > -
> > -    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> > -                                    v->iova_range.last);
> > -}
> > -
> >  /*
> >   * The use of this function is for requests that only need to be
> >   * applied once. Typically such request occurs at the beginning
> > @@ -465,8 +452,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> >          goto err;
> >      }
> >
> > -    vhost_vdpa_get_iova_range(v);
> > -
> >      if (!vhost_vdpa_first_dev(dev)) {
> >          return 0;
> >      }
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 2c0ff6d7b0..b6462f0192 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -541,14 +541,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
> >  };
> >
> >  static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > -                                           const char *device,
> > -                                           const char *name,
> > -                                           int vdpa_device_fd,
> > -                                           int queue_pair_index,
> > -                                           int nvqs,
> > -                                           bool is_datapath,
> > -                                           bool svq,
> > -                                           VhostIOVATree *iova_tree)
> > +                                       const char *device,
> > +                                       const char *name,
> > +                                       int vdpa_device_fd,
> > +                                       int queue_pair_index,
> > +                                       int nvqs,
> > +                                       bool is_datapath,
> > +                                       bool svq,
> > +                                       struct vhost_vdpa_iova_range iova_range,
> > +                                       VhostIOVATree *iova_tree)
>
> Nit: it's better not to mix in style changes.
>

The style changes are because the new parameter pushes the lines past
80 characters; do you prefer me to send a preceding patch that reduces
the indentation?
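
For what it's worth, a middle ground would be to wrap only the new
parameter and leave the untouched ones aligned as before; just a sketch of
the alternative, not something from the series:

  static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                             const char *device,
                                             const char *name,
                                             int vdpa_device_fd,
                                             int queue_pair_index,
                                             int nvqs,
                                             bool is_datapath,
                                             bool svq,
                                             struct vhost_vdpa_iova_range
                                                 iova_range,
                                             VhostIOVATree *iova_tree);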

Thanks!

> Other than this:
>
> > Acked-by: Jason Wang <jasowang@redhat.com>
>
> Thanks
>
> >  {
> >      NetClientState *nc = NULL;
> >      VhostVDPAState *s;
> > @@ -567,6 +568,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >      s->vhost_vdpa.device_fd = vdpa_device_fd;
> >      s->vhost_vdpa.index = queue_pair_index;
> >      s->vhost_vdpa.shadow_vqs_enabled = svq;
> > +    s->vhost_vdpa.iova_range = iova_range;
> >      s->vhost_vdpa.iova_tree = iova_tree;
> >      if (!is_datapath) {
> >          s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
> > @@ -646,6 +648,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >      int vdpa_device_fd;
> >      g_autofree NetClientState **ncs = NULL;
> >      g_autoptr(VhostIOVATree) iova_tree = NULL;
> > +    struct vhost_vdpa_iova_range iova_range;
> >      NetClientState *nc;
> >      int queue_pairs, r, i = 0, has_cvq = 0;
> >
> > @@ -689,14 +692,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >          return queue_pairs;
> >      }
> >
> > +    vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> >      if (opts->x_svq) {
> > -        struct vhost_vdpa_iova_range iova_range;
> > -
> >          if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
> >              goto err_svq;
> >          }
> >
> > -        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> >          iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
> >      }
> >
> > @@ -705,7 +706,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >      for (i = 0; i < queue_pairs; i++) {
> >          ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> >                                       vdpa_device_fd, i, 2, true, opts->x_svq,
> > -                                     iova_tree);
> > +                                     iova_range, iova_tree);
> >          if (!ncs[i])
> >              goto err;
> >      }
> > @@ -713,7 +714,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >      if (has_cvq) {
> >          nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> >                                   vdpa_device_fd, i, 1, false,
> > -                                 opts->x_svq, iova_tree);
> > +                                 opts->x_svq, iova_range, iova_tree);
> >          if (!nc)
> >              goto err;
> >      }
> > --
> > 2.31.1
> >
>



* Re: [PATCH v9 06/12] vdpa: request iova_range only once
  2022-12-16  9:52     ` Eugenio Perez Martin
@ 2022-12-21  8:21         ` Jason Wang
  0 siblings, 0 replies; 22+ messages in thread
From: Jason Wang @ 2022-12-21  8:21 UTC (permalink / raw)
  To: Eugenio Perez Martin, Michael S. Tsirkin
  Cc: qemu-devel, Liuxiangdong, Stefano Garzarella, Zhu Lingshan,
	Si-Wei Liu, Laurent Vivier, Gonglei (Arei),
	Stefan Hajnoczi, Cindy Lu, Gautam Dawar, Eli Cohen,
	Cornelia Huck, Paolo Bonzini, Longpeng, Harpreet Singh Anand,
	Parav Pandit, kvm, virtualization

On Fri, Dec 16, 2022 at 5:53 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Fri, Dec 16, 2022 at 8:29 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Thu, Dec 15, 2022 at 7:32 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > Currently iova range is requested once per queue pair in the case of
> > > net. Reduce the number of ioctls asking it once at initialization and
> > > reusing that value for each vhost_vdpa.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >  hw/virtio/vhost-vdpa.c | 15 ---------------
> > >  net/vhost-vdpa.c       | 27 ++++++++++++++-------------
> > >  2 files changed, 14 insertions(+), 28 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > > index 691bcc811a..9b7f4ef083 100644
> > > --- a/hw/virtio/vhost-vdpa.c
> > > +++ b/hw/virtio/vhost-vdpa.c
> > > @@ -365,19 +365,6 @@ static int vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
> > >      return 0;
> > >  }
> > >
> > > -static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> > > -{
> > > -    int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
> > > -                              &v->iova_range);
> > > -    if (ret != 0) {
> > > -        v->iova_range.first = 0;
> > > -        v->iova_range.last = UINT64_MAX;
> > > -    }
> > > -
> > > -    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> > > -                                    v->iova_range.last);
> > > -}
> > > -
> > >  /*
> > >   * The use of this function is for requests that only need to be
> > >   * applied once. Typically such request occurs at the beginning
> > > @@ -465,8 +452,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > >          goto err;
> > >      }
> > >
> > > -    vhost_vdpa_get_iova_range(v);
> > > -
> > >      if (!vhost_vdpa_first_dev(dev)) {
> > >          return 0;
> > >      }
> > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > index 2c0ff6d7b0..b6462f0192 100644
> > > --- a/net/vhost-vdpa.c
> > > +++ b/net/vhost-vdpa.c
> > > @@ -541,14 +541,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
> > >  };
> > >
> > >  static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > -                                           const char *device,
> > > -                                           const char *name,
> > > -                                           int vdpa_device_fd,
> > > -                                           int queue_pair_index,
> > > -                                           int nvqs,
> > > -                                           bool is_datapath,
> > > -                                           bool svq,
> > > -                                           VhostIOVATree *iova_tree)
> > > +                                       const char *device,
> > > +                                       const char *name,
> > > +                                       int vdpa_device_fd,
> > > +                                       int queue_pair_index,
> > > +                                       int nvqs,
> > > +                                       bool is_datapath,
> > > +                                       bool svq,
> > > +                                       struct vhost_vdpa_iova_range iova_range,
> > > +                                       VhostIOVATree *iova_tree)
> >
> > Nit: it's better not to mix in style changes.
> >
>
> The style changes are because the new parameter pushes the lines past
> 80 characters; do you prefer me to send a preceding patch that reduces
> the indentation?
>

Michael, what's your preference? I'm fine with both.

Thanks

> Thanks!
>
> > Other than this:
> >
> > Acked-by: Jason Wang <jasowang@redhat.com>
> >
> > Thanks
> >
> > >  {
> > >      NetClientState *nc = NULL;
> > >      VhostVDPAState *s;
> > > @@ -567,6 +568,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > >      s->vhost_vdpa.device_fd = vdpa_device_fd;
> > >      s->vhost_vdpa.index = queue_pair_index;
> > >      s->vhost_vdpa.shadow_vqs_enabled = svq;
> > > +    s->vhost_vdpa.iova_range = iova_range;
> > >      s->vhost_vdpa.iova_tree = iova_tree;
> > >      if (!is_datapath) {
> > >          s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
> > > @@ -646,6 +648,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > >      int vdpa_device_fd;
> > >      g_autofree NetClientState **ncs = NULL;
> > >      g_autoptr(VhostIOVATree) iova_tree = NULL;
> > > +    struct vhost_vdpa_iova_range iova_range;
> > >      NetClientState *nc;
> > >      int queue_pairs, r, i = 0, has_cvq = 0;
> > >
> > > @@ -689,14 +692,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > >          return queue_pairs;
> > >      }
> > >
> > > +    vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> > >      if (opts->x_svq) {
> > > -        struct vhost_vdpa_iova_range iova_range;
> > > -
> > >          if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
> > >              goto err_svq;
> > >          }
> > >
> > > -        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> > >          iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
> > >      }
> > >
> > > @@ -705,7 +706,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > >      for (i = 0; i < queue_pairs; i++) {
> > >          ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > >                                       vdpa_device_fd, i, 2, true, opts->x_svq,
> > > -                                     iova_tree);
> > > +                                     iova_range, iova_tree);
> > >          if (!ncs[i])
> > >              goto err;
> > >      }
> > > @@ -713,7 +714,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > >      if (has_cvq) {
> > >          nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > >                                   vdpa_device_fd, i, 1, false,
> > > -                                 opts->x_svq, iova_tree);
> > > +                                 opts->x_svq, iova_range, iova_tree);
> > >          if (!nc)
> > >              goto err;
> > >      }
> > > --
> > > 2.31.1
> > >
> >
>



* Re: [PATCH v9 06/12] vdpa: request iova_range only once
  2022-12-21  8:21         ` Jason Wang
@ 2022-12-21 11:47           ` Michael S. Tsirkin
  0 siblings, 0 replies; 22+ messages in thread
From: Michael S. Tsirkin @ 2022-12-21 11:47 UTC (permalink / raw)
  To: Jason Wang
  Cc: Eugenio Perez Martin, qemu-devel, Liuxiangdong,
	Stefano Garzarella, Zhu Lingshan, Si-Wei Liu, Laurent Vivier,
	Gonglei (Arei),
	Stefan Hajnoczi, Cindy Lu, Gautam Dawar, Eli Cohen,
	Cornelia Huck, Paolo Bonzini, Longpeng, Harpreet Singh Anand,
	Parav Pandit, kvm, virtualization

On Wed, Dec 21, 2022 at 04:21:52PM +0800, Jason Wang wrote:
> On Fri, Dec 16, 2022 at 5:53 PM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Fri, Dec 16, 2022 at 8:29 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Thu, Dec 15, 2022 at 7:32 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > >
> > > > Currently iova range is requested once per queue pair in the case of
> > > > net. Reduce the number of ioctls asking it once at initialization and
> > > > reusing that value for each vhost_vdpa.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > ---
> > > >  hw/virtio/vhost-vdpa.c | 15 ---------------
> > > >  net/vhost-vdpa.c       | 27 ++++++++++++++-------------
> > > >  2 files changed, 14 insertions(+), 28 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > > > index 691bcc811a..9b7f4ef083 100644
> > > > --- a/hw/virtio/vhost-vdpa.c
> > > > +++ b/hw/virtio/vhost-vdpa.c
> > > > @@ -365,19 +365,6 @@ static int vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
> > > >      return 0;
> > > >  }
> > > >
> > > > -static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> > > > -{
> > > > -    int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
> > > > -                              &v->iova_range);
> > > > -    if (ret != 0) {
> > > > -        v->iova_range.first = 0;
> > > > -        v->iova_range.last = UINT64_MAX;
> > > > -    }
> > > > -
> > > > -    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> > > > -                                    v->iova_range.last);
> > > > -}
> > > > -
> > > >  /*
> > > >   * The use of this function is for requests that only need to be
> > > >   * applied once. Typically such request occurs at the beginning
> > > > @@ -465,8 +452,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > > >          goto err;
> > > >      }
> > > >
> > > > -    vhost_vdpa_get_iova_range(v);
> > > > -
> > > >      if (!vhost_vdpa_first_dev(dev)) {
> > > >          return 0;
> > > >      }
> > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > index 2c0ff6d7b0..b6462f0192 100644
> > > > --- a/net/vhost-vdpa.c
> > > > +++ b/net/vhost-vdpa.c
> > > > @@ -541,14 +541,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
> > > >  };
> > > >
> > > >  static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > > -                                           const char *device,
> > > > -                                           const char *name,
> > > > -                                           int vdpa_device_fd,
> > > > -                                           int queue_pair_index,
> > > > -                                           int nvqs,
> > > > -                                           bool is_datapath,
> > > > -                                           bool svq,
> > > > -                                           VhostIOVATree *iova_tree)
> > > > +                                       const char *device,
> > > > +                                       const char *name,
> > > > +                                       int vdpa_device_fd,
> > > > +                                       int queue_pair_index,
> > > > +                                       int nvqs,
> > > > +                                       bool is_datapath,
> > > > +                                       bool svq,
> > > > +                                       struct vhost_vdpa_iova_range iova_range,
> > > > +                                       VhostIOVATree *iova_tree)
> > >
> > > Nit: it's better not to mix in style changes.
> > >
> >
> > The style changes are because the new parameter pushes the lines past
> > 80 characters; do you prefer me to send a preceding patch that reduces
> > the indentation?
> >
> 
> Michael, what's your preference? I'm fine with both.
> 
> Thanks

I think it doesn't matter much; generally the 80-char limit is
not a hard one. We can just let it be.

> > Thanks!
> >
> > > Other than this:
> > >
> > > Acked-by: Jason Wang <jasowang@redhat.com>
> > >
> > > Thanks
> > >
> > > >  {
> > > >      NetClientState *nc = NULL;
> > > >      VhostVDPAState *s;
> > > > @@ -567,6 +568,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > >      s->vhost_vdpa.device_fd = vdpa_device_fd;
> > > >      s->vhost_vdpa.index = queue_pair_index;
> > > >      s->vhost_vdpa.shadow_vqs_enabled = svq;
> > > > +    s->vhost_vdpa.iova_range = iova_range;
> > > >      s->vhost_vdpa.iova_tree = iova_tree;
> > > >      if (!is_datapath) {
> > > >          s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
> > > > @@ -646,6 +648,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > > >      int vdpa_device_fd;
> > > >      g_autofree NetClientState **ncs = NULL;
> > > >      g_autoptr(VhostIOVATree) iova_tree = NULL;
> > > > +    struct vhost_vdpa_iova_range iova_range;
> > > >      NetClientState *nc;
> > > >      int queue_pairs, r, i = 0, has_cvq = 0;
> > > >
> > > > @@ -689,14 +692,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > > >          return queue_pairs;
> > > >      }
> > > >
> > > > +    vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> > > >      if (opts->x_svq) {
> > > > -        struct vhost_vdpa_iova_range iova_range;
> > > > -
> > > >          if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
> > > >              goto err_svq;
> > > >          }
> > > >
> > > > -        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> > > >          iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
> > > >      }
> > > >
> > > > @@ -705,7 +706,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > > >      for (i = 0; i < queue_pairs; i++) {
> > > >          ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > > >                                       vdpa_device_fd, i, 2, true, opts->x_svq,
> > > > -                                     iova_tree);
> > > > +                                     iova_range, iova_tree);
> > > >          if (!ncs[i])
> > > >              goto err;
> > > >      }
> > > > @@ -713,7 +714,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > > >      if (has_cvq) {
> > > >          nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > > >                                   vdpa_device_fd, i, 1, false,
> > > > -                                 opts->x_svq, iova_tree);
> > > > +                                 opts->x_svq, iova_range, iova_tree);
> > > >          if (!nc)
> > > >              goto err;
> > > >      }
> > > > --
> > > > 2.31.1
> > > >
> > >
> >


^ permalink raw reply	[flat|nested] 22+ messages in thread
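
A note for readers of this excerpt: the hunk above drops the hw/virtio copy of
vhost_vdpa_get_iova_range() and instead calls a net/-local replacement,
vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range), which is introduced in
an earlier part of the patch and is not visible here. Below is a minimal
sketch of such a helper, assuming the UAPI definitions from linux/vhost.h and
linux/vhost_types.h and reusing the fallback the removed copy applied when the
ioctl is unavailable; the exact code in the patch may differ (for instance, it
may add tracing):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>        /* VHOST_VDPA_GET_IOVA_RANGE */
    #include <linux/vhost_types.h>  /* struct vhost_vdpa_iova_range { __u64 first, last; } */

    static void vhost_vdpa_get_iova_range(int fd,
                                          struct vhost_vdpa_iova_range *iova_range)
    {
        int ret = ioctl(fd, VHOST_VDPA_GET_IOVA_RANGE, iova_range);

        if (ret < 0) {
            /* No usable range reported by the device: assume the whole
             * 64-bit IOVA space, as the removed hw/virtio helper did. */
            iova_range->first = 0;
            iova_range->last = UINT64_MAX;
        }
    }

Fetching the range once in net_init_vhost_vdpa() and copying it into each
s->vhost_vdpa.iova_range turns one VHOST_VDPA_GET_IOVA_RANGE ioctl per queue
pair into a single call per device.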

end of thread, other threads:[~2022-12-21 11:48 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-15 11:31 [PATCH v9 00/12] ASID support in vhost-vdpa net Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 01/12] vdpa: use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 02/12] vhost: set SVQ device call handler at SVQ start Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 03/12] vhost: allocate SVQ device file descriptors at device start Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 04/12] vhost: move iova_tree set to vhost_svq_start Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 05/12] vdpa: add vhost_vdpa_net_valid_svq_features Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 06/12] vdpa: request iova_range only once Eugenio Pérez
2022-12-16  7:29   ` Jason Wang
2022-12-16  9:52     ` Eugenio Perez Martin
2022-12-21  8:21       ` Jason Wang
2022-12-21 11:47         ` Michael S. Tsirkin
2022-12-15 11:31 ` [PATCH v9 07/12] vdpa: move SVQ vring features check to net/ Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 08/12] vdpa: allocate SVQ array unconditionally Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 09/12] vdpa: add asid parameter to vhost_vdpa_dma_map/unmap Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 10/12] vdpa: store x-svq parameter in VhostVDPAState Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 11/12] vdpa: add shadow_data to vhost_vdpa Eugenio Pérez
2022-12-15 11:31 ` [PATCH v9 12/12] vdpa: always start CVQ in SVQ mode if possible Eugenio Pérez
2022-12-16  7:35   ` Jason Wang
