* [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ
@ 2022-03-30 18:30 Eugenio Pérez
  2022-03-30 18:30 ` [RFC PATCH v3 01/19] util: Return void on iova_tree_remove Eugenio Pérez
                   ` (18 more replies)
  0 siblings, 19 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

The control virtqueue is used by the networking device to accept
various commands from the driver. It is a must to support multiqueue
and other configurations.

Shadow VirtQueue (SVQ) already makes the migration of virtqueue state
possible, effectively intercepting the virtqueues so qemu can track
which regions of memory are dirtied by device actions and need
migration. However, this does not cover the networking device state the
driver sets through CVQ messages, like changes to the MAC address.

To solve that, this series uses the SVQ infrastructure to intercept the
networking control messages used by the device. This way, qemu is able
to update the VirtIONet device model and to migrate it.
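
For reference, a CVQ command as defined by the VirtIO standard is a
small buffer: a two-byte header, a command-specific payload, and a
one-byte ack that the device writes back. A sketch of the set-MAC
message this series inspects, with names as in the standard headers:

    struct virtio_net_ctrl_hdr {
        uint8_t class;   /* VIRTIO_NET_CTRL_MAC */
        uint8_t cmd;     /* VIRTIO_NET_CTRL_MAC_ADDR_SET */
    };
    /* out sg: hdr + 6-byte MAC; in sg: 1-byte ack (VIRTIO_NET_OK/ERR) */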

You can run qemu in two modes after applying this series: intercepting
only the CVQ with x-cvq-svq=on, or intercepting all the virtqueues by
adding x-svq=on to the command line:

-netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0,x-cvq-svq=on,x-svq=on

The most up-to-date kernel part of ASID is proposed at [1].

Modes without x-cvq-svq have not been tested with this series. CVQ
commands other than set mac are not tested, and some details like error
handling are not 100% tested either.

The first patch will be proposed separately in -trivial.

Comments are welcome on every aspect of the series.

Changes from rfc v2:
* Fix use-after-free

Changes from rfc v1:
* Rebase to latest master
* Configure ASID instead of assuming cvq asid != data vqs asid
* Update device model so (MAC) state can be migrated too.

[1] https://lkml.kernel.org/kvm/20220224212314.1326-1-gdawar@xilinx.com/

Eugenio Pérez (19):
  util: Return void on iova_tree_remove
  vdpa: Add x-svq to NetdevVhostVDPAOptions
  vhost: move descriptor translation to vhost_svq_vring_write_descs
  vdpa: Fix index calculus at vhost_vdpa_svqs_start
  virtio-net: use g_memdup2() instead of unsafe g_memdup()
  virtio-net: Expose ctrl virtqueue logic
  vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs
  virtio: Make virtqueue_alloc_element non-static
  vhost: Add SVQElement
  vhost: Add custom used buffer callback
  vdpa: control virtqueue support on shadow virtqueue
  vhost: Add vhost_iova_tree_find
  vdpa: Add map/unmap operation callback to SVQ
  vhost: Add vhost_svq_inject
  vdpa: add NetClientState->start() callback
  vdpa: Add vhost_vdpa_start_control_svq
  vhost: Update kernel headers
  vdpa: Add asid attribute to vdpa device
  vdpa: Add x-cvq-svq

 qapi/net.json                                |  13 +-
 hw/virtio/vhost-iova-tree.h                  |   2 +
 hw/virtio/vhost-shadow-virtqueue.h           |  46 ++-
 include/hw/virtio/vhost-vdpa.h               |   5 +
 include/hw/virtio/virtio-net.h               |   3 +
 include/hw/virtio/virtio.h                   |   1 +
 include/net/net.h                            |   2 +
 include/qemu/iova-tree.h                     |   4 +-
 include/standard-headers/linux/vhost_types.h |  11 +-
 linux-headers/linux/vhost.h                  |  25 +-
 hw/net/vhost_net.c                           |   4 +
 hw/net/virtio-net.c                          |  82 +++--
 hw/virtio/vhost-iova-tree.c                  |  14 +
 hw/virtio/vhost-shadow-virtqueue.c           | 238 +++++++++---
 hw/virtio/vhost-vdpa.c                       |  70 +++-
 hw/virtio/virtio.c                           |   2 +-
 net/vhost-vdpa.c                             | 368 +++++++++++++++++--
 util/iova-tree.c                             |   4 +-
 18 files changed, 764 insertions(+), 130 deletions(-)

-- 
2.27.0





* [RFC PATCH v3 01/19] util: Return void on iova_tree_remove
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
@ 2022-03-30 18:30 ` Eugenio Pérez
  2022-03-30 18:30 ` [RFC PATCH v3 02/19] vdpa: Add x-svq to NetdevVhostVDPAOptions Eugenio Pérez
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

It always returns IOVA_OK, so no caller checks its return value.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/qemu/iova-tree.h | 4 +---
 util/iova-tree.c         | 4 +---
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/include/qemu/iova-tree.h b/include/qemu/iova-tree.h
index c938fb0793..16bbfdf5f8 100644
--- a/include/qemu/iova-tree.h
+++ b/include/qemu/iova-tree.h
@@ -72,10 +72,8 @@ int iova_tree_insert(IOVATree *tree, const DMAMap *map);
  * provided.  The range does not need to be exactly what has inserted,
  * all the mappings that are included in the provided range will be
  * removed from the tree.  Here map->translated_addr is meaningless.
- *
- * Return: 0 if succeeded, or <0 if error.
  */
-int iova_tree_remove(IOVATree *tree, const DMAMap *map);
+void iova_tree_remove(IOVATree *tree, const DMAMap *map);
 
 /**
  * iova_tree_find:
diff --git a/util/iova-tree.c b/util/iova-tree.c
index 6dff29c1f6..fee530a579 100644
--- a/util/iova-tree.c
+++ b/util/iova-tree.c
@@ -164,15 +164,13 @@ void iova_tree_foreach(IOVATree *tree, iova_tree_iterator iterator)
     g_tree_foreach(tree->tree, iova_tree_traverse, iterator);
 }
 
-int iova_tree_remove(IOVATree *tree, const DMAMap *map)
+void iova_tree_remove(IOVATree *tree, const DMAMap *map)
 {
     const DMAMap *overlap;
 
     while ((overlap = iova_tree_find(tree, map))) {
         g_tree_remove(tree->tree, overlap);
     }
-
-    return IOVA_OK;
 }
 
 /**
-- 
2.27.0




* [RFC PATCH v3 02/19] vdpa: Add x-svq to NetdevVhostVDPAOptions
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
  2022-03-30 18:30 ` [RFC PATCH v3 01/19] util: Return void on iova_tree_remove Eugenio Pérez
@ 2022-03-30 18:30 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 03/19] vhost: move descriptor translation to vhost_svq_vring_write_descs Eugenio Pérez
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

Finally offer the possibility to enable SVQ from the command line.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
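Note: with this patch SVQ can be enabled for all the virtqueues with
something like (the device node is just an example):

    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0,x-svq=on
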
 qapi/net.json    |  9 ++++++++-
 net/vhost-vdpa.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 48 insertions(+), 9 deletions(-)

diff --git a/qapi/net.json b/qapi/net.json
index 7fab2e7cd8..6a5460ce56 100644
--- a/qapi/net.json
+++ b/qapi/net.json
@@ -445,12 +445,19 @@
 # @queues: number of queues to be created for multiqueue vhost-vdpa
 #          (default: 1)
 #
+# @x-svq: Start device with (experimental) shadow virtqueue. (Since 7.1)
+#         (default: false)
+#
+# Features:
+# @unstable: Member @x-svq is experimental.
+#
 # Since: 5.1
 ##
 { 'struct': 'NetdevVhostVDPAOptions',
   'data': {
     '*vhostdev':     'str',
-    '*queues':       'int' } }
+    '*queues':       'int',
+    '*x-svq':        {'type': 'bool', 'features' : [ 'unstable'] } } }
 
 ##
 # @NetClientDriver:
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 1e9fe47c03..def738998b 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -127,7 +127,11 @@ err_init:
 static void vhost_vdpa_cleanup(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+    struct vhost_dev *dev = s->vhost_vdpa.dev;
 
+    if (dev && dev->vq_index + dev->nvqs == dev->vq_index_end) {
+        g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
+    }
     if (s->vhost_net) {
         vhost_net_cleanup(s->vhost_net);
         g_free(s->vhost_net);
@@ -187,13 +191,23 @@ static NetClientInfo net_vhost_vdpa_info = {
         .check_peer_type = vhost_vdpa_check_peer_type,
 };
 
+static int vhost_vdpa_get_iova_range(int fd,
+                                     struct vhost_vdpa_iova_range *iova_range)
+{
+    int ret = ioctl(fd, VHOST_VDPA_GET_IOVA_RANGE, iova_range);
+
+    return ret < 0 ? -errno : 0;
+}
+
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
-                                           const char *device,
-                                           const char *name,
-                                           int vdpa_device_fd,
-                                           int queue_pair_index,
-                                           int nvqs,
-                                           bool is_datapath)
+                                       const char *device,
+                                       const char *name,
+                                       int vdpa_device_fd,
+                                       int queue_pair_index,
+                                       int nvqs,
+                                       bool is_datapath,
+                                       bool svq,
+                                       VhostIOVATree *iova_tree)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
@@ -211,6 +225,8 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
 
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
+    s->vhost_vdpa.shadow_vqs_enabled = svq;
+    s->vhost_vdpa.iova_tree = iova_tree;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
         qemu_del_net_client(nc);
@@ -266,6 +282,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     g_autofree NetClientState **ncs = NULL;
     NetClientState *nc;
     int queue_pairs, i, has_cvq = 0;
+    g_autoptr(VhostIOVATree) iova_tree = NULL;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -285,29 +302,44 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         qemu_close(vdpa_device_fd);
         return queue_pairs;
     }
+    if (opts->x_svq) {
+        struct vhost_vdpa_iova_range iova_range;
+
+        if (has_cvq) {
+            error_setg(errp, "vdpa svq does not work with cvq");
+            goto err_svq;
+        }
+        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
+        iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
+    }
 
     ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
 
     for (i = 0; i < queue_pairs; i++) {
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                     vdpa_device_fd, i, 2, true);
+                                     vdpa_device_fd, i, 2, true, opts->x_svq,
+                                     iova_tree);
         if (!ncs[i])
             goto err;
     }
 
     if (has_cvq) {
         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                 vdpa_device_fd, i, 1, false);
+                                 vdpa_device_fd, i, 1, false, opts->x_svq,
+                                 iova_tree);
         if (!nc)
             goto err;
     }
 
+    iova_tree = NULL;
     return 0;
 
 err:
     if (i) {
         qemu_del_net_client(ncs[0]);
     }
+
+err_svq:
     qemu_close(vdpa_device_fd);
 
     return -1;
-- 
2.27.0




* [RFC PATCH v3 03/19] vhost: move descriptor translation to vhost_svq_vring_write_descs
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
  2022-03-30 18:30 ` [RFC PATCH v3 01/19] util: Return void on iova_tree_remove Eugenio Pérez
  2022-03-30 18:30 ` [RFC PATCH v3 02/19] vdpa: Add x-svq to NetdevVhostVDPAOptions Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 04/19] vdpa: Fix index calculus at vhost_vdpa_svqs_start Eugenio Pérez
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

The address translation is done for both in and out descriptors, so it
is better placed here.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
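Note: for context, a rough sketch of the chaining the helper performs
over the split vring once the addresses are translated (not the literal
code):

    for (n = 0; n < num; n++) {
        descs[i].flags = (more_descs || n + 1 < num) ?
                         flags | cpu_to_le16(VRING_DESC_F_NEXT) : flags;
        descs[i].addr = cpu_to_le64(sg[n]);   /* translated address */
        descs[i].len = cpu_to_le32(iovec[n].iov_len);

        last = i;
        i = le16_to_cpu(descs[i].next);
    }
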
 hw/virtio/vhost-shadow-virtqueue.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index b232803d1b..349255525f 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -122,17 +122,23 @@ static bool vhost_svq_translate_addr(const VhostShadowVirtqueue *svq,
     return true;
 }
 
-static void vhost_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
-                                    const struct iovec *iovec, size_t num,
-                                    bool more_descs, bool write)
+static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
+                                        const struct iovec *iovec, size_t num,
+                                        bool more_descs, bool write)
 {
     uint16_t i = svq->free_head, last = svq->free_head;
     unsigned n;
     uint16_t flags = write ? cpu_to_le16(VRING_DESC_F_WRITE) : 0;
     vring_desc_t *descs = svq->vring.desc;
+    bool ok;
 
     if (num == 0) {
-        return;
+        return true;
+    }
+
+    ok = vhost_svq_translate_addr(svq, sg, iovec, num);
+    if (unlikely(!ok)) {
+        return false;
     }
 
     for (n = 0; n < num; n++) {
@@ -149,6 +155,7 @@ static void vhost_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
     }
 
     svq->free_head = le16_to_cpu(descs[last].next);
+    return true;
 }
 
 static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
@@ -168,21 +175,18 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
         return false;
     }
 
-    ok = vhost_svq_translate_addr(svq, sgs, elem->out_sg, elem->out_num);
+    ok = vhost_svq_vring_write_descs(svq, sgs, elem->out_sg, elem->out_num,
+                                     elem->in_num > 0, false);
     if (unlikely(!ok)) {
         return false;
     }
-    vhost_vring_write_descs(svq, sgs, elem->out_sg, elem->out_num,
-                            elem->in_num > 0, false);
 
-
-    ok = vhost_svq_translate_addr(svq, sgs, elem->in_sg, elem->in_num);
+    ok = vhost_svq_vring_write_descs(svq, sgs, elem->in_sg, elem->in_num, false,
+                                     true);
     if (unlikely(!ok)) {
         return false;
     }
 
-    vhost_vring_write_descs(svq, sgs, elem->in_sg, elem->in_num, false, true);
-
     /*
      * Put the entry in the available array (but don't update avail->idx until
      * they do sync).
-- 
2.27.0




* [RFC PATCH v3 04/19] vdpa: Fix index calculus at vhost_vdpa_svqs_start
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (2 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 03/19] vhost: move descriptor translation to vhost_svq_vring_write_descs Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 05/19] virtio-net: use g_memdup2() instead of unsafe g_memdup() Eugenio Pérez
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
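Note: the vring address has to be configured with the virtqueue index
relative to the whole device (dev->vq_index + i), not the index
relative to this vhost_dev; presumably, without this, any device after
the first would set up its rings at the wrong index.
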
 hw/virtio/vhost-vdpa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index c5ed7a3779..9eeac8fa8e 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1016,7 +1016,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
         VirtQueue *vq = virtio_get_queue(dev->vdev, dev->vq_index + i);
         VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
         struct vhost_vring_addr addr = {
-            .index = i,
+            .index = dev->vq_index + i,
         };
         int r;
         bool ok = vhost_vdpa_svq_setup(dev, svq, i, &err);
-- 
2.27.0




* [RFC PATCH v3 05/19] virtio-net: use g_memdup2() instead of unsafe g_memdup()
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (3 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 04/19] vdpa: Fix index calculus at vhost_vdpa_svqs_start Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 06/19] virtio-net: Expose ctrl virtqueue logic Eugenio Pérez
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

Fix this now, since checkpatch.pl will complain about the g_memdup()
call when we modify the file.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
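Note: for context, the relevant difference is the type of the size
argument, which g_memdup() can silently truncate on 64-bit hosts:

    gpointer g_memdup  (gconstpointer mem, guint byte_size);
    gpointer g_memdup2 (gconstpointer mem, gsize byte_size); /* GLib 2.68+ */
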
 hw/net/virtio-net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 1067e72b39..da05516a99 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1443,7 +1443,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
         }
 
         iov_cnt = elem->out_num;
-        iov2 = iov = g_memdup(elem->out_sg, sizeof(struct iovec) * elem->out_num);
+        iov2 = iov = g_memdup2(elem->out_sg, sizeof(struct iovec) * elem->out_num);
         s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
         iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
         if (s != sizeof(ctrl)) {
-- 
2.27.0




* [RFC PATCH v3 06/19] virtio-net: Expose ctrl virtqueue logic
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (4 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 05/19] virtio-net: use g_memdup2() instead of unsafe g_memdup() Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 07/19] vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs Eugenio Pérez
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

This allows external vhost-net devices to modify the state of the
VirtIO device model once the vhost-vdpa device has acknowledged the
control commands.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
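Note: a minimal sketch of how an external user can feed an intercepted
command back to the model (elem stands for any VirtQueueElement holding
the guest's command):

    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
    struct iovec in = {
        .iov_base = &status,
        .iov_len = sizeof(status),
    };

    virtio_net_handle_ctrl_iov(vdev, &in, 1, elem->out_sg, elem->out_num);
    /* status now holds the ack the device model wrote */
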
 include/hw/virtio/virtio-net.h |  3 ++
 hw/net/virtio-net.c            | 82 ++++++++++++++++++++--------------
 2 files changed, 51 insertions(+), 34 deletions(-)

diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index eb87032627..e62f9e227f 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -218,6 +218,9 @@ struct VirtIONet {
     struct EBPFRSSContext ebpf_rss;
 };
 
+unsigned virtio_net_handle_ctrl_iov(VirtIODevice *vdev,
+                                    const struct iovec *in_sg, size_t in_num,
+                                    struct iovec *out_sg, unsigned out_num);
 void virtio_net_set_netclient_name(VirtIONet *n, const char *name,
                                    const char *type);
 
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index da05516a99..5905a9285c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1419,56 +1419,70 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
     return VIRTIO_NET_OK;
 }
 
-static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+unsigned virtio_net_handle_ctrl_iov(VirtIODevice *vdev,
+                                    const struct iovec *in_sg, size_t in_num,
+                                    struct iovec *out_sg, unsigned out_num)
 {
     VirtIONet *n = VIRTIO_NET(vdev);
     struct virtio_net_ctrl_hdr ctrl;
     virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
-    VirtQueueElement *elem;
     size_t s;
     struct iovec *iov, *iov2;
-    unsigned int iov_cnt;
+
+    if (iov_size(in_sg, in_num) < sizeof(status) ||
+        iov_size(out_sg, out_num) < sizeof(ctrl)) {
+        virtio_error(vdev, "virtio-net ctrl missing headers");
+        return 0;
+    }
+
+    iov2 = iov = g_memdup2(out_sg, sizeof(struct iovec) * out_num);
+    s = iov_to_buf(iov, out_num, 0, &ctrl, sizeof(ctrl));
+    iov_discard_front(&iov, &out_num, sizeof(ctrl));
+    if (s != sizeof(ctrl)) {
+        status = VIRTIO_NET_ERR;
+    } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
+        status = virtio_net_handle_rx_mode(n, ctrl.cmd, iov, out_num);
+    } else if (ctrl.class == VIRTIO_NET_CTRL_MAC) {
+        status = virtio_net_handle_mac(n, ctrl.cmd, iov, out_num);
+    } else if (ctrl.class == VIRTIO_NET_CTRL_VLAN) {
+        status = virtio_net_handle_vlan_table(n, ctrl.cmd, iov, out_num);
+    } else if (ctrl.class == VIRTIO_NET_CTRL_ANNOUNCE) {
+        status = virtio_net_handle_announce(n, ctrl.cmd, iov, out_num);
+    } else if (ctrl.class == VIRTIO_NET_CTRL_MQ) {
+        status = virtio_net_handle_mq(n, ctrl.cmd, iov, out_num);
+    } else if (ctrl.class == VIRTIO_NET_CTRL_GUEST_OFFLOADS) {
+        status = virtio_net_handle_offloads(n, ctrl.cmd, iov, out_num);
+    }
+
+    s = iov_from_buf(in_sg, in_num, 0, &status, sizeof(status));
+    assert(s == sizeof(status));
+
+    g_free(iov2);
+    return sizeof(status);
+}
+
+static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtQueueElement *elem;
 
     for (;;) {
+        unsigned written;
         elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
         if (!elem) {
             break;
         }
-        if (iov_size(elem->in_sg, elem->in_num) < sizeof(status) ||
-            iov_size(elem->out_sg, elem->out_num) < sizeof(ctrl)) {
-            virtio_error(vdev, "virtio-net ctrl missing headers");
+
+        written = virtio_net_handle_ctrl_iov(vdev, elem->in_sg, elem->in_num,
+                                             elem->out_sg, elem->out_num);
+        if (written > 0) {
+            virtqueue_push(vq, elem, written);
+            virtio_notify(vdev, vq);
+            g_free(elem);
+        } else {
             virtqueue_detach_element(vq, elem, 0);
             g_free(elem);
             break;
         }
-
-        iov_cnt = elem->out_num;
-        iov2 = iov = g_memdup2(elem->out_sg, sizeof(struct iovec) * elem->out_num);
-        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
-        iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
-        if (s != sizeof(ctrl)) {
-            status = VIRTIO_NET_ERR;
-        } else if (ctrl.class == VIRTIO_NET_CTRL_RX) {
-            status = virtio_net_handle_rx_mode(n, ctrl.cmd, iov, iov_cnt);
-        } else if (ctrl.class == VIRTIO_NET_CTRL_MAC) {
-            status = virtio_net_handle_mac(n, ctrl.cmd, iov, iov_cnt);
-        } else if (ctrl.class == VIRTIO_NET_CTRL_VLAN) {
-            status = virtio_net_handle_vlan_table(n, ctrl.cmd, iov, iov_cnt);
-        } else if (ctrl.class == VIRTIO_NET_CTRL_ANNOUNCE) {
-            status = virtio_net_handle_announce(n, ctrl.cmd, iov, iov_cnt);
-        } else if (ctrl.class == VIRTIO_NET_CTRL_MQ) {
-            status = virtio_net_handle_mq(n, ctrl.cmd, iov, iov_cnt);
-        } else if (ctrl.class == VIRTIO_NET_CTRL_GUEST_OFFLOADS) {
-            status = virtio_net_handle_offloads(n, ctrl.cmd, iov, iov_cnt);
-        }
-
-        s = iov_from_buf(elem->in_sg, elem->in_num, 0, &status, sizeof(status));
-        assert(s == sizeof(status));
-
-        virtqueue_push(vq, elem, sizeof(status));
-        virtio_notify(vdev, vq);
-        g_free(iov2);
-        g_free(elem);
     }
 }
 
-- 
2.27.0




* [RFC PATCH v3 07/19] vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (5 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 06/19] virtio-net: Expose ctrl virtqueue logic Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 08/19] virtio: Make virtqueue_alloc_element non-static Eugenio Pérez
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

Knowing the device features is also needed for CVQ SVQ. Extract that
part from vhost_vdpa_get_max_queue_pairs so we can reuse it.

Report errno in case of failure to get them while we're at it.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index def738998b..290aa01e13 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -235,20 +235,24 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     return nc;
 }
 
-static int vhost_vdpa_get_max_queue_pairs(int fd, int *has_cvq, Error **errp)
+static int vhost_vdpa_get_features(int fd, uint64_t *features, Error **errp)
+{
+    int ret = ioctl(fd, VHOST_GET_FEATURES, features);
+    if (ret) {
+        error_setg_errno(errp, errno,
+                         "Fail to query features from vhost-vDPA device");
+    }
+    return ret;
+}
+
+static int vhost_vdpa_get_max_queue_pairs(int fd, uint64_t features,
+                                          int *has_cvq, Error **errp)
 {
     unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
     g_autofree struct vhost_vdpa_config *config = NULL;
     __virtio16 *max_queue_pairs;
-    uint64_t features;
     int ret;
 
-    ret = ioctl(fd, VHOST_GET_FEATURES, &features);
-    if (ret) {
-        error_setg(errp, "Fail to query features from vhost-vDPA device");
-        return ret;
-    }
-
     if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
         *has_cvq = 1;
     } else {
@@ -278,10 +282,11 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
+    uint64_t features;
     int vdpa_device_fd;
     g_autofree NetClientState **ncs = NULL;
     NetClientState *nc;
-    int queue_pairs, i, has_cvq = 0;
+    int queue_pairs, r, i, has_cvq = 0;
     g_autoptr(VhostIOVATree) iova_tree = NULL;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
@@ -296,7 +301,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -errno;
     }
 
-    queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd,
+    r = vhost_vdpa_get_features(vdpa_device_fd, &features, errp);
+    if (r) {
+        return r;
+    }
+
+    queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
                                                  &has_cvq, errp);
     if (queue_pairs < 0) {
         qemu_close(vdpa_device_fd);
-- 
2.27.0




* [RFC PATCH v3 08/19] virtio: Make virtqueue_alloc_element non-static
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (6 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 07/19] vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 09/19] vhost: Add SVQElement Eugenio Pérez
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

So SVQ can allocate elements using it.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
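Note: a minimal sketch of the intended use from SVQ (out_num and in_num
are placeholders):

    VirtQueueElement *elem = virtqueue_alloc_element(sizeof(*elem),
                                                     out_num, in_num);
    /* elem->out_sg / elem->in_sg are allocated and ready to be filled */
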
 include/hw/virtio/virtio.h | 1 +
 hw/virtio/virtio.c         | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index b31c4507f5..1e85833897 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -195,6 +195,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
                     unsigned int len, unsigned int idx);
 
 void virtqueue_map(VirtIODevice *vdev, VirtQueueElement *elem);
+void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
 void *virtqueue_pop(VirtQueue *vq, size_t sz);
 unsigned int virtqueue_drop_all(VirtQueue *vq);
 void *qemu_get_virtqueue_element(VirtIODevice *vdev, QEMUFile *f, size_t sz);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9d637e043e..17cbbb5fca 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1376,7 +1376,7 @@ void virtqueue_map(VirtIODevice *vdev, VirtQueueElement *elem)
                                                                         false);
 }
 
-static void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num)
+void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num)
 {
     VirtQueueElement *elem;
     size_t in_addr_ofs = QEMU_ALIGN_UP(sz, __alignof__(elem->in_addr[0]));
-- 
2.27.0




* [RFC PATCH v3 09/19] vhost: Add SVQElement
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (7 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 08/19] virtio: Make virtqueue_alloc_element non-static Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 10/19] vhost: Add custom used buffer callback Eugenio Pérez
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

This allows SVQ to add metadata to the different queue elements.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.h |  8 ++++--
 hw/virtio/vhost-shadow-virtqueue.c | 42 ++++++++++++++++--------------
 2 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index e5e24c536d..72aadb0aec 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -15,6 +15,10 @@
 #include "standard-headers/linux/vhost_types.h"
 #include "hw/virtio/vhost-iova-tree.h"
 
+typedef struct SVQElement {
+    VirtQueueElement elem;
+} SVQElement;
+
 /* Shadow virtqueue to relay notifications */
 typedef struct VhostShadowVirtqueue {
     /* Shadow vring */
@@ -48,10 +52,10 @@ typedef struct VhostShadowVirtqueue {
     VhostIOVATree *iova_tree;
 
     /* Map for use the guest's descriptors */
-    VirtQueueElement **ring_id_maps;
+    SVQElement **ring_id_maps;
 
     /* Next VirtQueue element that guest made available */
-    VirtQueueElement *next_guest_avail_elem;
+    SVQElement *next_guest_avail_elem;
 
     /* Next head to expose to the device */
     uint16_t shadow_avail_idx;
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 349255525f..37e80c5ee0 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -158,9 +158,10 @@ static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
     return true;
 }
 
-static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
-                                VirtQueueElement *elem, unsigned *head)
+static bool vhost_svq_add_split(VhostShadowVirtqueue *svq, SVQElement *svq_elem,
+                                unsigned *head)
 {
+    const VirtQueueElement *elem = &svq_elem->elem;
     unsigned avail_idx;
     vring_avail_t *avail = svq->vring.avail;
     bool ok;
@@ -202,7 +203,7 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
     return true;
 }
 
-static bool vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
+static bool vhost_svq_add(VhostShadowVirtqueue *svq, SVQElement *elem)
 {
     unsigned qemu_head;
     bool ok = vhost_svq_add_split(svq, elem, &qemu_head);
@@ -251,19 +252,21 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
         virtio_queue_set_notification(svq->vq, false);
 
         while (true) {
+            SVQElement *svq_elem;
             VirtQueueElement *elem;
             bool ok;
 
             if (svq->next_guest_avail_elem) {
-                elem = g_steal_pointer(&svq->next_guest_avail_elem);
+                svq_elem = g_steal_pointer(&svq->next_guest_avail_elem);
             } else {
-                elem = virtqueue_pop(svq->vq, sizeof(*elem));
+                svq_elem = virtqueue_pop(svq->vq, sizeof(*svq_elem));
             }
 
-            if (!elem) {
+            if (!svq_elem) {
                 break;
             }
 
+            elem = &svq_elem->elem;
             if (elem->out_num + elem->in_num > vhost_svq_available_slots(svq)) {
                 /*
                  * This condition is possible since a contiguous buffer in GPA
@@ -276,11 +279,11 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
                  * queue the current guest descriptor and ignore further kicks
                  * until some elements are used.
                  */
-                svq->next_guest_avail_elem = elem;
+                svq->next_guest_avail_elem = svq_elem;
                 return;
             }
 
-            ok = vhost_svq_add(svq, elem);
+            ok = vhost_svq_add(svq, svq_elem);
             if (unlikely(!ok)) {
                 /* VQ is broken, just return and ignore any other kicks */
                 return;
@@ -337,8 +340,7 @@ static void vhost_svq_disable_notification(VhostShadowVirtqueue *svq)
     svq->vring.avail->flags |= cpu_to_le16(VRING_AVAIL_F_NO_INTERRUPT);
 }
 
-static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
-                                           uint32_t *len)
+static SVQElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq, uint32_t *len)
 {
     vring_desc_t *descs = svq->vring.desc;
     const vring_used_t *used = svq->vring.used;
@@ -388,11 +390,13 @@ static void vhost_svq_flush(VhostShadowVirtqueue *svq,
         vhost_svq_disable_notification(svq);
         while (true) {
             uint32_t len;
-            g_autofree VirtQueueElement *elem = vhost_svq_get_buf(svq, &len);
-            if (!elem) {
+            g_autofree SVQElement *svq_elem = vhost_svq_get_buf(svq, &len);
+            VirtQueueElement *elem;
+            if (!svq_elem) {
                 break;
             }
 
+            elem = &svq_elem->elem;
             if (unlikely(i >= svq->vring.num)) {
                 qemu_log_mask(LOG_GUEST_ERROR,
                          "More than %u used buffers obtained in a %u size SVQ",
@@ -543,7 +547,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
     memset(svq->vring.desc, 0, driver_size);
     svq->vring.used = qemu_memalign(qemu_real_host_page_size, device_size);
     memset(svq->vring.used, 0, device_size);
-    svq->ring_id_maps = g_new0(VirtQueueElement *, svq->vring.num);
+    svq->ring_id_maps = g_new0(SVQElement *, svq->vring.num);
     for (unsigned i = 0; i < svq->vring.num - 1; i++) {
         svq->vring.desc[i].next = cpu_to_le16(i + 1);
     }
@@ -556,7 +560,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
 void vhost_svq_stop(VhostShadowVirtqueue *svq)
 {
     event_notifier_set_handler(&svq->svq_kick, NULL);
-    g_autofree VirtQueueElement *next_avail_elem = NULL;
+    g_autofree SVQElement *next_avail_elem = NULL;
 
     if (!svq->vq) {
         return;
@@ -566,16 +570,16 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
     vhost_svq_flush(svq, false);
 
     for (unsigned i = 0; i < svq->vring.num; ++i) {
-        g_autofree VirtQueueElement *elem = NULL;
-        elem = g_steal_pointer(&svq->ring_id_maps[i]);
-        if (elem) {
-            virtqueue_detach_element(svq->vq, elem, 0);
+        g_autofree SVQElement *svq_elem = NULL;
+        svq_elem = g_steal_pointer(&svq->ring_id_maps[i]);
+        if (svq_elem) {
+            virtqueue_detach_element(svq->vq, &svq_elem->elem, 0);
         }
     }
 
     next_avail_elem = g_steal_pointer(&svq->next_guest_avail_elem);
     if (next_avail_elem) {
-        virtqueue_detach_element(svq->vq, next_avail_elem, 0);
+        virtqueue_detach_element(svq->vq, &next_avail_elem->elem, 0);
     }
     svq->vq = NULL;
     g_free(svq->ring_id_maps);
-- 
2.27.0




* [RFC PATCH v3 10/19] vhost: Add custom used buffer callback
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (8 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 09/19] vhost: Add SVQElement Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 11/19] vdpa: control virtqueue support on shadow virtqueue Eugenio Pérez
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

The callback allows SVQ users to know the VirtQueue requests and
responses. QEMU can use this to synchronize the virtio device model
state, allowing it to be migrated with minimal changes to the migration
code.

In the case of networking, this will be used to inspect control
virtqueue messages.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
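Note: a minimal sketch of how a user of SVQ is expected to provide the
hook (my_used_handler and my_svq_ops are placeholder names):

    static void my_used_handler(VirtIODevice *vdev,
                                const VirtQueueElement *elem)
    {
        /* inspect elem->out_sg / elem->in_sg after the device used them */
    }

    static const VhostShadowVirtqueueOps my_svq_ops = {
        .used_elem_handler = my_used_handler,
    };

    /* later, at SVQ creation: */
    svq = vhost_svq_new(iova_tree, &my_svq_ops);
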
 hw/virtio/vhost-shadow-virtqueue.h | 16 +++++++++++++++-
 include/hw/virtio/vhost-vdpa.h     |  2 ++
 hw/virtio/vhost-shadow-virtqueue.c |  9 ++++++++-
 hw/virtio/vhost-vdpa.c             |  3 ++-
 4 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index 72aadb0aec..4ff6a0cda0 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -19,6 +19,13 @@ typedef struct SVQElement {
     VirtQueueElement elem;
 } SVQElement;
 
+typedef void (*VirtQueueElementCallback)(VirtIODevice *vdev,
+                                         const VirtQueueElement *elem);
+
+typedef struct VhostShadowVirtqueueOps {
+    VirtQueueElementCallback used_elem_handler;
+} VhostShadowVirtqueueOps;
+
 /* Shadow virtqueue to relay notifications */
 typedef struct VhostShadowVirtqueue {
     /* Shadow vring */
@@ -57,6 +64,12 @@ typedef struct VhostShadowVirtqueue {
     /* Next VirtQueue element that guest made available */
     SVQElement *next_guest_avail_elem;
 
+    /* Optional callbacks */
+    const VhostShadowVirtqueueOps *ops;
+
+    /* Optional custom used virtqueue element handler */
+    VirtQueueElementCallback used_elem_cb;
+
     /* Next head to expose to the device */
     uint16_t shadow_avail_idx;
 
@@ -83,7 +96,8 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
                      VirtQueue *vq);
 void vhost_svq_stop(VhostShadowVirtqueue *svq);
 
-VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree);
+VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
+                                    const VhostShadowVirtqueueOps *ops);
 
 void vhost_svq_free(gpointer vq);
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(VhostShadowVirtqueue, vhost_svq_free);
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index a29dbb3f53..f1ba46a860 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -17,6 +17,7 @@
 #include "hw/virtio/vhost-iova-tree.h"
 #include "hw/virtio/virtio.h"
 #include "standard-headers/linux/vhost_types.h"
+#include "hw/virtio/vhost-shadow-virtqueue.h"
 
 typedef struct VhostVDPAHostNotifier {
     MemoryRegion mr;
@@ -35,6 +36,7 @@ typedef struct vhost_vdpa {
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
     GPtrArray *shadow_vqs;
+    const VhostShadowVirtqueueOps *shadow_vq_ops;
     struct vhost_dev *dev;
     VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
 } VhostVDPA;
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 37e80c5ee0..112d0daf20 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -406,6 +406,10 @@ static void vhost_svq_flush(VhostShadowVirtqueue *svq,
                 return;
             }
             virtqueue_fill(vq, elem, len, i++);
+
+            if (svq->ops && svq->ops->used_elem_handler) {
+                svq->ops->used_elem_handler(svq->vdev, elem);
+            }
         }
 
         virtqueue_flush(vq, i);
@@ -592,12 +596,14 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
  * shadow methods and file descriptors.
  *
  * @iova_tree: Tree to perform descriptors translations
+ * @ops: SVQ operations hooks
  *
  * Returns the new virtqueue or NULL.
  *
  * In case of error, reason is reported through error_report.
  */
-VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree)
+VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
+                                    const VhostShadowVirtqueueOps *ops)
 {
     g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
     int r;
@@ -619,6 +625,7 @@ VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree)
     event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
     event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
     svq->iova_tree = iova_tree;
+    svq->ops = ops;
     return g_steal_pointer(&svq);
 
 err_init_hdev_call:
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 9eeac8fa8e..ebd17b6185 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -409,7 +409,8 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
 
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
-        g_autoptr(VhostShadowVirtqueue) svq = vhost_svq_new(v->iova_tree);
+        g_autoptr(VhostShadowVirtqueue) svq = vhost_svq_new(v->iova_tree,
+                                                            v->shadow_vq_ops);
 
         if (unlikely(!svq)) {
             error_setg(errp, "Cannot create svq %u", n);
-- 
2.27.0




* [RFC PATCH v3 11/19] vdpa: control virtqueue support on shadow virtqueue
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (9 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 10/19] vhost: Add custom used buffer callback Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 12/19] vhost: Add vhost_iova_tree_find Eugenio Pérez
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

Introduce control virtqueue support for the vDPA shadow virtqueue.
This is needed for advanced networking features like multiqueue.

To demonstrate command handling, VIRTIO_NET_F_CTRL_MAC_ADDR is
implemented. If the vDPA device is started with SVQ support and the
virtio-net driver changes the MAC, the virtio-net device model will be
updated with the new one.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 70 +++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 67 insertions(+), 3 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 290aa01e13..585d2f60f8 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -11,6 +11,7 @@
 
 #include "qemu/osdep.h"
 #include "clients.h"
+#include "hw/virtio/virtio-net.h"
 #include "net/vhost_net.h"
 #include "net/vhost-vdpa.h"
 #include "hw/virtio/vhost-vdpa.h"
@@ -69,6 +70,29 @@ const int vdpa_feature_bits[] = {
     VHOST_INVALID_FEATURE_BIT
 };
 
+/** Supported device specific feature bits with SVQ */
+static const uint64_t vdpa_svq_device_features =
+    BIT_ULL(VIRTIO_NET_F_CSUM) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_CSUM) |
+    BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS) |
+    BIT_ULL(VIRTIO_NET_F_MTU) |
+    BIT_ULL(VIRTIO_NET_F_MAC) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_TSO4) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_TSO6) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_ECN) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_UFO) |
+    BIT_ULL(VIRTIO_NET_F_HOST_TSO4) |
+    BIT_ULL(VIRTIO_NET_F_HOST_TSO6) |
+    BIT_ULL(VIRTIO_NET_F_HOST_ECN) |
+    BIT_ULL(VIRTIO_NET_F_HOST_UFO) |
+    BIT_ULL(VIRTIO_NET_F_MRG_RXBUF) |
+    BIT_ULL(VIRTIO_NET_F_STATUS) |
+    BIT_ULL(VIRTIO_NET_F_CTRL_VQ) |
+    BIT_ULL(VIRTIO_F_ANY_LAYOUT) |
+    BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR) |
+    BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
+    BIT_ULL(VIRTIO_NET_F_STANDBY);
+
 VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -199,6 +223,37 @@ static int vhost_vdpa_get_iova_range(int fd,
     return ret < 0 ? -errno : 0;
 }
 
+static void vhost_vdpa_net_handle_ctrl(VirtIODevice *vdev,
+                                       const VirtQueueElement *elem)
+{
+    struct virtio_net_ctrl_hdr ctrl;
+    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
+    size_t s;
+    struct iovec in = {
+        .iov_base = &status,
+        .iov_len = sizeof(status),
+    };
+
+    s = iov_to_buf(elem->out_sg, elem->out_num, 0, &ctrl, sizeof(ctrl.class));
+    if (s != sizeof(ctrl.class) || ctrl.class != VIRTIO_NET_CTRL_MAC) {
+        return;
+    }
+    s = iov_to_buf(elem->in_sg, elem->in_num, 0, &status, sizeof(status));
+    if (s != sizeof(status) || status != VIRTIO_NET_OK) {
+        return;
+    }
+
+    status = VIRTIO_NET_ERR;
+    virtio_net_handle_ctrl_iov(vdev, &in, 1, elem->out_sg, elem->out_num);
+    if (status != VIRTIO_NET_OK) {
+        error_report("Bad CVQ processing in model");
+    }
+}
+
+static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
+    .used_elem_handler = vhost_vdpa_net_handle_ctrl,
+};
+
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                        const char *device,
                                        const char *name,
@@ -226,6 +281,9 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
+    if (!is_datapath) {
+        s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
+    }
     s->vhost_vdpa.iova_tree = iova_tree;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
@@ -314,9 +372,15 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     }
     if (opts->x_svq) {
         struct vhost_vdpa_iova_range iova_range;
-
-        if (has_cvq) {
-            error_setg(errp, "vdpa svq does not work with cvq");
+        uint64_t invalid_dev_features =
+            features & ~vdpa_svq_device_features &
+            /* Transport features are all accepted at this point */
+            ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
+                             VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
+
+        if (invalid_dev_features) {
+            error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
+                       invalid_dev_features);
             goto err_svq;
         }
         vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
-- 
2.27.0




* [RFC PATCH v3 12/19] vhost: Add vhost_iova_tree_find
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (10 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 11/19] vdpa: control virtqueue support on shadow virtqueue Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 13/19] vdpa: Add map/unmap operation callback to SVQ Eugenio Pérez
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

Just a simple wrapper so we can find DMAMap entries based on iova.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
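Note: unlike vhost_iova_tree_find_iova(), which searches by the
translated address, this one searches by the iova itself. A sketch of
the intended lookup (iova and size are placeholders):

    const DMAMap needle = {
        .iova = iova,
        .size = size,
    };
    const DMAMap *map = vhost_iova_tree_find(iova_tree, &needle);
    /* if found, map->translated_addr is the qemu vaddr backing it */
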
 hw/virtio/vhost-iova-tree.h |  2 ++
 hw/virtio/vhost-iova-tree.c | 14 ++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/hw/virtio/vhost-iova-tree.h b/hw/virtio/vhost-iova-tree.h
index 6a4f24e0f9..1ffcdc5b57 100644
--- a/hw/virtio/vhost-iova-tree.h
+++ b/hw/virtio/vhost-iova-tree.h
@@ -19,6 +19,8 @@ VhostIOVATree *vhost_iova_tree_new(uint64_t iova_first, uint64_t iova_last);
 void vhost_iova_tree_delete(VhostIOVATree *iova_tree);
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(VhostIOVATree, vhost_iova_tree_delete);
 
+const DMAMap *vhost_iova_tree_find(const VhostIOVATree *iova_tree,
+                                   const DMAMap *map);
 const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *iova_tree,
                                         const DMAMap *map);
 int vhost_iova_tree_map_alloc(VhostIOVATree *iova_tree, DMAMap *map);
diff --git a/hw/virtio/vhost-iova-tree.c b/hw/virtio/vhost-iova-tree.c
index 55fed1fefb..7d4e8ac499 100644
--- a/hw/virtio/vhost-iova-tree.c
+++ b/hw/virtio/vhost-iova-tree.c
@@ -56,6 +56,20 @@ void vhost_iova_tree_delete(VhostIOVATree *iova_tree)
     g_free(iova_tree);
 }
 
+/**
+ * Find a mapping in the tree that matches map
+ *
+ * @iova_tree  The iova tree
+ * @map        The map
+ *
+ * Return a matching map that contains argument map or NULL
+ */
+const DMAMap *vhost_iova_tree_find(const VhostIOVATree *iova_tree,
+                                   const DMAMap *map)
+{
+    return iova_tree_find(iova_tree->iova_taddr_map, map);
+}
+
 /**
  * Find the IOVA address stored from a memory address
  *
-- 
2.27.0




* [RFC PATCH v3 13/19] vdpa: Add map/unmap operation callback to SVQ
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (11 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 12/19] vhost: Add vhost_iova_tree_find Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 14/19] vhost: Add vhost_svq_inject Eugenio Pérez
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
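Note: a sketch of how SVQ is expected to call these hooks when mapping
a buffer it owns (iova, size and buf are placeholders):

    if (svq->map_ops && svq->map_ops->map) {
        int r = svq->map_ops->map(iova, size, buf, /* readonly */ false,
                                  svq->map_ops_opaque);
        if (unlikely(r != 0)) {
            /* the vhost-vdpa DMA map failed */
        }
    }
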
 hw/virtio/vhost-shadow-virtqueue.h | 21 +++++++++++++++++++--
 hw/virtio/vhost-shadow-virtqueue.c |  8 +++++++-
 hw/virtio/vhost-vdpa.c             | 20 +++++++++++++++++++-
 3 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index 4ff6a0cda0..6e61d9bfef 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -26,6 +26,15 @@ typedef struct VhostShadowVirtqueueOps {
     VirtQueueElementCallback used_elem_handler;
 } VhostShadowVirtqueueOps;
 
+typedef int (*vhost_svq_map_op)(hwaddr iova, hwaddr size, void *vaddr,
+                                bool readonly, void *opaque);
+typedef int (*vhost_svq_unmap_op)(hwaddr iova, hwaddr size, void *opaque);
+
+typedef struct VhostShadowVirtqueueMapOps {
+    vhost_svq_map_op map;
+    vhost_svq_unmap_op unmap;
+} VhostShadowVirtqueueMapOps;
+
 /* Shadow virtqueue to relay notifications */
 typedef struct VhostShadowVirtqueue {
     /* Shadow vring */
@@ -67,6 +76,12 @@ typedef struct VhostShadowVirtqueue {
     /* Optional callbacks */
     const VhostShadowVirtqueueOps *ops;
 
+    /* Device memory mapping callbacks */
+    const VhostShadowVirtqueueMapOps *map_ops;
+
+    /* Device memory mapping callbacks opaque */
+    void *map_ops_opaque;
+
     /* Optional custom used virtqueue element handler */
     VirtQueueElementCallback used_elem_cb;
 
@@ -96,8 +111,10 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
                      VirtQueue *vq);
 void vhost_svq_stop(VhostShadowVirtqueue *svq);
 
-VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
-                                    const VhostShadowVirtqueueOps *ops);
+VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_map,
+                                    const VhostShadowVirtqueueOps *ops,
+                                    const VhostShadowVirtqueueMapOps *map_ops,
+                                    void *map_ops_opaque);
 
 void vhost_svq_free(gpointer vq);
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(VhostShadowVirtqueue, vhost_svq_free);
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 112d0daf20..714c820698 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -597,13 +597,17 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
  *
  * @iova_tree: Tree to perform descriptors translations
  * @ops: SVQ operations hooks
+ * @map_ops: SVQ mapping operation hooks
+ * @map_ops_opaque: Opaque data to pass to mapping operations
  *
  * Returns the new virtqueue or NULL.
  *
  * In case of error, reason is reported through error_report.
  */
 VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
-                                    const VhostShadowVirtqueueOps *ops)
+                                    const VhostShadowVirtqueueOps *ops,
+                                    const VhostShadowVirtqueueMapOps *map_ops,
+                                    void *map_ops_opaque)
 {
     g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
     int r;
@@ -626,6 +630,8 @@ VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
     event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
     svq->iova_tree = iova_tree;
     svq->ops = ops;
+    svq->map_ops = map_ops;
+    svq->map_ops_opaque = map_ops_opaque;
     return g_steal_pointer(&svq);
 
 err_init_hdev_call:
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index ebd17b6185..600d006d6e 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -383,6 +383,22 @@ static int vhost_vdpa_get_dev_features(struct vhost_dev *dev,
     return ret;
 }
 
+static int vhost_vdpa_svq_map(hwaddr iova, hwaddr size, void *vaddr,
+                              bool readonly, void *opaque)
+{
+    return vhost_vdpa_dma_map(opaque, iova, size, vaddr, readonly);
+}
+
+static int vhost_vdpa_svq_unmap(hwaddr iova, hwaddr size, void *opaque)
+{
+    return vhost_vdpa_dma_unmap(opaque, iova, size);
+}
+
+static const VhostShadowVirtqueueMapOps vhost_vdpa_svq_map_ops = {
+    .map = vhost_vdpa_svq_map,
+    .unmap = vhost_vdpa_svq_unmap,
+};
+
 static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
                                Error **errp)
 {
@@ -410,7 +426,9 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
         g_autoptr(VhostShadowVirtqueue) svq = vhost_svq_new(v->iova_tree,
-                                                            v->shadow_vq_ops);
+                                                       v->shadow_vq_ops,
+                                                       &vhost_vdpa_svq_map_ops,
+                                                       v);
 
         if (unlikely(!svq)) {
             error_setg(errp, "Cannot create svq %u", n);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC PATCH v3 14/19] vhost: Add vhost_svq_inject
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (12 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 13/19] vdpa: Add map/unmap operation callback to SVQ Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 15/19] vdpa: add NetClientState->start() callback Eugenio Pérez
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

This allows qemu to inject buffers into the device without the guest's
notice.

This will be used to inject net CVQ messages to restore the device state
in the destination host.

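For illustration, a minimal caller could look like the sketch below
(cmd and ack are placeholder buffers; device-readable entries come first
in the iovec array, and device-writable entries may pass iov_base ==
NULL, since the injected copy is allocated internally):

    struct iovec sg[] = {
        { .iov_base = &cmd, .iov_len = sizeof(cmd) }, /* device-readable */
        { .iov_base = NULL, .iov_len = sizeof(ack) }, /* device-writable */
    };

    if (!vhost_svq_inject(svq, sg, 1, 1)) {
        /* SVQ full or a guest element pending: nothing was injected */
    }
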
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.h |   5 +
 hw/virtio/vhost-shadow-virtqueue.c | 179 +++++++++++++++++++++++++----
 2 files changed, 160 insertions(+), 24 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index 6e61d9bfef..d82a64d566 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -17,6 +17,9 @@
 
 typedef struct SVQElement {
     VirtQueueElement elem;
+    hwaddr in_iova;
+    hwaddr out_iova;
+    bool not_from_guest;
 } SVQElement;
 
 typedef void (*VirtQueueElementCallback)(VirtIODevice *vdev,
@@ -100,6 +103,8 @@ typedef struct VhostShadowVirtqueue {
 
 bool vhost_svq_valid_features(uint64_t features, Error **errp);
 
+bool vhost_svq_inject(VhostShadowVirtqueue *svq, const struct iovec *iov,
+                      size_t out_num, size_t in_num);
 void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd);
 void vhost_svq_set_svq_call_fd(VhostShadowVirtqueue *svq, int call_fd);
 void vhost_svq_get_vring_addr(const VhostShadowVirtqueue *svq,
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 714c820698..dc2f194e24 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -16,6 +16,7 @@
 #include "qemu/log.h"
 #include "qemu/memalign.h"
 #include "linux-headers/linux/vhost.h"
+#include "qemu/iov.h"
 
 /**
  * Validate the transport device features that both guests can use with the SVQ
@@ -122,7 +123,8 @@ static bool vhost_svq_translate_addr(const VhostShadowVirtqueue *svq,
     return true;
 }
 
-static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
+static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq,
+                                        SVQElement *svq_elem, hwaddr *sg,
                                         const struct iovec *iovec, size_t num,
                                         bool more_descs, bool write)
 {
@@ -130,15 +132,39 @@ static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
     unsigned n;
     uint16_t flags = write ? cpu_to_le16(VRING_DESC_F_WRITE) : 0;
     vring_desc_t *descs = svq->vring.desc;
-    bool ok;
 
     if (num == 0) {
         return true;
     }
 
-    ok = vhost_svq_translate_addr(svq, sg, iovec, num);
-    if (unlikely(!ok)) {
-        return false;
+    if (svq_elem->not_from_guest) {
+        DMAMap map = {
+            .translated_addr = (hwaddr)iovec->iov_base,
+            .size = ROUND_UP(iovec->iov_len, 4096) - 1,
+            .perm = write ? IOMMU_RW : IOMMU_RO,
+        };
+        int r;
+
+        if (unlikely(num != 1)) {
+            error_report("Unexpected chain of elements injected");
+            return false;
+        }
+        r = vhost_iova_tree_map_alloc(svq->iova_tree, &map);
+        if (unlikely(r != IOVA_OK)) {
+            error_report("Cannot map injected element");
+            return false;
+        }
+
+        r = svq->map_ops->map(map.iova, map.size + 1,
+                              (void *)map.translated_addr, !write,
+                              svq->map_ops_opaque);
+        assert(r == 0);
+        sg[0] = map.iova;
+    } else {
+        bool ok = vhost_svq_translate_addr(svq, sg, iovec, num);
+        if (unlikely(!ok)) {
+            return false;
+        }
     }
 
     for (n = 0; n < num; n++) {
@@ -165,7 +191,8 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq, SVQElement *svq_elem,
     unsigned avail_idx;
     vring_avail_t *avail = svq->vring.avail;
     bool ok;
-    g_autofree hwaddr *sgs = g_new(hwaddr, MAX(elem->out_num, elem->in_num));
+    g_autofree hwaddr *sgs = NULL;
+    hwaddr *in_sgs, *out_sgs;
 
     *head = svq->free_head;
 
@@ -176,15 +203,23 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq, SVQElement *svq_elem,
         return false;
     }
 
-    ok = vhost_svq_vring_write_descs(svq, sgs, elem->out_sg, elem->out_num,
-                                     elem->in_num > 0, false);
+    if (!svq_elem->not_from_guest) {
+        sgs = g_new(hwaddr, MAX(elem->out_num, elem->in_num));
+        in_sgs = out_sgs = sgs;
+    } else {
+        in_sgs = &svq_elem->in_iova;
+        out_sgs = &svq_elem->out_iova;
+    }
+    ok = vhost_svq_vring_write_descs(svq, svq_elem, out_sgs, elem->out_sg,
+                                     elem->out_num, elem->in_num > 0, false);
     if (unlikely(!ok)) {
         return false;
     }
 
-    ok = vhost_svq_vring_write_descs(svq, sgs, elem->in_sg, elem->in_num, false,
-                                     true);
+    ok = vhost_svq_vring_write_descs(svq, svq_elem, in_sgs, elem->in_sg,
+                                     elem->in_num, false, true);
     if (unlikely(!ok)) {
+        /* TODO unwind out_sg */
         return false;
     }
 
@@ -229,6 +264,43 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
     event_notifier_set(&svq->hdev_kick);
 }
 
+bool vhost_svq_inject(VhostShadowVirtqueue *svq, const struct iovec *iov,
+                      size_t out_num, size_t in_num)
+{
+    size_t out_size = iov_size(iov, out_num);
+    size_t out_buf_size = ROUND_UP(out_size, 4096);
+    size_t in_size = iov_size(iov + out_num, in_num);
+    size_t in_buf_size = ROUND_UP(in_size, 4096);
+    SVQElement *svq_elem;
+    uint16_t num_slots = (in_num ? 1 : 0) + (out_num ? 1 : 0);
+
+    if (unlikely(num_slots == 0 || svq->next_guest_avail_elem ||
+                 vhost_svq_available_slots(svq) < num_slots)) {
+        return false;
+    }
+
+    svq_elem = virtqueue_alloc_element(sizeof(SVQElement), 1, 1);
+    if (out_num) {
+        void *out = qemu_memalign(4096, out_buf_size);
+        svq_elem->elem.out_sg[0].iov_base = out;
+        svq_elem->elem.out_sg[0].iov_len = out_size;
+        iov_to_buf(iov, out_num, 0, out, out_size);
+        memset(out + out_size, 0, out_buf_size - out_size);
+    }
+    if (in_num) {
+        void *in = qemu_memalign(4096, in_buf_size);
+        svq_elem->elem.in_sg[0].iov_base = in;
+        svq_elem->elem.in_sg[0].iov_len = in_size;
+        memset(in, 0, in_buf_size);
+    }
+
+    svq_elem->not_from_guest = true;
+    vhost_svq_add(svq, svq_elem);
+    vhost_svq_kick(svq);
+
+    return true;
+}
+
 /**
  * Forward available buffers.
  *
@@ -266,6 +338,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
                 break;
             }
 
+            svq_elem->not_from_guest = false;
             elem = &svq_elem->elem;
             if (elem->out_num + elem->in_num > vhost_svq_available_slots(svq)) {
                 /*
@@ -378,6 +451,31 @@ static SVQElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq, uint32_t *len)
     return g_steal_pointer(&svq->ring_id_maps[used_elem.id]);
 }
 
+static int vhost_svq_unmap(VhostShadowVirtqueue *svq, hwaddr iova, size_t size)
+{
+    DMAMap needle = {
+        .iova = iova,
+        .size = size,
+    };
+    const DMAMap *overlap;
+
+    while ((overlap = vhost_iova_tree_find(svq->iova_tree, &needle))) {
+        DMAMap needle = *overlap;
+
+        if (svq->map_ops->unmap) {
+            int r = svq->map_ops->unmap(overlap->iova, overlap->size + 1,
+                                        svq->map_ops_opaque);
+            if (unlikely(r != 0)) {
+                return r;
+            }
+        }
+        qemu_vfree((void *)overlap->translated_addr);
+        vhost_iova_tree_remove(svq->iova_tree, &needle);
+    }
+
+    return 0;
+}
+
 static void vhost_svq_flush(VhostShadowVirtqueue *svq,
                             bool check_for_avail_queue)
 {
@@ -397,23 +495,56 @@ static void vhost_svq_flush(VhostShadowVirtqueue *svq,
             }
 
             elem = &svq_elem->elem;
-            if (unlikely(i >= svq->vring.num)) {
-                qemu_log_mask(LOG_GUEST_ERROR,
-                         "More than %u used buffers obtained in a %u size SVQ",
-                         i, svq->vring.num);
-                virtqueue_fill(vq, elem, len, i);
-                virtqueue_flush(vq, i);
-                return;
-            }
-            virtqueue_fill(vq, elem, len, i++);
-
             if (svq->ops && svq->ops->used_elem_handler) {
                 svq->ops->used_elem_handler(svq->vdev, elem);
             }
+
+            if (svq_elem->not_from_guest) {
+                if (unlikely(elem->out_num > 1)) {
+                    error_report("Unexpected out_num > 1");
+                    return;
+                }
+
+                if (elem->out_num) {
+                    int r = vhost_svq_unmap(svq, svq_elem->out_iova,
+                                            elem->out_sg[0].iov_len);
+                    if (unlikely(r != 0)) {
+                        error_report("Cannot unmap out buffer");
+                        return;
+                    }
+                }
+
+                if (unlikely(elem->in_num > 1)) {
+                    error_report("Unexpected in_num > 1");
+                    return;
+                }
+
+                if (elem->in_num) {
+                    int r = vhost_svq_unmap(svq, svq_elem->in_iova,
+                                            elem->in_sg[0].iov_len);
+                    if (unlikely(r != 0)) {
+                        error_report("Cannot unmap in buffer");
+                        return;
+                    }
+                }
+            } else {
+                if (unlikely(i >= svq->vring.num)) {
+                    qemu_log_mask(
+                        LOG_GUEST_ERROR,
+                        "More than %u used buffers obtained in a %u size SVQ",
+                        i, svq->vring.num);
+                    virtqueue_fill(vq, elem, len, i);
+                    virtqueue_flush(vq, i);
+                    return;
+                }
+                virtqueue_fill(vq, elem, len, i++);
+            }
         }
 
-        virtqueue_flush(vq, i);
-        event_notifier_set(&svq->svq_call);
+        if (i > 0) {
+            virtqueue_flush(vq, i);
+            event_notifier_set(&svq->svq_call);
+        }
 
         if (check_for_avail_queue && svq->next_guest_avail_elem) {
             /*
@@ -576,13 +707,13 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
     for (unsigned i = 0; i < svq->vring.num; ++i) {
         g_autofree SVQElement *svq_elem = NULL;
         svq_elem = g_steal_pointer(&svq->ring_id_maps[i]);
-        if (svq_elem) {
+        if (svq_elem && !svq_elem->not_from_guest) {
             virtqueue_detach_element(svq->vq, &svq_elem->elem, 0);
         }
     }
 
     next_avail_elem = g_steal_pointer(&svq->next_guest_avail_elem);
-    if (next_avail_elem) {
+    if (next_avail_elem && !next_avail_elem->not_from_guest) {
         virtqueue_detach_element(svq->vq, &next_avail_elem->elem, 0);
     }
     svq->vq = NULL;
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC PATCH v3 15/19] vdpa: add NetClientState->start() callback
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (13 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 14/19] vhost: Add vhost_svq_inject Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 16/19] vdpa: Add vhost_vdpa_start_control_svq Eugenio Pérez
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

It allows injecting custom code on successful device start, right before
the lock is released.

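As a sketch, a client opts in by filling the new hook (all names here
are hypothetical):

    static void example_start(NetClientState *nc)
    {
        /* Runs once vhost_net_start_one() has succeeded for this client */
    }

    static NetClientInfo example_info = {
        .type  = NET_CLIENT_DRIVER_VHOST_VDPA,
        .size  = sizeof(NetClientState),
        .start = example_start,
    };
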
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/net/net.h  | 2 ++
 hw/net/vhost_net.c | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 523136c7ac..2fc3002ab4 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -44,6 +44,7 @@ typedef struct NICConf {
 
 typedef void (NetPoll)(NetClientState *, bool enable);
 typedef bool (NetCanReceive)(NetClientState *);
+typedef void (NetStart)(NetClientState *);
 typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
 typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
 typedef void (NetCleanup) (NetClientState *);
@@ -71,6 +72,7 @@ typedef struct NetClientInfo {
     NetReceive *receive_raw;
     NetReceiveIOV *receive_iov;
     NetCanReceive *can_receive;
+    NetStart *start;
     NetCleanup *cleanup;
     LinkStatusChanged *link_status_changed;
     QueryRxFilter *query_rx_filter;
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 30379d2ca4..44a105ec29 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -274,6 +274,10 @@ static int vhost_net_start_one(struct vhost_net *net,
             }
         }
     }
+
+    if (net->nc->info->start) {
+        net->nc->info->start(net->nc);
+    }
     return 0;
 fail:
     file.fd = -1;
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC PATCH v3 16/19] vdpa: Add vhost_vdpa_start_control_svq
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (14 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 15/19] vdpa: add NetClientState->start() callback Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 17/19] vhost: Update kernel headers Eugenio Pérez
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

This will send CVQ commands in the destination machine, setting up
everything so there is no guest-visible change.

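For reference, the injected buffer follows the virtio-net control
virtqueue layout from the spec: a device-readable header plus
command-specific data, then one device-writable status byte (sketch):

    struct virtio_net_ctrl_hdr {  /* device-readable, first */
        uint8_t class;            /* here: VIRTIO_NET_CTRL_MAC */
        uint8_t cmd;              /* here: VIRTIO_NET_CTRL_MAC_ADDR_SET */
    };
    /* then the command data: the 6 MAC bytes (device-readable) */
    /* last, one device-writable virtio_net_ctrl_ack byte: OK=0 / ERR=1 */
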
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 585d2f60f8..6dc0ae8614 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -205,10 +205,73 @@ static ssize_t vhost_vdpa_receive(NetClientState *nc, const uint8_t *buf,
     return 0;
 }
 
+static bool vhost_vdpa_start_control_svq(VhostShadowVirtqueue *svq,
+                                         VirtIODevice *vdev)
+{
+    VirtIONet *n = VIRTIO_NET(vdev);
+    uint64_t features = vdev->host_features;
+
+    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
+        const struct virtio_net_ctrl_hdr ctrl = {
+            .class = VIRTIO_NET_CTRL_MAC,
+            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
+        };
+        uint8_t mac[6];
+        const struct iovec data[] = {
+            {
+                .iov_base = (void *)&ctrl,
+                .iov_len = sizeof(ctrl),
+            },{
+                .iov_base = mac,
+                .iov_len = sizeof(mac),
+            },{
+                .iov_base = NULL,
+                .iov_len = sizeof(virtio_net_ctrl_ack),
+            }
+        };
+        bool ret;
+
+        /* TODO: Only best effort? */
+        memcpy(mac, n->mac, sizeof(mac));
+        ret = vhost_svq_inject(svq, data, 2, 1);
+        if (!ret) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
+static void vhost_vdpa_start(NetClientState *nc)
+{
+    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+    struct vhost_vdpa *v = &s->vhost_vdpa;
+    struct vhost_dev *dev = &s->vhost_net->dev;
+    VhostShadowVirtqueue *svq;
+
+    if (nc->is_datapath) {
+        /* This is not the cvq dev */
+        return;
+    }
+
+    if (dev->vq_index + dev->nvqs != dev->vq_index_end) {
+        return;
+    }
+
+    if (!v->shadow_vqs_enabled) {
+        return;
+    }
+
+    svq = g_ptr_array_index(v->shadow_vqs, 0);
+    vhost_vdpa_start_control_svq(svq, dev->vdev);
+}
+
 static NetClientInfo net_vhost_vdpa_info = {
         .type = NET_CLIENT_DRIVER_VHOST_VDPA,
         .size = sizeof(VhostVDPAState),
         .receive = vhost_vdpa_receive,
+        .start = vhost_vdpa_start,
         .cleanup = vhost_vdpa_cleanup,
         .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
         .has_ufo = vhost_vdpa_has_ufo,
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC PATCH v3 17/19] vhost: Update kernel headers
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (15 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 16/19] vdpa: Add vhost_vdpa_start_control_svq Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 18/19] vdpa: Add asid attribute to vdpa device Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 19/19] vdpa: Add x-cvq-svq Eugenio Pérez
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

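Pull in the kernel ASID additions: struct vhost_msg_v2 turns its
reserved field into asid, the backend feature defines move to
vhost_types.h gaining VHOST_BACKEND_F_IOTLB_ASID, and new ioctls are
added to query the number of virtqueue groups and address spaces, get
the group of a virtqueue, and assign an ASID to a group.

For illustration, a userspace consumer of the new ioctls could proceed
as in this minimal sketch (error handling elided; the fd must already
own the device via VHOST_SET_OWNER):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    static int isolate_vq_group(const char *path, unsigned int vq_index)
    {
        int fd = open(path, O_RDWR);
        unsigned int groups, as_num;
        struct vhost_vring_state state = { .index = vq_index };

        ioctl(fd, VHOST_VDPA_GET_GROUP_NUM, &groups); /* # of vq groups */
        ioctl(fd, VHOST_VDPA_GET_AS_NUM, &as_num);    /* # of address spaces */
        ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &state); /* state.num = group */

        /* Move that virtqueue's group to address space 1 */
        state.index = state.num;
        state.num = 1;
        return ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &state);
    }
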
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/standard-headers/linux/vhost_types.h | 11 ++++++++-
 linux-headers/linux/vhost.h                  | 25 ++++++++++++++++----
 2 files changed, 30 insertions(+), 6 deletions(-)

diff --git a/include/standard-headers/linux/vhost_types.h b/include/standard-headers/linux/vhost_types.h
index 0bd2684a2a..ce78551b0f 100644
--- a/include/standard-headers/linux/vhost_types.h
+++ b/include/standard-headers/linux/vhost_types.h
@@ -87,7 +87,7 @@ struct vhost_msg {
 
 struct vhost_msg_v2 {
 	uint32_t type;
-	uint32_t reserved;
+	uint32_t asid;
 	union {
 		struct vhost_iotlb_msg iotlb;
 		uint8_t padding[64];
@@ -153,4 +153,13 @@ struct vhost_vdpa_iova_range {
 /* vhost-net should add virtio_net_hdr for RX, and strip for TX packets. */
 #define VHOST_NET_F_VIRTIO_NET_HDR 27
 
+/* Use message type V2 */
+#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1
+/* IOTLB can accept batching hints */
+#define VHOST_BACKEND_F_IOTLB_BATCH  0x2
+/* IOTLB can accept address space identifier through V2 type of IOTLB
+ * message
+ */
+#define VHOST_BACKEND_F_IOTLB_ASID  0x3
+
 #endif
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
index c998860d7b..5e083490f1 100644
--- a/linux-headers/linux/vhost.h
+++ b/linux-headers/linux/vhost.h
@@ -89,11 +89,6 @@
 
 /* Set or get vhost backend capability */
 
-/* Use message type V2 */
-#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1
-/* IOTLB can accept batching hints */
-#define VHOST_BACKEND_F_IOTLB_BATCH  0x2
-
 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
 
@@ -150,4 +145,24 @@
 /* Get the valid iova range */
 #define VHOST_VDPA_GET_IOVA_RANGE	_IOR(VHOST_VIRTIO, 0x78, \
 					     struct vhost_vdpa_iova_range)
+/* Get the number of virtqueue groups. */
+#define VHOST_VDPA_GET_GROUP_NUM	_IOR(VHOST_VIRTIO, 0x79, unsigned int)
+
+/* Get the number of address spaces. */
+#define VHOST_VDPA_GET_AS_NUM		_IOR(VHOST_VIRTIO, 0x7A, unsigned int)
+
+/* Get the group for a virtqueue: read index, write group in num,
+ * The virtqueue index is stored in the index field of
+ * vhost_vring_state. The group for this specific virtqueue is
+ * returned via num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_GET_VRING_GROUP	_IOWR(VHOST_VIRTIO, 0x7B,	\
+					      struct vhost_vring_state)
+/* Set the ASID for a virtqueue group. The group index is stored in
+ * the index field of vhost_vring_state, the ASID associated with this
+ * group is stored at num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_SET_GROUP_ASID	_IOW(VHOST_VIRTIO, 0x7C, \
+					     struct vhost_vring_state)
+
 #endif
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC PATCH v3 18/19] vdpa: Add asid attribute to vdpa device
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (16 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 17/19] vhost: Update kernel headers Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  2022-03-30 18:31 ` [RFC PATCH v3 19/19] vdpa: Add x-cvq-svq Eugenio Pérez
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

We can now configure an ASID per virtqueue group, but we still use asid 0
for every vdpa device. Multiple ASID support for the cvq will be
introduced in the next patch.

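As a sketch (assuming VHOST_BACKEND_F_IOTLB_ASID has been negotiated),
a mapping that targets a non-default address space only needs the new
field filled in the V2 iotlb message:

    static int map_in_asid(int fd, uint32_t asid, uint64_t iova,
                           uint64_t size, void *vaddr)
    {
        struct vhost_msg_v2 msg = {
            .type = VHOST_IOTLB_MSG_V2,
            .asid = asid,
            .iotlb = {
                .iova  = iova,
                .size  = size,
                .uaddr = (uint64_t)(uintptr_t)vaddr,
                .perm  = VHOST_ACCESS_RW,
                .type  = VHOST_IOTLB_UPDATE,
            },
        };

        return write(fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -errno;
    }
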
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |  3 +++
 hw/virtio/vhost-vdpa.c         | 47 ++++++++++++++++++++++++----------
 net/vhost-vdpa.c               | 10 ++++++--
 3 files changed, 45 insertions(+), 15 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index f1ba46a860..921edbf77b 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -28,10 +28,13 @@ typedef struct vhost_vdpa {
     int device_fd;
     int index;
     uint32_t msg_type;
+    uint32_t asid;
     bool iotlb_batch_begin_sent;
     MemoryListener listener;
     struct vhost_vdpa_iova_range iova_range;
     uint64_t acked_features;
+    /* one past the last vq index of this virtqueue group */
+    int vq_group_index_end;
     bool shadow_vqs_enabled;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 600d006d6e..bd06662cee 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -79,6 +79,9 @@ static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
     int ret = 0;
 
     msg.type = v->msg_type;
+    if (v->dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)) {
+        msg.asid = v->asid;
+    }
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
@@ -104,6 +107,9 @@ static int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova,
     int fd = v->device_fd;
     int ret = 0;
 
+    if (v->dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)) {
+        msg.asid = v->asid;
+    }
     msg.type = v->msg_type;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
@@ -129,6 +135,10 @@ static void vhost_vdpa_listener_begin_batch(struct vhost_vdpa *v)
         .iotlb.type = VHOST_IOTLB_BATCH_BEGIN,
     };
 
+    if (v->dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)) {
+        msg.asid = v->asid;
+    }
+
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
                      fd, errno, strerror(errno));
@@ -161,6 +171,9 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
     }
 
     msg.type = v->msg_type;
+    if (dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)) {
+        msg.asid = v->asid;
+    }
     msg.iotlb.type = VHOST_IOTLB_BATCH_END;
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
@@ -675,7 +688,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
 {
     uint64_t features;
     uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
-        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
+        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
+        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
     int r;
 
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
@@ -1098,7 +1112,9 @@ static bool vhost_vdpa_svqs_stop(struct vhost_dev *dev)
 static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
 {
     struct vhost_vdpa *v = dev->opaque;
-    bool ok;
+    bool vq_group_end, ok;
+    int r = 0;
+
     trace_vhost_vdpa_dev_start(dev, started);
 
     if (started) {
@@ -1116,21 +1132,26 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
     }
 
-    if (dev->vq_index + dev->nvqs != dev->vq_index_end) {
-        return 0;
+    vq_group_end = dev->vq_index + dev->nvqs == v->vq_group_index_end;
+    if (vq_group_end && started) {
+        memory_listener_register(&v->listener, &address_space_memory);
     }
 
-    if (started) {
-        memory_listener_register(&v->listener, &address_space_memory);
-        return vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
-    } else {
-        vhost_vdpa_reset_device(dev);
-        vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
-                                   VIRTIO_CONFIG_S_DRIVER);
-        memory_listener_unregister(&v->listener);
+    if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
+        if (started) {
+            r = vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
+        } else {
+            vhost_vdpa_reset_device(dev);
+            vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
+                                       VIRTIO_CONFIG_S_DRIVER);
+        }
+    }
 
-        return 0;
+    if (vq_group_end && !started) {
+        memory_listener_unregister(&v->listener);
     }
+
+    return r;
 }
 
 static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 6dc0ae8614..fae9a43b86 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -323,6 +323,8 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                        int vdpa_device_fd,
                                        int queue_pair_index,
                                        int nvqs,
+                                       uint32_t asid,
+                                       int vq_group_end,
                                        bool is_datapath,
                                        bool svq,
                                        VhostIOVATree *iova_tree)
@@ -344,6 +346,8 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
+    s->vhost_vdpa.vq_group_index_end = vq_group_end;
+    s->vhost_vdpa.asid = asid;
     if (!is_datapath) {
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
     }
@@ -454,7 +458,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
 
     for (i = 0; i < queue_pairs; i++) {
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                     vdpa_device_fd, i, 2, true, opts->x_svq,
+                                     vdpa_device_fd, i, 2, 0,
+                                     queue_pairs + has_cvq, true, opts->x_svq,
                                      iova_tree);
         if (!ncs[i])
             goto err;
@@ -462,7 +467,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
 
     if (has_cvq) {
         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                 vdpa_device_fd, i, 1, false, opts->x_svq,
+                                 vdpa_device_fd, i, 1, 0,
+                                 queue_pairs + has_cvq, false, opts->x_svq,
                                  iova_tree);
         if (!nc)
             goto err;
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC PATCH v3 19/19] vdpa: Add x-cvq-svq
  2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
                   ` (17 preceding siblings ...)
  2022-03-30 18:31 ` [RFC PATCH v3 18/19] vdpa: Add asid attribute to vdpa device Eugenio Pérez
@ 2022-03-30 18:31 ` Eugenio Pérez
  18 siblings, 0 replies; 20+ messages in thread
From: Eugenio Pérez @ 2022-03-30 18:31 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Jason Wang, Cornelia Huck, Markus Armbruster, Gautam Dawar,
	Harpreet Singh Anand, Peter Xu, Eli Cohen, Paolo Bonzini,
	Zhu Lingshan, Eric Blake, Liuxiangdong

This isolates the shadow cvq in its own virtqueue group.

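With x-cvq-svq=on, qemu first checks that the device offers the
IOTLB_ASID backend feature and at least two address spaces, then
verifies that no data virtqueue shares a group with the cvq, and
finally moves the cvq group to asid 1 via VHOST_VDPA_SET_GROUP_ASID.
The cvq also gets a dedicated IOVA tree, so the buffers qemu injects
through the shadow cvq live in an address space the guest-visible data
virtqueues cannot reach.
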
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 qapi/net.json    |   8 ++-
 net/vhost-vdpa.c | 179 +++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 171 insertions(+), 16 deletions(-)

diff --git a/qapi/net.json b/qapi/net.json
index 6a5460ce56..d54a137581 100644
--- a/qapi/net.json
+++ b/qapi/net.json
@@ -447,9 +447,12 @@
 #
 # @x-svq: Start device with (experimental) shadow virtqueue. (Since 7.1)
 #         (default: false)
+# @x-cvq-svq: Start device with (experimental) shadow virtqueue in its own
+#             virtqueue group. (Since 7.1)
+#             (default: false)
 #
 # Features:
-# @unstable: Member @x-svq is experimental.
+# @unstable: Members @x-svq and @x-cvq-svq are experimental.
 #
 # Since: 5.1
 ##
@@ -457,7 +460,8 @@
   'data': {
     '*vhostdev':     'str',
     '*queues':       'int',
-    '*x-svq':        {'type': 'bool', 'features' : [ 'unstable'] } } }
+    '*x-svq':        {'type': 'bool', 'features' : [ 'unstable'] },
+    '*x-cvq-svq':    {'type': 'bool', 'features' : [ 'unstable'] } } }
 
 ##
 # @NetClientDriver:
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index fae9a43b86..13767e6d3c 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -354,10 +354,13 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.iova_tree = iova_tree;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
-        qemu_del_net_client(nc);
-        return NULL;
+        goto err;
     }
     return nc;
+
+err:
+    qemu_del_net_client(nc);
+    return NULL;
 }
 
 static int vhost_vdpa_get_features(int fd, uint64_t *features, Error **errp)
@@ -370,6 +373,17 @@ static int vhost_vdpa_get_features(int fd, uint64_t *features, Error **errp)
     return ret;
 }
 
+static int vhost_vdpa_get_backend_features(int fd, uint64_t *features,
+                                           Error **errp)
+{
+    int ret = ioctl(fd, VHOST_GET_BACKEND_FEATURES, features);
+    if (ret) {
+        error_setg_errno(errp, errno,
+            "Fail to query backend features from vhost-vDPA device");
+    }
+    return ret;
+}
+
 static int vhost_vdpa_get_max_queue_pairs(int fd, uint64_t features,
                                           int *has_cvq, Error **errp)
 {
@@ -403,16 +417,112 @@ static int vhost_vdpa_get_max_queue_pairs(int fd, uint64_t features,
     return 1;
 }
 
+/**
+ * Check that the vdpa device supports a separate asid for the CVQ group
+ *
+ * @vdpa_device_fd: Vdpa device fd
+ * @queue_pairs: Queue pairs
+ * @errp: Error
+ */
+static int vhost_vdpa_check_cvq_svq(int vdpa_device_fd, int queue_pairs,
+                                    Error **errp)
+{
+    uint64_t backend_features;
+    unsigned num_as;
+    int r;
+
+    r = vhost_vdpa_get_backend_features(vdpa_device_fd, &backend_features,
+                                        errp);
+    if (unlikely(r)) {
+        return -1;
+    }
+
+    if (unlikely(!(backend_features & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)))) {
+        error_setg(errp, "Device without IOTLB_ASID feature");
+        return -1;
+    }
+
+    r = ioctl(vdpa_device_fd, VHOST_VDPA_GET_AS_NUM, &num_as);
+    if (unlikely(r)) {
+        error_setg_errno(errp, errno,
+                         "Cannot retrieve number of supported ASs");
+        return -1;
+    }
+    if (unlikely(num_as < 2)) {
+        error_setg(errp, "Insufficient number of ASs (%u, min: 2)", num_as);
+        return -1;
+    }
+
+    return 0;
+}
+
+/**
+ * Check if CVQ lives in an isolated group.
+ *
+ * Note that vdpa QEMU needs to be the owner of vdpa device (in other words, to
+ * have called VHOST_SET_OWNER) for this to succeed.
+ *
+ * @vdpa_device_fd: vdpa device fd
+ * @vq_index: vq index to start asking for group
+ * @nvq: Number of vqs to check
+ * @cvq_device_index: cvq device index
+ * @cvq_group: cvq group
+ * @errp: Error
+ */
+static bool vhost_vdpa_is_cvq_isolated_group(int vdpa_device_fd,
+                                           unsigned vq_index,
+                                           unsigned nvq,
+                                           unsigned cvq_device_index,
+                                           struct vhost_vring_state *cvq_group,
+                                           Error **errp)
+{
+    int r;
+
+    if (cvq_group->index == 0) {
+        cvq_group->index = cvq_device_index;
+        r = ioctl(vdpa_device_fd, VHOST_VDPA_GET_VRING_GROUP, cvq_group);
+        if (unlikely(r)) {
+            error_setg_errno(errp, errno,
+                             "Cannot get control vq index %d group",
+                             cvq_group->index);
+            return false;
+        }
+    }
+
+    for (int k = vq_index; k < vq_index + nvq; ++k) {
+        struct vhost_vring_state s = {
+            .index = k,
+        };
+
+        r = ioctl(vdpa_device_fd, VHOST_VDPA_GET_VRING_GROUP, &s);
+        if (unlikely(r)) {
+            error_setg_errno(errp, errno, "Cannot get vq %d group", k);
+            return false;
+        }
+
+        if (unlikely(s.num == cvq_group->num)) {
+            error_setg(errp, "Data virtqueue %d has the same group as cvq (%d)",
+                       k, s.num);
+            return false;
+        }
+    }
+
+    return true;
+}
+
 int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
+    struct vhost_vdpa_iova_range iova_range;
+    struct vhost_vring_state cvq_group = {};
     uint64_t features;
     int vdpa_device_fd;
     g_autofree NetClientState **ncs = NULL;
     NetClientState *nc;
     int queue_pairs, r, i, has_cvq = 0;
     g_autoptr(VhostIOVATree) iova_tree = NULL;
+    g_autoptr(VhostIOVATree) cvq_iova_tree = NULL;
+    ERRP_GUARD();
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -437,8 +547,9 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         qemu_close(vdpa_device_fd);
         return queue_pairs;
     }
-    if (opts->x_svq) {
-        struct vhost_vdpa_iova_range iova_range;
+    if (opts->x_cvq_svq || opts->x_svq) {
+        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
+
         uint64_t invalid_dev_features =
             features & ~vdpa_svq_device_features &
             /* Transport are all accepted at this point */
@@ -448,9 +559,25 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         if (invalid_dev_features) {
             error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
                        invalid_dev_features);
-            goto err_svq;
+            goto err_svq_features;
         }
-        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
+    }
+
+    if (opts->x_cvq_svq) {
+        if (!has_cvq) {
+            error_setg(errp, "Cannot use x-cvq-svq with a device without cvq");
+            goto err_cvq_svq;
+        }
+
+        r = vhost_vdpa_check_cvq_svq(vdpa_device_fd, queue_pairs, errp);
+        if (unlikely(r)) {
+            error_prepend(errp, "Cannot configure CVQ SVQ: ");
+            goto err_cvq_svq;
+        }
+
+        cvq_iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
+    }
+    if (opts->x_svq) {
         iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
     }
 
@@ -458,31 +585,55 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
 
     for (i = 0; i < queue_pairs; i++) {
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                     vdpa_device_fd, i, 2, 0,
-                                     queue_pairs + has_cvq, true, opts->x_svq,
-                                     iova_tree);
+                                     vdpa_device_fd, i, 2, 0, 2 * queue_pairs,
+                                     true, opts->x_svq, iova_tree);
         if (!ncs[i])
             goto err;
+
+        if (opts->x_cvq_svq &&
+            !vhost_vdpa_is_cvq_isolated_group(vdpa_device_fd, i * 2, 2,
+                                              queue_pairs * 2, &cvq_group,
+                                              errp)) {
+            goto err_cvq_svq;
+        }
     }
 
     if (has_cvq) {
-        nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                 vdpa_device_fd, i, 1, 0,
-                                 queue_pairs + has_cvq, false, opts->x_svq,
-                                 iova_tree);
+        nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd,
+                                 i, 1, !!opts->x_cvq_svq,
+                                 2 * queue_pairs + 1, false,
+                                 opts->x_cvq_svq || opts->x_svq,
+                                 cvq_iova_tree);
         if (!nc)
             goto err;
+
+        if (opts->x_cvq_svq) {
+            struct vhost_vring_state asid = {
+                .index = 1,
+                .num = 1,
+            };
+
+            r = ioctl(vdpa_device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
+            if (unlikely(r)) {
+                error_setg_errno(errp, errno,
+                                 "Cannot set cvq group independent asid");
+                goto err;
+            }
+        }
+
+        cvq_iova_tree = NULL;
     }
 
     iova_tree = NULL;
     return 0;
 
 err:
+err_cvq_svq:
     if (i) {
         qemu_del_net_client(ncs[0]);
     }
 
-err_svq:
+err_svq_features:
     qemu_close(vdpa_device_fd);
 
     return -1;
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2022-03-30 19:10 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-30 18:30 [RFC PATCH v3 00/19] Net Control VQ support with asid in vDPA SVQ Eugenio Pérez
2022-03-30 18:30 ` [RFC PATCH v3 01/19] util: Return void on iova_tree_remove Eugenio Pérez
2022-03-30 18:30 ` [RFC PATCH v3 02/19] vdpa: Add x-svq to NetdevVhostVDPAOptions Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 03/19] vhost: move descriptor translation to vhost_svq_vring_write_descs Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 04/19] vdpa: Fix index calculus at vhost_vdpa_svqs_start Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 05/19] virtio-net: use g_memdup2() instead of unsafe g_memdup() Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 06/19] virtio-net: Expose ctrl virtqueue logic Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 07/19] vdpa: Extract get geatures part from vhost_vdpa_get_max_queue_pairs Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 08/19] virtio: Make virtqueue_alloc_element non-static Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 09/19] vhost: Add SVQElement Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 10/19] vhost: Add custom used buffer callback Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 11/19] vdpa: control virtqueue support on shadow virtqueue Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 12/19] vhost: Add vhost_iova_tree_find Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 13/19] vdpa: Add map/unmap operation callback to SVQ Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 14/19] vhost: Add vhost_svq_inject Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 15/19] vdpa: add NetClientState->start() callback Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 16/19] vdpa: Add vhost_vdpa_start_control_svq Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 17/19] vhost: Update kernel headers Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 18/19] vdpa: Add asid attribute to vdpa device Eugenio Pérez
2022-03-30 18:31 ` [RFC PATCH v3 19/19] vdpa: Add x-cvq-svq Eugenio Pérez
