* [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ
@ 2022-08-04 18:28 Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 01/12] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
                   ` (11 more replies)
  0 siblings, 12 replies; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

CVQ of net vhost-vdpa devices can be intercepted since the work of [1], and
the virtio-net device model is updated accordingly. Migration was still
blocked, however, because although the state could be migrated between VMMs,
it was not possible to restore it on the destination NIC.

This series adds support for SVQ to inject external messages without the
guest's knowledge, so all the guest-visible state is restored before the guest
is resumed. It is done using standard CVQ messages, so the vhost-vdpa device
does not need to learn how to restore the state: as long as it has the
feature, it knows how to handle it.

This series needs fix [1] to be applied to achieve full live
migration.

Thanks!

[1] https://lists.nongnu.org/archive/html/qemu-devel/2022-08/msg00325.html

v7:
- Remove accidental double free.

v6:
- Move map and unmap of the buffers to the start and stop of the device. This
  implies more callbacks on NetClientInfo, but simplifies the SVQ CVQ code.
- Do not assume that the in buffer is sizeof(virtio_net_ctrl_ack) in
  vhost_vdpa_net_cvq_add
- Reduce the number of changes from previous versions
- Delete unused memory barrier

v5:
- Rename s/start/load/
- Use independent NetClientInfo to only add load callback on cvq.
- Accept out sg instead of dev_buffers[] at vhost_vdpa_net_cvq_map_elem
- Use only the out size instead of the iovec dev_buffers to know if the
  descriptor is effectively available, allowing deletion of the artificial
  !NULL VirtQueueElement in the vhost_svq_add call.

v4:
- Actually use NetClientInfo callback.

v3:
- Route vhost-vdpa start code through NetClientInfo callback.
- Delete extra vhost_net_stop_one() call.

v2:
- Fix SIGSEGV dereferencing SVQ when not in svq mode

v1 from RFC:
- Do not reorder DRIVER_OK & enable patches.
- Delete leftovers

Eugenio Pérez (12):
  vhost: stop transfer elem ownership in vhost_handle_guest_kick
  vhost: use SVQ element ndescs instead of opaque data for desc
    validation
  vhost: Delete useless read memory barrier
  vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush
  vhost_net: Add NetClientInfo prepare callback
  vhost_net: Add NetClientInfo stop callback
  vdpa: add net_vhost_vdpa_cvq_info NetClientInfo
  vdpa: Move command buffers map to start of net device
  vdpa: Extract vhost_vdpa_net_cvq_add from
    vhost_vdpa_net_handle_ctrl_avail
  vhost_net: add NetClientState->load() callback
  vdpa: Add virtio-net mac address via CVQ at start
  vdpa: Delete CVQ migration blocker

 include/hw/virtio/vhost-vdpa.h     |   1 -
 include/net/net.h                  |   6 +
 hw/net/vhost_net.c                 |  17 +++
 hw/virtio/vhost-shadow-virtqueue.c |  27 ++--
 hw/virtio/vhost-vdpa.c             |  14 --
 net/vhost-vdpa.c                   | 227 ++++++++++++++++++-----------
 6 files changed, 180 insertions(+), 112 deletions(-)

-- 
2.31.1




^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH v7 01/12] vhost: stop transfer elem ownership in vhost_handle_guest_kick
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-05  3:48   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 02/12] vhost: use SVQ element ndescs instead of opaque data for desc validation Eugenio Pérez
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

It was easier to let vhost_svq_add handle the element's memory. Now that we
will allow QEMU to add elements to an SVQ without the guest's knowledge,
it's better to handle it in the caller.
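The new ownership rule can be sketched in plain C (a simplified model: the real code uses g_autofree and g_steal_pointer() from GLib, and the names below are illustrative, not the actual QEMU API):

```c
#include <assert.h>
#include <stdlib.h>

struct elem { int id; };

/* Stands in for svq->next_guest_avail_elem. */
static struct elem *queued;

/* On failure the caller still owns the element; on success the SVQ does. */
static int svq_add(struct elem *e, int queue_full)
{
    if (queue_full) {
        return -1;       /* caller keeps ownership and must free */
    }
    queued = e;          /* SVQ owns the element from here on */
    return 0;
}

static void guest_kick(int queue_full)
{
    struct elem *e = malloc(sizeof(*e));

    if (svq_add(e, queue_full) < 0) {
        free(e);         /* failure path: the caller frees... */
        return;
    }
    e = NULL;            /* ...and forgets the pointer on success */
}
```

With g_autofree, setting the local pointer to NULL (or stealing it) is what prevents the automatic free from firing after ownership has been handed off.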

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e4956728dd..ffd2b2c972 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -233,9 +233,6 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
 /**
  * Add an element to a SVQ.
  *
- * The caller must check that there is enough slots for the new element. It
- * takes ownership of the element: In case of failure not ENOSPC, it is free.
- *
  * Return -EINVAL if element is invalid, -ENOSPC if dev queue is full
  */
 int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
@@ -252,7 +249,6 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
 
     ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
     if (unlikely(!ok)) {
-        g_free(elem);
         return -EINVAL;
     }
 
@@ -293,7 +289,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
         virtio_queue_set_notification(svq->vq, false);
 
         while (true) {
-            VirtQueueElement *elem;
+            g_autofree VirtQueueElement *elem;
             int r;
 
             if (svq->next_guest_avail_elem) {
@@ -324,12 +320,14 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
                      * queue the current guest descriptor and ignore kicks
                      * until some elements are used.
                      */
-                    svq->next_guest_avail_elem = elem;
+                    svq->next_guest_avail_elem = g_steal_pointer(&elem);
                 }
 
                 /* VQ is full or broken, just return and ignore kicks */
                 return;
             }
+            /* elem belongs to SVQ or external caller now */
+            elem = NULL;
         }
 
         virtio_queue_set_notification(svq->vq, true);
-- 
2.31.1




* [PATCH v7 02/12] vhost: use SVQ element ndescs instead of opaque data for desc validation
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 01/12] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-05  3:48   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 03/12] vhost: Delete useless read memory barrier Eugenio Pérez
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

Since we're going to allow SVQ to add elements without the guest's
knowledge and without its own VirtQueueElement, it's easier to check
whether an element is a valid head by checking something other than the
VirtQueueElement itself.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index ffd2b2c972..e6eebd0e8d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -414,7 +414,7 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
         return NULL;
     }
 
-    if (unlikely(!svq->desc_state[used_elem.id].elem)) {
+    if (unlikely(!svq->desc_state[used_elem.id].ndescs)) {
         qemu_log_mask(LOG_GUEST_ERROR,
             "Device %s says index %u is used, but it was not available",
             svq->vdev->name, used_elem.id);
@@ -422,6 +422,7 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
     }
 
     num = svq->desc_state[used_elem.id].ndescs;
+    svq->desc_state[used_elem.id].ndescs = 0;
     last_used_chain = vhost_svq_last_desc_of_chain(svq, num, used_elem.id);
     svq->desc_next[last_used_chain] = svq->free_head;
     svq->free_head = used_elem.id;
-- 
2.31.1




* [PATCH v7 03/12] vhost: Delete useless read memory barrier
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 01/12] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 02/12] vhost: use SVQ element ndescs instead of opaque data for desc validation Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-05  3:48   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 04/12] vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush Eugenio Pérez
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

As discussed in previous series [1], this memory barrier is useless with
the atomic read of used idx at vhost_svq_more_used. Deleting it.

[1] https://lists.nongnu.org/archive/html/qemu-devel/2022-07/msg02616.html

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e6eebd0e8d..1b49bf54f2 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -509,9 +509,6 @@ size_t vhost_svq_poll(VhostShadowVirtqueue *svq)
         if (unlikely(g_get_monotonic_time() - start_us > 10e6)) {
             return 0;
         }
-
-        /* Make sure we read new used_idx */
-        smp_rmb();
     } while (true);
 }
 
-- 
2.31.1




* [PATCH v7 04/12] vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (2 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 03/12] vhost: Delete useless read memory barrier Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-05  3:50   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 05/12] vhost_net: Add NetClientInfo prepare callback Eugenio Pérez
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

Since QEMU will be able to inject new elements on CVQ to restore the
state, we no longer need to depend on a VirtQueueElement to know whether
a new element has been used by the device. Instead of checking that,
check for new elements using only the used idx on vhost_svq_flush.
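The reworked polling loop can be sketched as follows (a simplified model: the 10-second timeout is reduced to an iteration budget and the names are illustrative, not the real QEMU API):

```c
#include <assert.h>
#include <stdint.h>

/* Shadow copy of the used index vs. what the device has published. */
static uint16_t shadow_used_idx;
static uint16_t device_used_idx;

/* The check now keys off the used index alone, so it works even for
 * injected buffers that have no VirtQueueElement behind them. */
static int svq_more_used(void)
{
    return shadow_used_idx != device_used_idx;
}

/* Returns 1 when a buffer was used, 0 on timeout. */
static int svq_poll(unsigned budget)
{
    do {
        if (svq_more_used()) {
            break;
        }
        if (budget-- == 0) {
            return 0;    /* timed out: nothing was used */
        }
    } while (1);

    shadow_used_idx++;   /* consume the used entry, as vhost_svq_get_buf does */
    return 1;
}
```

The key difference from the previous version is that the element pointer returned by the get-buf step is no longer what signals completion; the used index is.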

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v6: Change less from the previous function
---
 hw/virtio/vhost-shadow-virtqueue.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 1b49bf54f2..f863b08627 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -499,17 +499,20 @@ static void vhost_svq_flush(VhostShadowVirtqueue *svq,
 size_t vhost_svq_poll(VhostShadowVirtqueue *svq)
 {
     int64_t start_us = g_get_monotonic_time();
+    uint32_t len;
+
     do {
-        uint32_t len;
-        VirtQueueElement *elem = vhost_svq_get_buf(svq, &len);
-        if (elem) {
-            return len;
+        if (vhost_svq_more_used(svq)) {
+            break;
         }
 
         if (unlikely(g_get_monotonic_time() - start_us > 10e6)) {
             return 0;
         }
     } while (true);
+
+    vhost_svq_get_buf(svq, &len);
+    return len;
 }
 
 /**
-- 
2.31.1




* [PATCH v7 05/12] vhost_net: Add NetClientInfo prepare callback
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (3 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 04/12] vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-09  6:53   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 06/12] vhost_net: Add NetClientInfo stop callback Eugenio Pérez
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

This is used by the backend to perform actions before the device is
started.

In particular, vdpa net uses it to map CVQ buffers to the device, so it
can send control commands using them.
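A cut-down sketch of the hook (struct fields trimmed to the essentials, the buffer mapping simulated with a flag; only the callback-dispatch shape mirrors the patch):

```c
#include <assert.h>
#include <stddef.h>

typedef struct NetClientState NetClientState;
typedef int (NetPrepare)(NetClientState *);

typedef struct NetClientInfo {
    NetPrepare *prepare;      /* optional: runs before the vhost device starts */
} NetClientInfo;

struct NetClientState {
    const NetClientInfo *info;
    int cvq_buffers_mapped;   /* stand-in for the mapped CVQ command buffers */
};

/* What the vdpa CVQ backend would register; the real callback calls
 * vhost_vdpa_cvq_map_buf() for the out and in buffers. */
static int vdpa_net_cvq_prepare(NetClientState *nc)
{
    nc->cvq_buffers_mapped = 1;
    return 0;
}

/* Mirrors the hunk added to vhost_net_start_one(): the callback is
 * optional, and a negative return aborts the device start. */
static int vhost_net_start_one(NetClientState *nc)
{
    if (nc->info->prepare) {
        int r = nc->info->prepare(nc);
        if (r < 0) {
            return r;
        }
    }
    return 0;                 /* rest of the device start elided */
}
```

Because the hook is optional, backends that do not need pre-start work are unaffected.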

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/net/net.h  | 2 ++
 hw/net/vhost_net.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 523136c7ac..3416bb3d46 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -44,6 +44,7 @@ typedef struct NICConf {
 
 typedef void (NetPoll)(NetClientState *, bool enable);
 typedef bool (NetCanReceive)(NetClientState *);
+typedef int (NetPrepare)(NetClientState *);
 typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
 typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
 typedef void (NetCleanup) (NetClientState *);
@@ -71,6 +72,7 @@ typedef struct NetClientInfo {
     NetReceive *receive_raw;
     NetReceiveIOV *receive_iov;
     NetCanReceive *can_receive;
+    NetPrepare *prepare;
     NetCleanup *cleanup;
     LinkStatusChanged *link_status_changed;
     QueryRxFilter *query_rx_filter;
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index ccac5b7a64..e1150d7532 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -244,6 +244,13 @@ static int vhost_net_start_one(struct vhost_net *net,
     struct vhost_vring_file file = { };
     int r;
 
+    if (net->nc->info->prepare) {
+        r = net->nc->info->prepare(net->nc);
+        if (r < 0) {
+            return r;
+        }
+    }
+
     r = vhost_dev_enable_notifiers(&net->dev, dev);
     if (r < 0) {
         goto fail_notifiers;
-- 
2.31.1




* [PATCH v7 06/12] vhost_net: Add NetClientInfo stop callback
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (4 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 05/12] vhost_net: Add NetClientInfo prepare callback Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 07/12] vdpa: add net_vhost_vdpa_cvq_info NetClientInfo Eugenio Pérez
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

Used by the backend to perform actions after the device is stopped.

In particular, vdpa net uses it to unmap the CVQ buffers from the device,
undoing the actions performed in prepare().

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/net/net.h  | 2 ++
 hw/net/vhost_net.c | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 3416bb3d46..7aa1ec0974 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -45,6 +45,7 @@ typedef struct NICConf {
 typedef void (NetPoll)(NetClientState *, bool enable);
 typedef bool (NetCanReceive)(NetClientState *);
 typedef int (NetPrepare)(NetClientState *);
+typedef void (NetStop)(NetClientState *);
 typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
 typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
 typedef void (NetCleanup) (NetClientState *);
@@ -73,6 +74,7 @@ typedef struct NetClientInfo {
     NetReceiveIOV *receive_iov;
     NetCanReceive *can_receive;
     NetPrepare *prepare;
+    NetStop *stop;
     NetCleanup *cleanup;
     LinkStatusChanged *link_status_changed;
     QueryRxFilter *query_rx_filter;
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index e1150d7532..10bca15446 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -320,6 +320,9 @@ static void vhost_net_stop_one(struct vhost_net *net,
         net->nc->info->poll(net->nc, true);
     }
     vhost_dev_stop(&net->dev, dev);
+    if (net->nc->info->stop) {
+        net->nc->info->stop(net->nc);
+    }
     vhost_dev_disable_notifiers(&net->dev, dev);
 }
 
-- 
2.31.1




* [PATCH v7 07/12] vdpa: add net_vhost_vdpa_cvq_info NetClientInfo
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (5 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 06/12] vhost_net: Add NetClientInfo stop callback Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 08/12] vdpa: Move command buffers map to start of net device Eugenio Pérez
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

The next patches will add a new info callback to restore the NIC status
through CVQ. Since only the CVQ vhost device needs it, create a new
NetClientInfo for it.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v5: Create a new NetClientInfo instead of reusing the dataplane one.
---
 net/vhost-vdpa.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index ac1810723c..55e8a39a56 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -334,6 +334,16 @@ static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
     return true;
 }
 
+static NetClientInfo net_vhost_vdpa_cvq_info = {
+    .type = NET_CLIENT_DRIVER_VHOST_VDPA,
+    .size = sizeof(VhostVDPAState),
+    .receive = vhost_vdpa_receive,
+    .cleanup = vhost_vdpa_cleanup,
+    .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
+    .has_ufo = vhost_vdpa_has_ufo,
+    .check_peer_type = vhost_vdpa_check_peer_type,
+};
+
 /**
  * Do not forward commands not supported by SVQ. Otherwise, the device could
  * accept it and qemu would not know how to update the device model.
@@ -475,7 +485,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
                                  name);
     } else {
-        nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
+        nc = qemu_new_net_control_client(&net_vhost_vdpa_cvq_info, peer,
                                          device, name);
     }
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
-- 
2.31.1




* [PATCH v7 08/12] vdpa: Move command buffers map to start of net device
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (6 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 07/12] vdpa: add net_vhost_vdpa_cvq_info NetClientInfo Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-09  7:03   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 09/12] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

As this series will reuse these buffers to restore the device state at the
end of a migration (or at device start), let's map them only once at device
start so their map and unmap are not duplicated.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 123 ++++++++++++++++++++++-------------------------
 1 file changed, 58 insertions(+), 65 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 55e8a39a56..2c6a26cca0 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -263,29 +263,20 @@ static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
     return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
 }
 
-/** Copy and map a guest buffer. */
-static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
-                                   const struct iovec *out_data,
-                                   size_t out_num, size_t data_len, void *buf,
-                                   size_t *written, bool write)
+/** Map CVQ buffer. */
+static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
+                                  bool write)
 {
     DMAMap map = {};
     int r;
 
-    if (unlikely(!data_len)) {
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid legnth of %s buffer\n",
-                      __func__, write ? "in" : "out");
-        return false;
-    }
-
-    *written = iov_to_buf(out_data, out_num, 0, buf, data_len);
     map.translated_addr = (hwaddr)(uintptr_t)buf;
-    map.size = vhost_vdpa_net_cvq_cmd_page_len() - 1;
+    map.size = size - 1;
     map.perm = write ? IOMMU_RW : IOMMU_RO,
     r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
     if (unlikely(r != IOVA_OK)) {
         error_report("Cannot map injected element");
-        return false;
+        return r;
     }
 
     r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
@@ -294,50 +285,58 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
         goto dma_map_err;
     }
 
-    return true;
+    return 0;
 
 dma_map_err:
     vhost_iova_tree_remove(v->iova_tree, &map);
-    return false;
+    return r;
 }
 
-/**
- * Copy the guest element into a dedicated buffer suitable to be sent to NIC
- *
- * @iov: [0] is the out buffer, [1] is the in one
- */
-static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
-                                        VirtQueueElement *elem,
-                                        struct iovec *iov)
+static int vhost_vdpa_net_cvq_prepare(NetClientState *nc)
 {
-    size_t in_copied;
-    bool ok;
+    VhostVDPAState *s;
+    int r;
 
-    iov[0].iov_base = s->cvq_cmd_out_buffer;
-    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
-                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
-                                &iov[0].iov_len, false);
-    if (unlikely(!ok)) {
-        return false;
+    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+
+    s = DO_UPCAST(VhostVDPAState, nc, nc);
+    if (!s->vhost_vdpa.shadow_vqs_enabled) {
+        return 0;
     }
 
-    iov[1].iov_base = s->cvq_cmd_in_buffer;
-    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
-                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
-                                &in_copied, true);
-    if (unlikely(!ok)) {
+    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
+                               vhost_vdpa_net_cvq_cmd_page_len(), false);
+    if (unlikely(r < 0)) {
+        return r;
+    }
+
+    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
+                               vhost_vdpa_net_cvq_cmd_page_len(), true);
+    if (unlikely(r < 0)) {
         vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
-        return false;
     }
 
-    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
-    return true;
+    return r;
+}
+
+static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
+{
+    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+
+    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+
+    if (s->vhost_vdpa.shadow_vqs_enabled) {
+        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
+        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
+    }
 }
 
 static NetClientInfo net_vhost_vdpa_cvq_info = {
     .type = NET_CLIENT_DRIVER_VHOST_VDPA,
     .size = sizeof(VhostVDPAState),
     .receive = vhost_vdpa_receive,
+    .prepare = vhost_vdpa_net_cvq_prepare,
+    .stop = vhost_vdpa_net_cvq_stop,
     .cleanup = vhost_vdpa_cleanup,
     .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
     .has_ufo = vhost_vdpa_has_ufo,
@@ -348,19 +347,17 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
  * Do not forward commands not supported by SVQ. Otherwise, the device could
  * accept it and qemu would not know how to update the device model.
  */
-static bool vhost_vdpa_net_cvq_validate_cmd(const struct iovec *out,
-                                            size_t out_num)
+static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
 {
     struct virtio_net_ctrl_hdr ctrl;
-    size_t n;
 
-    n = iov_to_buf(out, out_num, 0, &ctrl, sizeof(ctrl));
-    if (unlikely(n < sizeof(ctrl))) {
+    if (unlikely(len < sizeof(ctrl))) {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid legnth of out buffer %zu\n", __func__, n);
+                      "%s: invalid legnth of out buffer %zu\n", __func__, len);
         return false;
     }
 
+    memcpy(&ctrl, out_buf, sizeof(ctrl));
     switch (ctrl.class) {
     case VIRTIO_NET_CTRL_MAC:
         switch (ctrl.cmd) {
@@ -392,10 +389,14 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
     VhostVDPAState *s = opaque;
     size_t in_len, dev_written;
     virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
-    /* out and in buffers sent to the device */
-    struct iovec dev_buffers[2] = {
-        { .iov_base = s->cvq_cmd_out_buffer },
-        { .iov_base = s->cvq_cmd_in_buffer },
+    /* Out buffer sent to both the vdpa device and the device model */
+    struct iovec out = {
+        .iov_base = s->cvq_cmd_out_buffer,
+    };
+    /* In buffer sent to the device */
+    const struct iovec dev_in = {
+        .iov_base = s->cvq_cmd_in_buffer,
+        .iov_len = sizeof(virtio_net_ctrl_ack),
     };
     /* in buffer used for device model */
     const struct iovec in = {
@@ -405,17 +406,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
     int r = -EINVAL;
     bool ok;
 
-    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
-    if (unlikely(!ok)) {
-        goto out;
-    }
-
-    ok = vhost_vdpa_net_cvq_validate_cmd(&dev_buffers[0], 1);
+    out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
+                             s->cvq_cmd_out_buffer,
+                             vhost_vdpa_net_cvq_cmd_len());
+    ok = vhost_vdpa_net_cvq_validate_cmd(s->cvq_cmd_out_buffer, out.iov_len);
     if (unlikely(!ok)) {
         goto out;
     }
 
-    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
+    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
     if (unlikely(r != 0)) {
         if (unlikely(r == -ENOSPC)) {
             qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
@@ -435,13 +434,13 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
         goto out;
     }
 
-    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
+    memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
     if (status != VIRTIO_NET_OK) {
         goto out;
     }
 
     status = VIRTIO_NET_ERR;
-    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
+    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
     if (status != VIRTIO_NET_OK) {
         error_report("Bad CVQ processing in model");
     }
@@ -454,12 +453,6 @@ out:
     }
     vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
     g_free(elem);
-    if (dev_buffers[0].iov_base) {
-        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[0].iov_base);
-    }
-    if (dev_buffers[1].iov_base) {
-        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
-    }
     return r;
 }
 
-- 
2.31.1




* [PATCH v7 09/12] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (7 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 08/12] vdpa: Move command buffers map to start of net device Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-09  7:11   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 10/12] vhost_net: add NetClientState->load() callback Eugenio Pérez
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

So we can reuse it to inject state messages.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v7:
* Remove double free error

v6:
* Do not assume in buffer sent to the device is sizeof(virtio_net_ctrl_ack)

v5:
* Do not use an artificial !NULL VirtQueueElement
* Use only out size instead of iovec dev_buffers for these functions.
---
 net/vhost-vdpa.c | 59 +++++++++++++++++++++++++++++++-----------------
 1 file changed, 38 insertions(+), 21 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 2c6a26cca0..10843e6d97 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -331,6 +331,38 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
     }
 }
 
+static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
+                                      size_t in_len)
+{
+    /* Buffers for the device */
+    const struct iovec out = {
+        .iov_base = s->cvq_cmd_out_buffer,
+        .iov_len = out_len,
+    };
+    const struct iovec in = {
+        .iov_base = s->cvq_cmd_in_buffer,
+        .iov_len = sizeof(virtio_net_ctrl_ack),
+    };
+    VhostShadowVirtqueue *svq = g_ptr_array_index(s->vhost_vdpa.shadow_vqs, 0);
+    int r;
+
+    r = vhost_svq_add(svq, &out, 1, &in, 1, NULL);
+    if (unlikely(r != 0)) {
+        if (unlikely(r == -ENOSPC)) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
+                          __func__);
+        }
+        return r;
+    }
+
+    /*
+     * We can poll here since we've had BQL from the time we sent the
+     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
+     * when BQL is released
+     */
+    return vhost_svq_poll(svq);
+}
+
 static NetClientInfo net_vhost_vdpa_cvq_info = {
     .type = NET_CLIENT_DRIVER_VHOST_VDPA,
     .size = sizeof(VhostVDPAState),
@@ -387,23 +419,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
                                             void *opaque)
 {
     VhostVDPAState *s = opaque;
-    size_t in_len, dev_written;
+    size_t in_len;
     virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
     /* Out buffer sent to both the vdpa device and the device model */
     struct iovec out = {
         .iov_base = s->cvq_cmd_out_buffer,
     };
-    /* In buffer sent to the device */
-    const struct iovec dev_in = {
-        .iov_base = s->cvq_cmd_in_buffer,
-        .iov_len = sizeof(virtio_net_ctrl_ack),
-    };
     /* in buffer used for device model */
     const struct iovec in = {
         .iov_base = &status,
         .iov_len = sizeof(status),
     };
-    int r = -EINVAL;
+    ssize_t dev_written = -EINVAL;
     bool ok;
 
     out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
@@ -414,21 +441,11 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
         goto out;
     }
 
-    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
-    if (unlikely(r != 0)) {
-        if (unlikely(r == -ENOSPC)) {
-            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
-                          __func__);
-        }
+    dev_written = vhost_vdpa_net_cvq_add(s, out.iov_len, sizeof(status));
+    if (unlikely(dev_written < 0)) {
         goto out;
     }
 
-    /*
-     * We can poll here since we've had BQL from the time we sent the
-     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
-     * when BQL is released
-     */
-    dev_written = vhost_svq_poll(svq);
     if (unlikely(dev_written < sizeof(status))) {
         error_report("Insufficient written data (%zu)", dev_written);
         goto out;
@@ -436,7 +453,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
 
     memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
     if (status != VIRTIO_NET_OK) {
-        goto out;
+        return VIRTIO_NET_ERR;
     }
 
     status = VIRTIO_NET_ERR;
@@ -453,7 +470,7 @@ out:
     }
     vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
     g_free(elem);
-    return r;
+    return dev_written < 0 ? dev_written : 0;
 }
 
 static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
-- 
2.31.1




* [PATCH v7 10/12] vhost_net: add NetClientState->load() callback
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (8 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 09/12] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
  2022-08-04 18:28 ` [PATCH v7 12/12] vdpa: Delete CVQ migration blocker Eugenio Pérez
  11 siblings, 0 replies; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

It allows per-net-client operations right after the device's successful
start. In particular, it is meant to load the device status.

Vhost-vdpa net will use it to add the CVQ buffers to restore the device
status.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v5: Rename start / load, naming it more specifically.
---
 include/net/net.h  | 2 ++
 hw/net/vhost_net.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 7aa1ec0974..356e682ab6 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -45,6 +45,7 @@ typedef struct NICConf {
 typedef void (NetPoll)(NetClientState *, bool enable);
 typedef bool (NetCanReceive)(NetClientState *);
 typedef int (NetPrepare)(NetClientState *);
+typedef int (NetLoad)(NetClientState *);
 typedef void (NetStop)(NetClientState *);
 typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
 typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
@@ -74,6 +75,7 @@ typedef struct NetClientInfo {
     NetReceiveIOV *receive_iov;
     NetCanReceive *can_receive;
     NetPrepare *prepare;
+    NetLoad *load;
     NetStop *stop;
     NetCleanup *cleanup;
     LinkStatusChanged *link_status_changed;
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 10bca15446..6b83d5503f 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -281,6 +281,13 @@ static int vhost_net_start_one(struct vhost_net *net,
             }
         }
     }
+
+    if (net->nc->info->load) {
+        r = net->nc->info->load(net->nc);
+        if (r < 0) {
+            goto fail;
+        }
+    }
     return 0;
 fail:
     file.fd = -1;
-- 
2.31.1
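The callback added here is optional and its failure aborts the device start. A simplified sketch of that invocation pattern, with stand-in structs rather than the real QEMU definitions:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the optional-callback pattern this patch adds to
 * vhost_net_start_one(): info->load is invoked only when the backend
 * set it, and a negative return is propagated as a start failure.
 * These structs are simplified stand-ins, not the QEMU definitions.
 */
typedef struct NetClientState NetClientState;
typedef int (NetLoad)(NetClientState *);

typedef struct NetClientInfo {
    NetLoad *load;  /* optional: restore device state after start */
} NetClientInfo;

struct NetClientState {
    const NetClientInfo *info;
    int loads;      /* counts invocations, for illustration only */
};

static int start_one(NetClientState *nc)
{
    /* ... device start steps elided ... */
    if (nc->info->load) {
        int r = nc->info->load(nc);
        if (r < 0) {
            return r;   /* start fails if state restore fails */
        }
    }
    return 0;
}

static int count_load(NetClientState *nc)
{
    nc->loads++;
    return 0;
}
```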




* [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (9 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 10/12] vhost_net: add NetClientState->load() callback Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-09  7:16   ` Jason Wang
  2022-08-04 18:28 ` [PATCH v7 12/12] vdpa: Delete CVQ migration blocker Eugenio Pérez
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

This is needed so the destination vdpa device sees the same state the
guest set in the source.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v6:
* Map and unmap command buffers at the start and end of device usage.

v5:
* Rename s/start/load/
* Use independent NetClientInfo to only add load callback on cvq.
---
 net/vhost-vdpa.c | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 10843e6d97..4f1524c2e9 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -363,11 +363,54 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
     return vhost_svq_poll(svq);
 }
 
+static int vhost_vdpa_net_load(NetClientState *nc)
+{
+    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+    struct vhost_vdpa *v = &s->vhost_vdpa;
+    VirtIONet *n;
+    uint64_t features;
+
+    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+
+    if (!v->shadow_vqs_enabled) {
+        return 0;
+    }
+
+    n = VIRTIO_NET(v->dev->vdev);
+    features = v->dev->vdev->host_features;
+    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
+        const struct virtio_net_ctrl_hdr ctrl = {
+            .class = VIRTIO_NET_CTRL_MAC,
+            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
+        };
+        char *cursor = s->cvq_cmd_out_buffer;
+        ssize_t dev_written;
+        virtio_net_ctrl_ack state;
+
+        memcpy(cursor, &ctrl, sizeof(ctrl));
+        cursor += sizeof(ctrl);
+        memcpy(cursor, n->mac, sizeof(n->mac));
+        cursor += sizeof(n->mac);
+
+        dev_written = vhost_vdpa_net_cvq_add(s, sizeof(ctrl) + sizeof(n->mac),
+                                             sizeof(state));
+        if (unlikely(dev_written < 0)) {
+            return dev_written;
+        }
+
+        memcpy(&state, s->cvq_cmd_in_buffer, sizeof(state));
+        return state == VIRTIO_NET_OK ? 0 : -1;
+    }
+
+    return 0;
+}
+
 static NetClientInfo net_vhost_vdpa_cvq_info = {
     .type = NET_CLIENT_DRIVER_VHOST_VDPA,
     .size = sizeof(VhostVDPAState),
     .receive = vhost_vdpa_receive,
     .prepare = vhost_vdpa_net_cvq_prepare,
+    .load = vhost_vdpa_net_load,
     .stop = vhost_vdpa_net_cvq_stop,
     .cleanup = vhost_vdpa_cleanup,
     .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
-- 
2.31.1
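For reference, the buffer that vhost_vdpa_net_load() lays out is just a two-byte control header followed by the 6-byte MAC; the device answers with a one-byte ack. A sketch of that layout (the constants mirror the virtio-net spec values; the helper itself is illustrative, not QEMU code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Layout of the VIRTIO_NET_CTRL_MAC_ADDR_SET command built by
 * vhost_vdpa_net_load(): a virtio_net_ctrl_hdr (class + cmd, one byte
 * each per the virtio spec) followed by the 6-byte MAC.  Constants and
 * struct mirror the virtio-net spec; build_mac_set_cmd() is ours.
 */
#define VIRTIO_NET_CTRL_MAC          1
#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1

struct virtio_net_ctrl_hdr {
    uint8_t class;
    uint8_t cmd;
};

/* Returns the number of bytes placed in out (header + MAC). */
static size_t build_mac_set_cmd(uint8_t *out, const uint8_t mac[6])
{
    const struct virtio_net_ctrl_hdr ctrl = {
        .class = VIRTIO_NET_CTRL_MAC,
        .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
    };
    uint8_t *cursor = out;

    memcpy(cursor, &ctrl, sizeof(ctrl));
    cursor += sizeof(ctrl);
    memcpy(cursor, mac, 6);
    cursor += 6;
    return (size_t)(cursor - out);
}
```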




* [PATCH v7 12/12] vdpa: Delete CVQ migration blocker
  2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (10 preceding siblings ...)
  2022-08-04 18:28 ` [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
@ 2022-08-04 18:28 ` Eugenio Pérez
  2022-08-09  7:17   ` Jason Wang
  11 siblings, 1 reply; 29+ messages in thread
From: Eugenio Pérez @ 2022-08-04 18:28 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi, Liuxiangdong,
	Eli Cohen, Cornelia Huck, Zhu Lingshan

We can now restore the device state at the destination via CVQ. Remove
the migration blocker.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |  1 -
 hw/virtio/vhost-vdpa.c         | 14 --------------
 net/vhost-vdpa.c               |  2 --
 3 files changed, 17 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index d10a89303e..1111d85643 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -35,7 +35,6 @@ typedef struct vhost_vdpa {
     bool shadow_vqs_enabled;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
-    Error *migration_blocker;
     GPtrArray *shadow_vqs;
     const VhostShadowVirtqueueOps *shadow_vq_ops;
     void *shadow_vq_ops_opaque;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7e28d2f674..4b0cfc0f56 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1033,13 +1033,6 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
         return true;
     }
 
-    if (v->migration_blocker) {
-        int r = migrate_add_blocker(v->migration_blocker, &err);
-        if (unlikely(r < 0)) {
-            return false;
-        }
-    }
-
     for (i = 0; i < v->shadow_vqs->len; ++i) {
         VirtQueue *vq = virtio_get_queue(dev->vdev, dev->vq_index + i);
         VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
@@ -1082,10 +1075,6 @@ err:
         vhost_svq_stop(svq);
     }
 
-    if (v->migration_blocker) {
-        migrate_del_blocker(v->migration_blocker);
-    }
-
     return false;
 }
 
@@ -1105,9 +1094,6 @@ static bool vhost_vdpa_svqs_stop(struct vhost_dev *dev)
         }
     }
 
-    if (v->migration_blocker) {
-        migrate_del_blocker(v->migration_blocker);
-    }
     return true;
 }
 
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 4f1524c2e9..7c0d600aea 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -558,8 +558,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
 
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
-        error_setg(&s->vhost_vdpa.migration_blocker,
-                   "Migration disabled: vhost-vdpa uses CVQ.");
     }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
-- 
2.31.1




* Re: [PATCH v7 01/12] vhost: stop transfer elem ownership in vhost_handle_guest_kick
  2022-08-04 18:28 ` [PATCH v7 01/12] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
@ 2022-08-05  3:48   ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-05  3:48 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> It was easier to allow vhost_svq_add to handle the memory. Now that we
> will allow qemu to add elements to a SVQ without the guest's knowledge,
> it's better to handle it in the caller.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

> ---
>  hw/virtio/vhost-shadow-virtqueue.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index e4956728dd..ffd2b2c972 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -233,9 +233,6 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
>  /**
>   * Add an element to a SVQ.
>   *
> - * The caller must check that there is enough slots for the new element. It
> - * takes ownership of the element: In case of failure not ENOSPC, it is free.
> - *
>   * Return -EINVAL if element is invalid, -ENOSPC if dev queue is full
>   */
>  int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
> @@ -252,7 +249,6 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
>
>      ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
>      if (unlikely(!ok)) {
> -        g_free(elem);
>          return -EINVAL;
>      }
>
> @@ -293,7 +289,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
>          virtio_queue_set_notification(svq->vq, false);
>
>          while (true) {
> -            VirtQueueElement *elem;
> +            g_autofree VirtQueueElement *elem;
>              int r;
>
>              if (svq->next_guest_avail_elem) {
> @@ -324,12 +320,14 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
>                       * queue the current guest descriptor and ignore kicks
>                       * until some elements are used.
>                       */
> -                    svq->next_guest_avail_elem = elem;
> +                    svq->next_guest_avail_elem = g_steal_pointer(&elem);
>                  }
>
>                  /* VQ is full or broken, just return and ignore kicks */
>                  return;
>              }
> +            /* elem belongs to SVQ or external caller now */
> +            elem = NULL;
>          }
>
>          virtio_queue_set_notification(svq->vq, true);
> --
> 2.31.1
>
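The ownership handoff in this patch relies on g_steal_pointer() clearing the local variable so the g_autofree cleanup sees NULL and frees nothing. A plain-C sketch of that idiom (the helper name is ours; GLib's g_steal_pointer() is the real API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Plain-C equivalent of GLib's g_steal_pointer(): return the pointer
 * and NULL out the variable that held it, transferring ownership so a
 * scope-exit cleanup (g_autofree in the patch) has nothing to free.
 */
static void *steal_pointer(void **pp)
{
    void *p = *pp;

    *pp = NULL;
    return p;
}
```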




* Re: [PATCH v7 02/12] vhost: use SVQ element ndescs instead of opaque data for desc validation
  2022-08-04 18:28 ` [PATCH v7 02/12] vhost: use SVQ element ndescs instead of opaque data for desc validation Eugenio Pérez
@ 2022-08-05  3:48   ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-05  3:48 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Since we're going to allow SVQ to add elements without the guest's
> knowledge and without its own VirtQueueElement, it's easier to check if
> an element is a valid head checking a different thing than the
> VirtQueueElement.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---

Acked-by: Jason Wang <jasowang@redhat.com>

>  hw/virtio/vhost-shadow-virtqueue.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index ffd2b2c972..e6eebd0e8d 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -414,7 +414,7 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
>          return NULL;
>      }
>
> -    if (unlikely(!svq->desc_state[used_elem.id].elem)) {
> +    if (unlikely(!svq->desc_state[used_elem.id].ndescs)) {
>          qemu_log_mask(LOG_GUEST_ERROR,
>              "Device %s says index %u is used, but it was not available",
>              svq->vdev->name, used_elem.id);
> @@ -422,6 +422,7 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
>      }
>
>      num = svq->desc_state[used_elem.id].ndescs;
> +    svq->desc_state[used_elem.id].ndescs = 0;
>      last_used_chain = vhost_svq_last_desc_of_chain(svq, num, used_elem.id);
>      svq->desc_next[last_used_chain] = svq->free_head;
>      svq->free_head = used_elem.id;
> --
> 2.31.1
>




* Re: [PATCH v7 03/12] vhost: Delete useless read memory barrier
  2022-08-04 18:28 ` [PATCH v7 03/12] vhost: Delete useless read memory barrier Eugenio Pérez
@ 2022-08-05  3:48   ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-05  3:48 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> As discussed in previous series [1], this memory barrier is useless with
> the atomic read of used idx at vhost_svq_more_used. Deleting it.
>
> [1] https://lists.nongnu.org/archive/html/qemu-devel/2022-07/msg02616.html
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

> ---
>  hw/virtio/vhost-shadow-virtqueue.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index e6eebd0e8d..1b49bf54f2 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -509,9 +509,6 @@ size_t vhost_svq_poll(VhostShadowVirtqueue *svq)
>          if (unlikely(g_get_monotonic_time() - start_us > 10e6)) {
>              return 0;
>          }
> -
> -        /* Make sure we read new used_idx */
> -        smp_rmb();
>      } while (true);
>  }
>
> --
> 2.31.1
>




* Re: [PATCH v7 04/12] vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush
  2022-08-04 18:28 ` [PATCH v7 04/12] vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush Eugenio Pérez
@ 2022-08-05  3:50   ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-05  3:50 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Since QEMU will be able to inject new elements on CVQ to restore the
> state, we need not to depend on a VirtQueueElement to know if a new
> element has been used by the device or not. Instead of check that, check
> if there are new elements only using used idx on vhost_svq_flush.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---

Acked-by: Jason Wang <jasowang@redhat.com>

> v6: Change less from the previous function
> ---
>  hw/virtio/vhost-shadow-virtqueue.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 1b49bf54f2..f863b08627 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -499,17 +499,20 @@ static void vhost_svq_flush(VhostShadowVirtqueue *svq,
>  size_t vhost_svq_poll(VhostShadowVirtqueue *svq)
>  {
>      int64_t start_us = g_get_monotonic_time();
> +    uint32_t len;
> +
>      do {
> -        uint32_t len;
> -        VirtQueueElement *elem = vhost_svq_get_buf(svq, &len);
> -        if (elem) {
> -            return len;
> +        if (vhost_svq_more_used(svq)) {
> +            break;
>          }
>
>          if (unlikely(g_get_monotonic_time() - start_us > 10e6)) {
>              return 0;
>          }
>      } while (true);
> +
> +    vhost_svq_get_buf(svq, &len);
> +    return len;
>  }
>
>  /**
> --
> 2.31.1
>




* Re: [PATCH v7 05/12] vhost_net: Add NetClientInfo prepare callback
  2022-08-04 18:28 ` [PATCH v7 05/12] vhost_net: Add NetClientInfo prepare callback Eugenio Pérez
@ 2022-08-09  6:53   ` Jason Wang
  2022-08-09  7:34     ` Eugenio Perez Martin
  0 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2022-08-09  6:53 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> This is used by the backend to perform actions before the device is
> started.
>
> In particular, vdpa net use it to map CVQ buffers to the device, so it
> can send control commands using them.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  include/net/net.h  | 2 ++
>  hw/net/vhost_net.c | 7 +++++++
>  2 files changed, 9 insertions(+)
>
> diff --git a/include/net/net.h b/include/net/net.h
> index 523136c7ac..3416bb3d46 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -44,6 +44,7 @@ typedef struct NICConf {
>
>  typedef void (NetPoll)(NetClientState *, bool enable);
>  typedef bool (NetCanReceive)(NetClientState *);
> +typedef int (NetPrepare)(NetClientState *);
>  typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
>  typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
>  typedef void (NetCleanup) (NetClientState *);
> @@ -71,6 +72,7 @@ typedef struct NetClientInfo {
>      NetReceive *receive_raw;
>      NetReceiveIOV *receive_iov;
>      NetCanReceive *can_receive;
> +    NetPrepare *prepare;

So it looks to me the function is paired with a stop that is
introduced in the following patch.

Maybe we should use "start/stop" instead of "prepare/stop"?

Thanks

>      NetCleanup *cleanup;
>      LinkStatusChanged *link_status_changed;
>      QueryRxFilter *query_rx_filter;
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index ccac5b7a64..e1150d7532 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -244,6 +244,13 @@ static int vhost_net_start_one(struct vhost_net *net,
>      struct vhost_vring_file file = { };
>      int r;
>
> +    if (net->nc->info->prepare) {
> +        r = net->nc->info->prepare(net->nc);
> +        if (r < 0) {
> +            return r;
> +        }
> +    }
> +
>      r = vhost_dev_enable_notifiers(&net->dev, dev);
>      if (r < 0) {
>          goto fail_notifiers;
> --
> 2.31.1
>




* Re: [PATCH v7 08/12] vdpa: Move command buffers map to start of net device
  2022-08-04 18:28 ` [PATCH v7 08/12] vdpa: Move command buffers map to start of net device Eugenio Pérez
@ 2022-08-09  7:03   ` Jason Wang
  2022-08-09  7:33     ` Eugenio Perez Martin
  0 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2022-08-09  7:03 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> As this series will reuse them to restore the device state at the end of
> a migration (or a device start), let's allocate only once at the device
> start so we don't duplicate their map and unmap.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  net/vhost-vdpa.c | 123 ++++++++++++++++++++++-------------------------
>  1 file changed, 58 insertions(+), 65 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 55e8a39a56..2c6a26cca0 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -263,29 +263,20 @@ static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
>      return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
>  }
>
> -/** Copy and map a guest buffer. */
> -static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> -                                   const struct iovec *out_data,
> -                                   size_t out_num, size_t data_len, void *buf,
> -                                   size_t *written, bool write)
> +/** Map CVQ buffer. */
> +static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
> +                                  bool write)
>  {
>      DMAMap map = {};
>      int r;
>
> -    if (unlikely(!data_len)) {
> -        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid legnth of %s buffer\n",
> -                      __func__, write ? "in" : "out");
> -        return false;
> -    }
> -
> -    *written = iov_to_buf(out_data, out_num, 0, buf, data_len);
>      map.translated_addr = (hwaddr)(uintptr_t)buf;
> -    map.size = vhost_vdpa_net_cvq_cmd_page_len() - 1;
> +    map.size = size - 1;

Just noticed this, I think I've asked for the reason before but I
don't remember the answer.

But it looks like a hint of a defect of the current API design.

Thanks

>      map.perm = write ? IOMMU_RW : IOMMU_RO,
>      r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
>      if (unlikely(r != IOVA_OK)) {
>          error_report("Cannot map injected element");
> -        return false;
> +        return r;
>      }
>
>      r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
> @@ -294,50 +285,58 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
>          goto dma_map_err;
>      }
>
> -    return true;
> +    return 0;
>
>  dma_map_err:
>      vhost_iova_tree_remove(v->iova_tree, &map);
> -    return false;
> +    return r;
>  }
>
> -/**
> - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> - *
> - * @iov: [0] is the out buffer, [1] is the in one
> - */
> -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> -                                        VirtQueueElement *elem,
> -                                        struct iovec *iov)
> +static int vhost_vdpa_net_cvq_prepare(NetClientState *nc)
>  {
> -    size_t in_copied;
> -    bool ok;
> +    VhostVDPAState *s;
> +    int r;
>
> -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> -                                &iov[0].iov_len, false);
> -    if (unlikely(!ok)) {
> -        return false;
> +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> +
> +    s = DO_UPCAST(VhostVDPAState, nc, nc);
> +    if (!s->vhost_vdpa.shadow_vqs_enabled) {
> +        return 0;
>      }
>
> -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> -                                &in_copied, true);
> -    if (unlikely(!ok)) {
> +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
> +                               vhost_vdpa_net_cvq_cmd_page_len(), false);
> +    if (unlikely(r < 0)) {
> +        return r;
> +    }
> +
> +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
> +                               vhost_vdpa_net_cvq_cmd_page_len(), true);
> +    if (unlikely(r < 0)) {
>          vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> -        return false;
>      }
>
> -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> -    return true;
> +    return r;
> +}
> +
> +static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
> +{
> +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> +
> +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> +
> +    if (s->vhost_vdpa.shadow_vqs_enabled) {
> +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
> +    }
>  }
>
>  static NetClientInfo net_vhost_vdpa_cvq_info = {
>      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
>      .size = sizeof(VhostVDPAState),
>      .receive = vhost_vdpa_receive,
> +    .prepare = vhost_vdpa_net_cvq_prepare,
> +    .stop = vhost_vdpa_net_cvq_stop,
>      .cleanup = vhost_vdpa_cleanup,
>      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
>      .has_ufo = vhost_vdpa_has_ufo,
> @@ -348,19 +347,17 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
>   * Do not forward commands not supported by SVQ. Otherwise, the device could
>   * accept it and qemu would not know how to update the device model.
>   */
> -static bool vhost_vdpa_net_cvq_validate_cmd(const struct iovec *out,
> -                                            size_t out_num)
> +static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
>  {
>      struct virtio_net_ctrl_hdr ctrl;
> -    size_t n;
>
> -    n = iov_to_buf(out, out_num, 0, &ctrl, sizeof(ctrl));
> -    if (unlikely(n < sizeof(ctrl))) {
> +    if (unlikely(len < sizeof(ctrl))) {
>          qemu_log_mask(LOG_GUEST_ERROR,
> -                      "%s: invalid legnth of out buffer %zu\n", __func__, n);
> +                      "%s: invalid legnth of out buffer %zu\n", __func__, len);
>          return false;
>      }
>
> +    memcpy(&ctrl, out_buf, sizeof(ctrl));
>      switch (ctrl.class) {
>      case VIRTIO_NET_CTRL_MAC:
>          switch (ctrl.cmd) {
> @@ -392,10 +389,14 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>      VhostVDPAState *s = opaque;
>      size_t in_len, dev_written;
>      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> -    /* out and in buffers sent to the device */
> -    struct iovec dev_buffers[2] = {
> -        { .iov_base = s->cvq_cmd_out_buffer },
> -        { .iov_base = s->cvq_cmd_in_buffer },
> +    /* Out buffer sent to both the vdpa device and the device model */
> +    struct iovec out = {
> +        .iov_base = s->cvq_cmd_out_buffer,
> +    };
> +    /* In buffer sent to the device */
> +    const struct iovec dev_in = {
> +        .iov_base = s->cvq_cmd_in_buffer,
> +        .iov_len = sizeof(virtio_net_ctrl_ack),
>      };
>      /* in buffer used for device model */
>      const struct iovec in = {
> @@ -405,17 +406,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>      int r = -EINVAL;
>      bool ok;
>
> -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> -    if (unlikely(!ok)) {
> -        goto out;
> -    }
> -
> -    ok = vhost_vdpa_net_cvq_validate_cmd(&dev_buffers[0], 1);
> +    out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
> +                             s->cvq_cmd_out_buffer,
> +                             vhost_vdpa_net_cvq_cmd_len());
> +    ok = vhost_vdpa_net_cvq_validate_cmd(s->cvq_cmd_out_buffer, out.iov_len);
>      if (unlikely(!ok)) {
>          goto out;
>      }
>
> -    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
> +    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
>      if (unlikely(r != 0)) {
>          if (unlikely(r == -ENOSPC)) {
>              qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> @@ -435,13 +434,13 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>          goto out;
>      }
>
> -    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> +    memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
>      if (status != VIRTIO_NET_OK) {
>          goto out;
>      }
>
>      status = VIRTIO_NET_ERR;
> -    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> +    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
>      if (status != VIRTIO_NET_OK) {
>          error_report("Bad CVQ processing in model");
>      }
> @@ -454,12 +453,6 @@ out:
>      }
>      vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
>      g_free(elem);
> -    if (dev_buffers[0].iov_base) {
> -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[0].iov_base);
> -    }
> -    if (dev_buffers[1].iov_base) {
> -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
> -    }
>      return r;
>  }
>
> --
> 2.31.1
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 09/12] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail
  2022-08-04 18:28 ` [PATCH v7 09/12] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
@ 2022-08-09  7:11   ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-09  7:11 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> So we can reuse it to inject state messages.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

> --
> v7:
> * Remove double free error
>
> v6:
> * Do not assume in buffer sent to the device is sizeof(virtio_net_ctrl_ack)
>
> v5:
> * Do not use an artificial !NULL VirtQueueElement
> * Use only out size instead of iovec dev_buffers for these functions.
> ---
>  net/vhost-vdpa.c | 59 +++++++++++++++++++++++++++++++-----------------
>  1 file changed, 38 insertions(+), 21 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 2c6a26cca0..10843e6d97 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -331,6 +331,38 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
>      }
>  }
>
> +static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
> +                                      size_t in_len)
> +{
> +    /* Buffers for the device */
> +    const struct iovec out = {
> +        .iov_base = s->cvq_cmd_out_buffer,
> +        .iov_len = out_len,
> +    };
> +    const struct iovec in = {
> +        .iov_base = s->cvq_cmd_in_buffer,
> +        .iov_len = sizeof(virtio_net_ctrl_ack),
> +    };
> +    VhostShadowVirtqueue *svq = g_ptr_array_index(s->vhost_vdpa.shadow_vqs, 0);
> +    int r;
> +
> +    r = vhost_svq_add(svq, &out, 1, &in, 1, NULL);
> +    if (unlikely(r != 0)) {
> +        if (unlikely(r == -ENOSPC)) {
> +            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> +                          __func__);
> +        }
> +        return r;
> +    }
> +
> +    /*
> +     * We can poll here since we've had BQL from the time we sent the
> +     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
> +     * when BQL is released
> +     */
> +    return vhost_svq_poll(svq);
> +}
> +
>  static NetClientInfo net_vhost_vdpa_cvq_info = {
>      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
>      .size = sizeof(VhostVDPAState),
> @@ -387,23 +419,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>                                              void *opaque)
>  {
>      VhostVDPAState *s = opaque;
> -    size_t in_len, dev_written;
> +    size_t in_len;
>      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
>      /* Out buffer sent to both the vdpa device and the device model */
>      struct iovec out = {
>          .iov_base = s->cvq_cmd_out_buffer,
>      };
> -    /* In buffer sent to the device */
> -    const struct iovec dev_in = {
> -        .iov_base = s->cvq_cmd_in_buffer,
> -        .iov_len = sizeof(virtio_net_ctrl_ack),
> -    };
>      /* in buffer used for device model */
>      const struct iovec in = {
>          .iov_base = &status,
>          .iov_len = sizeof(status),
>      };
> -    int r = -EINVAL;
> +    ssize_t dev_written = -EINVAL;
>      bool ok;
>
>      out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
> @@ -414,21 +441,11 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>          goto out;
>      }
>
> -    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
> -    if (unlikely(r != 0)) {
> -        if (unlikely(r == -ENOSPC)) {
> -            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> -                          __func__);
> -        }
> +    dev_written = vhost_vdpa_net_cvq_add(s, out.iov_len, sizeof(status));
> +    if (unlikely(dev_written < 0)) {
>          goto out;
>      }
>
> -    /*
> -     * We can poll here since we've had BQL from the time we sent the
> -     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
> -     * when BQL is released
> -     */
> -    dev_written = vhost_svq_poll(svq);
>      if (unlikely(dev_written < sizeof(status))) {
>          error_report("Insufficient written data (%zu)", dev_written);
>          goto out;
> @@ -436,7 +453,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>
>      memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
>      if (status != VIRTIO_NET_OK) {
> -        goto out;
> +        return VIRTIO_NET_ERR;
>      }
>
>      status = VIRTIO_NET_ERR;
> @@ -453,7 +470,7 @@ out:
>      }
>      vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
>      g_free(elem);
> -    return r;
> +    return dev_written < 0 ? dev_written : 0;
>  }
>
>  static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
> --
> 2.31.1
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start
  2022-08-04 18:28 ` [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
@ 2022-08-09  7:16   ` Jason Wang
  2022-08-09  7:35     ` Eugenio Perez Martin
  0 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2022-08-09  7:16 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> This is needed so the destination vdpa device sees the same state as the
> guest set in the source.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
> v6:
> * Map and unmap command buffers at the start and end of device usage.
>
> v5:
> * Rename s/start/load/
> * Use independent NetClientInfo to only add load callback on cvq.
> ---
>  net/vhost-vdpa.c | 43 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 43 insertions(+)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 10843e6d97..4f1524c2e9 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -363,11 +363,54 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
>      return vhost_svq_poll(svq);
>  }
>
> +static int vhost_vdpa_net_load(NetClientState *nc)
> +{
> +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> +    struct vhost_vdpa *v = &s->vhost_vdpa;
> +    VirtIONet *n;
> +    uint64_t features;
> +
> +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> +
> +    if (!v->shadow_vqs_enabled) {
> +        return 0;
> +    }
> +
> +    n = VIRTIO_NET(v->dev->vdev);
> +    features = v->dev->vdev->host_features;
> +    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
> +        const struct virtio_net_ctrl_hdr ctrl = {
> +            .class = VIRTIO_NET_CTRL_MAC,
> +            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
> +        };

Can we build this directly from the cmd_out_buffer?

> +        char *cursor = s->cvq_cmd_out_buffer;
> +        ssize_t dev_written;
> +        virtio_net_ctrl_ack state;

I think we can read the status directly from the cmd_in_buffer.

Thanks

> +
> +        memcpy(cursor, &ctrl, sizeof(ctrl));
> +        cursor += sizeof(ctrl);
> +        memcpy(cursor, n->mac, sizeof(n->mac));
> +        cursor += sizeof(n->mac);
> +
> +        dev_written = vhost_vdpa_net_cvq_add(s, sizeof(ctrl) + sizeof(n->mac),
> +                                             sizeof(state));
> +        if (unlikely(dev_written < 0)) {
> +            return dev_written;
> +        }
> +
> +        memcpy(&state, s->cvq_cmd_in_buffer, sizeof(state));
> +        return state == VIRTIO_NET_OK ? 0 : -1;
> +    }
> +
> +    return 0;
> +}
> +
>  static NetClientInfo net_vhost_vdpa_cvq_info = {
>      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
>      .size = sizeof(VhostVDPAState),
>      .receive = vhost_vdpa_receive,
>      .prepare = vhost_vdpa_net_cvq_prepare,
> +    .load = vhost_vdpa_net_load,
>      .stop = vhost_vdpa_net_cvq_stop,
>      .cleanup = vhost_vdpa_cleanup,
>      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> --
> 2.31.1
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 12/12] vdpa: Delete CVQ migration blocker
  2022-08-04 18:28 ` [PATCH v7 12/12] vdpa: Delete CVQ migration blocker Eugenio Pérez
@ 2022-08-09  7:17   ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-09  7:17 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> We can restore the device state in the destination via CVQ now. Remove
> the migration blocker.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

> ---
>  include/hw/virtio/vhost-vdpa.h |  1 -
>  hw/virtio/vhost-vdpa.c         | 14 --------------
>  net/vhost-vdpa.c               |  2 --
>  3 files changed, 17 deletions(-)
>
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index d10a89303e..1111d85643 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -35,7 +35,6 @@ typedef struct vhost_vdpa {
>      bool shadow_vqs_enabled;
>      /* IOVA mapping used by the Shadow Virtqueue */
>      VhostIOVATree *iova_tree;
> -    Error *migration_blocker;
>      GPtrArray *shadow_vqs;
>      const VhostShadowVirtqueueOps *shadow_vq_ops;
>      void *shadow_vq_ops_opaque;
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 7e28d2f674..4b0cfc0f56 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -1033,13 +1033,6 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
>          return true;
>      }
>
> -    if (v->migration_blocker) {
> -        int r = migrate_add_blocker(v->migration_blocker, &err);
> -        if (unlikely(r < 0)) {
> -            return false;
> -        }
> -    }
> -
>      for (i = 0; i < v->shadow_vqs->len; ++i) {
>          VirtQueue *vq = virtio_get_queue(dev->vdev, dev->vq_index + i);
>          VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
> @@ -1082,10 +1075,6 @@ err:
>          vhost_svq_stop(svq);
>      }
>
> -    if (v->migration_blocker) {
> -        migrate_del_blocker(v->migration_blocker);
> -    }
> -
>      return false;
>  }
>
> @@ -1105,9 +1094,6 @@ static bool vhost_vdpa_svqs_stop(struct vhost_dev *dev)
>          }
>      }
>
> -    if (v->migration_blocker) {
> -        migrate_del_blocker(v->migration_blocker);
> -    }
>      return true;
>  }
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 4f1524c2e9..7c0d600aea 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -558,8 +558,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>
>          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
>          s->vhost_vdpa.shadow_vq_ops_opaque = s;
> -        error_setg(&s->vhost_vdpa.migration_blocker,
> -                   "Migration disabled: vhost-vdpa uses CVQ.");
>      }
>      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
>      if (ret) {
> --
> 2.31.1
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 08/12] vdpa: Move command buffers map to start of net device
  2022-08-09  7:03   ` Jason Wang
@ 2022-08-09  7:33     ` Eugenio Perez Martin
  2022-08-09  7:48       ` Jason Wang
  0 siblings, 1 reply; 29+ messages in thread
From: Eugenio Perez Martin @ 2022-08-09  7:33 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Tue, Aug 9, 2022 at 9:04 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > As this series will reuse them to restore the device state at the end of
> > a migration (or a device start), let's allocate only once at the device
> > start so we don't duplicate their map and unmap.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  net/vhost-vdpa.c | 123 ++++++++++++++++++++++-------------------------
> >  1 file changed, 58 insertions(+), 65 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 55e8a39a56..2c6a26cca0 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -263,29 +263,20 @@ static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
> >      return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
> >  }
> >
> > -/** Copy and map a guest buffer. */
> > -static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> > -                                   const struct iovec *out_data,
> > -                                   size_t out_num, size_t data_len, void *buf,
> > -                                   size_t *written, bool write)
> > +/** Map CVQ buffer. */
> > +static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
> > +                                  bool write)
> >  {
> >      DMAMap map = {};
> >      int r;
> >
> > -    if (unlikely(!data_len)) {
> > -        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid legnth of %s buffer\n",
> > -                      __func__, write ? "in" : "out");
> > -        return false;
> > -    }
> > -
> > -    *written = iov_to_buf(out_data, out_num, 0, buf, data_len);
> >      map.translated_addr = (hwaddr)(uintptr_t)buf;
> > -    map.size = vhost_vdpa_net_cvq_cmd_page_len() - 1;
> > +    map.size = size - 1;
>
> Just noticed this, I think I've asked for the reason before but I
> don't remember the answer.
>
> But it looks like a hint of a defect of the current API design.
>

I can look for it in the mailing list, but long story short:
vDPA DMA API is *not* inclusive: To map the first page, you map (.iova
= 0, .size = 4096).
IOVA tree API has been inclusive forever: To map the first page, you
map (.iova = 0, .size = 4095). If we map with .size = 4096, .iova =
4096 is considered mapped too.

To adapt one to the other would have been an API change even before
the introduction of vhost-iova-tree.

Thanks!


> Thanks
>
> >      map.perm = write ? IOMMU_RW : IOMMU_RO,
> >      r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
> >      if (unlikely(r != IOVA_OK)) {
> >          error_report("Cannot map injected element");
> > -        return false;
> > +        return r;
> >      }
> >
> >      r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
> > @@ -294,50 +285,58 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> >          goto dma_map_err;
> >      }
> >
> > -    return true;
> > +    return 0;
> >
> >  dma_map_err:
> >      vhost_iova_tree_remove(v->iova_tree, &map);
> > -    return false;
> > +    return r;
> >  }
> >
> > -/**
> > - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> > - *
> > - * @iov: [0] is the out buffer, [1] is the in one
> > - */
> > -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> > -                                        VirtQueueElement *elem,
> > -                                        struct iovec *iov)
> > +static int vhost_vdpa_net_cvq_prepare(NetClientState *nc)
> >  {
> > -    size_t in_copied;
> > -    bool ok;
> > +    VhostVDPAState *s;
> > +    int r;
> >
> > -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> > -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> > -                                &iov[0].iov_len, false);
> > -    if (unlikely(!ok)) {
> > -        return false;
> > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > +
> > +    s = DO_UPCAST(VhostVDPAState, nc, nc);
> > +    if (!s->vhost_vdpa.shadow_vqs_enabled) {
> > +        return 0;
> >      }
> >
> > -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> > -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> > -                                &in_copied, true);
> > -    if (unlikely(!ok)) {
> > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
> > +                               vhost_vdpa_net_cvq_cmd_page_len(), false);
> > +    if (unlikely(r < 0)) {
> > +        return r;
> > +    }
> > +
> > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
> > +                               vhost_vdpa_net_cvq_cmd_page_len(), true);
> > +    if (unlikely(r < 0)) {
> >          vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > -        return false;
> >      }
> >
> > -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> > -    return true;
> > +    return r;
> > +}
> > +
> > +static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
> > +{
> > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > +
> > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > +
> > +    if (s->vhost_vdpa.shadow_vqs_enabled) {
> > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
> > +    }
> >  }
> >
> >  static NetClientInfo net_vhost_vdpa_cvq_info = {
> >      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
> >      .size = sizeof(VhostVDPAState),
> >      .receive = vhost_vdpa_receive,
> > +    .prepare = vhost_vdpa_net_cvq_prepare,
> > +    .stop = vhost_vdpa_net_cvq_stop,
> >      .cleanup = vhost_vdpa_cleanup,
> >      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> >      .has_ufo = vhost_vdpa_has_ufo,
> > @@ -348,19 +347,17 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
> >   * Do not forward commands not supported by SVQ. Otherwise, the device could
> >   * accept it and qemu would not know how to update the device model.
> >   */
> > -static bool vhost_vdpa_net_cvq_validate_cmd(const struct iovec *out,
> > -                                            size_t out_num)
> > +static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
> >  {
> >      struct virtio_net_ctrl_hdr ctrl;
> > -    size_t n;
> >
> > -    n = iov_to_buf(out, out_num, 0, &ctrl, sizeof(ctrl));
> > -    if (unlikely(n < sizeof(ctrl))) {
> > +    if (unlikely(len < sizeof(ctrl))) {
> >          qemu_log_mask(LOG_GUEST_ERROR,
> > -                      "%s: invalid legnth of out buffer %zu\n", __func__, n);
> > > +                      "%s: invalid length of out buffer %zu\n", __func__, len);
> >          return false;
> >      }
> >
> > +    memcpy(&ctrl, out_buf, sizeof(ctrl));
> >      switch (ctrl.class) {
> >      case VIRTIO_NET_CTRL_MAC:
> >          switch (ctrl.cmd) {
> > @@ -392,10 +389,14 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> >      VhostVDPAState *s = opaque;
> >      size_t in_len, dev_written;
> >      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> > -    /* out and in buffers sent to the device */
> > -    struct iovec dev_buffers[2] = {
> > -        { .iov_base = s->cvq_cmd_out_buffer },
> > -        { .iov_base = s->cvq_cmd_in_buffer },
> > +    /* Out buffer sent to both the vdpa device and the device model */
> > +    struct iovec out = {
> > +        .iov_base = s->cvq_cmd_out_buffer,
> > +    };
> > +    /* In buffer sent to the device */
> > +    const struct iovec dev_in = {
> > +        .iov_base = s->cvq_cmd_in_buffer,
> > +        .iov_len = sizeof(virtio_net_ctrl_ack),
> >      };
> >      /* in buffer used for device model */
> >      const struct iovec in = {
> > @@ -405,17 +406,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> >      int r = -EINVAL;
> >      bool ok;
> >
> > -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> > -    if (unlikely(!ok)) {
> > -        goto out;
> > -    }
> > -
> > -    ok = vhost_vdpa_net_cvq_validate_cmd(&dev_buffers[0], 1);
> > +    out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
> > +                             s->cvq_cmd_out_buffer,
> > +                             vhost_vdpa_net_cvq_cmd_len());
> > +    ok = vhost_vdpa_net_cvq_validate_cmd(s->cvq_cmd_out_buffer, out.iov_len);
> >      if (unlikely(!ok)) {
> >          goto out;
> >      }
> >
> > -    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
> > +    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
> >      if (unlikely(r != 0)) {
> >          if (unlikely(r == -ENOSPC)) {
> >              qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> > @@ -435,13 +434,13 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> >          goto out;
> >      }
> >
> > -    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> > +    memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
> >      if (status != VIRTIO_NET_OK) {
> >          goto out;
> >      }
> >
> >      status = VIRTIO_NET_ERR;
> > -    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> > +    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
> >      if (status != VIRTIO_NET_OK) {
> >          error_report("Bad CVQ processing in model");
> >      }
> > @@ -454,12 +453,6 @@ out:
> >      }
> >      vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
> >      g_free(elem);
> > -    if (dev_buffers[0].iov_base) {
> > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[0].iov_base);
> > -    }
> > -    if (dev_buffers[1].iov_base) {
> > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
> > -    }
> >      return r;
> >  }
> >
> > --
> > 2.31.1
> >
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 05/12] vhost_net: Add NetClientInfo prepare callback
  2022-08-09  6:53   ` Jason Wang
@ 2022-08-09  7:34     ` Eugenio Perez Martin
  0 siblings, 0 replies; 29+ messages in thread
From: Eugenio Perez Martin @ 2022-08-09  7:34 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Tue, Aug 9, 2022 at 8:54 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > This is used by the backend to perform actions before the device is
> > started.
> >
> > In particular, vdpa net uses it to map CVQ buffers to the device, so it
> > can send control commands using them.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  include/net/net.h  | 2 ++
> >  hw/net/vhost_net.c | 7 +++++++
> >  2 files changed, 9 insertions(+)
> >
> > diff --git a/include/net/net.h b/include/net/net.h
> > index 523136c7ac..3416bb3d46 100644
> > --- a/include/net/net.h
> > +++ b/include/net/net.h
> > @@ -44,6 +44,7 @@ typedef struct NICConf {
> >
> >  typedef void (NetPoll)(NetClientState *, bool enable);
> >  typedef bool (NetCanReceive)(NetClientState *);
> > +typedef int (NetPrepare)(NetClientState *);
> >  typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
> >  typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
> >  typedef void (NetCleanup) (NetClientState *);
> > @@ -71,6 +72,7 @@ typedef struct NetClientInfo {
> >      NetReceive *receive_raw;
> >      NetReceiveIOV *receive_iov;
> >      NetCanReceive *can_receive;
> > +    NetPrepare *prepare;
>
> So it looks to me the function is paired with a stop that is
> introduced in the following patch.
>
> Maybe we should use "start/stop" instead of "prepare/stop"?
>

Sure, I can prepare the next series with it.

Thanks!

> Thanks
>
> >      NetCleanup *cleanup;
> >      LinkStatusChanged *link_status_changed;
> >      QueryRxFilter *query_rx_filter;
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index ccac5b7a64..e1150d7532 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -244,6 +244,13 @@ static int vhost_net_start_one(struct vhost_net *net,
> >      struct vhost_vring_file file = { };
> >      int r;
> >
> > +    if (net->nc->info->prepare) {
> > +        r = net->nc->info->prepare(net->nc);
> > +        if (r < 0) {
> > +            return r;
> > +        }
> > +    }
> > +
> >      r = vhost_dev_enable_notifiers(&net->dev, dev);
> >      if (r < 0) {
> >          goto fail_notifiers;
> > --
> > 2.31.1
> >
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start
  2022-08-09  7:16   ` Jason Wang
@ 2022-08-09  7:35     ` Eugenio Perez Martin
  2022-08-09  7:49       ` Jason Wang
  0 siblings, 1 reply; 29+ messages in thread
From: Eugenio Perez Martin @ 2022-08-09  7:35 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Tue, Aug 9, 2022 at 9:16 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > This is needed so the destination vdpa device sees the same state as the
> > guest set in the source.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> > v6:
> > * Map and unmap command buffers at the start and end of device usage.
> >
> > v5:
> > * Rename s/start/load/
> > * Use independent NetClientInfo to only add load callback on cvq.
> > ---
> >  net/vhost-vdpa.c | 43 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 43 insertions(+)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 10843e6d97..4f1524c2e9 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -363,11 +363,54 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
> >      return vhost_svq_poll(svq);
> >  }
> >
> > +static int vhost_vdpa_net_load(NetClientState *nc)
> > +{
> > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > +    struct vhost_vdpa *v = &s->vhost_vdpa;
> > +    VirtIONet *n;
> > +    uint64_t features;
> > +
> > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > +
> > +    if (!v->shadow_vqs_enabled) {
> > +        return 0;
> > +    }
> > +
> > +    n = VIRTIO_NET(v->dev->vdev);
> > +    features = v->dev->vdev->host_features;
> > +    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
> > +        const struct virtio_net_ctrl_hdr ctrl = {
> > +            .class = VIRTIO_NET_CTRL_MAC,
> > +            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
> > +        };
>
> Can we build this directly from the cmd_out_buffer?
>
> > +        char *cursor = s->cvq_cmd_out_buffer;
> > +        ssize_t dev_written;
> > +        virtio_net_ctrl_ack state;
>
> I think we can read the status directly from the cmd_in_buffer.
>

Directly casting it to virtio_net_ctrl_ack? Sure.

Thanks!

> Thanks
>
> > +
> > +        memcpy(cursor, &ctrl, sizeof(ctrl));
> > +        cursor += sizeof(ctrl);
> > +        memcpy(cursor, n->mac, sizeof(n->mac));
> > +        cursor += sizeof(n->mac);
> > +
> > +        dev_written = vhost_vdpa_net_cvq_add(s, sizeof(ctrl) + sizeof(n->mac),
> > +                                             sizeof(state));
> > +        if (unlikely(dev_written < 0)) {
> > +            return dev_written;
> > +        }
> > +
> > +        memcpy(&state, s->cvq_cmd_in_buffer, sizeof(state));
> > +        return state == VIRTIO_NET_OK ? 0 : -1;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> >  static NetClientInfo net_vhost_vdpa_cvq_info = {
> >      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
> >      .size = sizeof(VhostVDPAState),
> >      .receive = vhost_vdpa_receive,
> >      .prepare = vhost_vdpa_net_cvq_prepare,
> > +    .load = vhost_vdpa_net_load,
> >      .stop = vhost_vdpa_net_cvq_stop,
> >      .cleanup = vhost_vdpa_cleanup,
> >      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> > --
> > 2.31.1
> >
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 08/12] vdpa: Move command buffers map to start of net device
  2022-08-09  7:33     ` Eugenio Perez Martin
@ 2022-08-09  7:48       ` Jason Wang
  2022-08-09  8:03         ` Eugenio Perez Martin
  0 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2022-08-09  7:48 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Tue, Aug 9, 2022 at 3:34 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Tue, Aug 9, 2022 at 9:04 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > As this series will reuse them to restore the device state at the end of
> > > a migration (or a device start), let's allocate only once at the device
> > > start so we don't duplicate their map and unmap.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >  net/vhost-vdpa.c | 123 ++++++++++++++++++++++-------------------------
> > >  1 file changed, 58 insertions(+), 65 deletions(-)
> > >
> > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > index 55e8a39a56..2c6a26cca0 100644
> > > --- a/net/vhost-vdpa.c
> > > +++ b/net/vhost-vdpa.c
> > > @@ -263,29 +263,20 @@ static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
> > >      return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
> > >  }
> > >
> > > -/** Copy and map a guest buffer. */
> > > -static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> > > -                                   const struct iovec *out_data,
> > > -                                   size_t out_num, size_t data_len, void *buf,
> > > -                                   size_t *written, bool write)
> > > +/** Map CVQ buffer. */
> > > +static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
> > > +                                  bool write)
> > >  {
> > >      DMAMap map = {};
> > >      int r;
> > >
> > > -    if (unlikely(!data_len)) {
> > > -        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid legnth of %s buffer\n",
> > > -                      __func__, write ? "in" : "out");
> > > -        return false;
> > > -    }
> > > -
> > > -    *written = iov_to_buf(out_data, out_num, 0, buf, data_len);
> > >      map.translated_addr = (hwaddr)(uintptr_t)buf;
> > > -    map.size = vhost_vdpa_net_cvq_cmd_page_len() - 1;
> > > +    map.size = size - 1;
> >
> > Just noticed this, I think I've asked for the reason before but I
> > don't remember the answer.
> >
> > But it looks like a hint of a defect of the current API design.
> >
>
> I can look for it in the mail list, but long story short:
> vDPA DMA API is *not* inclusive: To map the first page, you map (.iova
> = 0, .size = 4096).
> IOVA tree API has been inclusive forever: To map the first page, you
> map (.iova = 0, .size = 4095). If we map with .size = 4096, .iova =
> 4096 is considered mapped too.

This looks like a bug.

{.iova=0, size=0} should be illegal but if I understand you correctly,
it means [0, 1)?

Thanks

>
> To adapt one to the other would have been an API change even before
> the introduction of vhost-iova-tree.
>
> Thanks!
>
>
> > Thanks
> >
> > >      map.perm = write ? IOMMU_RW : IOMMU_RO,
> > >      r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
> > >      if (unlikely(r != IOVA_OK)) {
> > >          error_report("Cannot map injected element");
> > > -        return false;
> > > +        return r;
> > >      }
> > >
> > >      r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
> > > @@ -294,50 +285,58 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> > >          goto dma_map_err;
> > >      }
> > >
> > > -    return true;
> > > +    return 0;
> > >
> > >  dma_map_err:
> > >      vhost_iova_tree_remove(v->iova_tree, &map);
> > > -    return false;
> > > +    return r;
> > >  }
> > >
> > > -/**
> > > - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> > > - *
> > > - * @iov: [0] is the out buffer, [1] is the in one
> > > - */
> > > -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> > > -                                        VirtQueueElement *elem,
> > > -                                        struct iovec *iov)
> > > +static int vhost_vdpa_net_cvq_prepare(NetClientState *nc)
> > >  {
> > > -    size_t in_copied;
> > > -    bool ok;
> > > +    VhostVDPAState *s;
> > > +    int r;
> > >
> > > -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> > > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> > > -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> > > -                                &iov[0].iov_len, false);
> > > -    if (unlikely(!ok)) {
> > > -        return false;
> > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > +
> > > +    s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > +    if (!s->vhost_vdpa.shadow_vqs_enabled) {
> > > +        return 0;
> > >      }
> > >
> > > -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> > > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> > > -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> > > -                                &in_copied, true);
> > > -    if (unlikely(!ok)) {
> > > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
> > > +                               vhost_vdpa_net_cvq_cmd_page_len(), false);
> > > +    if (unlikely(r < 0)) {
> > > +        return r;
> > > +    }
> > > +
> > > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
> > > +                               vhost_vdpa_net_cvq_cmd_page_len(), true);
> > > +    if (unlikely(r < 0)) {
> > >          vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > > -        return false;
> > >      }
> > >
> > > -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> > > -    return true;
> > > +    return r;
> > > +}
> > > +
> > > +static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
> > > +{
> > > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > +
> > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > +
> > > +    if (s->vhost_vdpa.shadow_vqs_enabled) {
> > > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
> > > +    }
> > >  }
> > >
> > >  static NetClientInfo net_vhost_vdpa_cvq_info = {
> > >      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
> > >      .size = sizeof(VhostVDPAState),
> > >      .receive = vhost_vdpa_receive,
> > > +    .prepare = vhost_vdpa_net_cvq_prepare,
> > > +    .stop = vhost_vdpa_net_cvq_stop,
> > >      .cleanup = vhost_vdpa_cleanup,
> > >      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> > >      .has_ufo = vhost_vdpa_has_ufo,
> > > @@ -348,19 +347,17 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
> > >   * Do not forward commands not supported by SVQ. Otherwise, the device could
> > >   * accept it and qemu would not know how to update the device model.
> > >   */
> > > -static bool vhost_vdpa_net_cvq_validate_cmd(const struct iovec *out,
> > > -                                            size_t out_num)
> > > +static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
> > >  {
> > >      struct virtio_net_ctrl_hdr ctrl;
> > > -    size_t n;
> > >
> > > -    n = iov_to_buf(out, out_num, 0, &ctrl, sizeof(ctrl));
> > > -    if (unlikely(n < sizeof(ctrl))) {
> > > +    if (unlikely(len < sizeof(ctrl))) {
> > >          qemu_log_mask(LOG_GUEST_ERROR,
> > > -                      "%s: invalid legnth of out buffer %zu\n", __func__, n);
> > > +                      "%s: invalid legnth of out buffer %zu\n", __func__, len);
> > >          return false;
> > >      }
> > >
> > > +    memcpy(&ctrl, out_buf, sizeof(ctrl));
> > >      switch (ctrl.class) {
> > >      case VIRTIO_NET_CTRL_MAC:
> > >          switch (ctrl.cmd) {
> > > @@ -392,10 +389,14 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > >      VhostVDPAState *s = opaque;
> > >      size_t in_len, dev_written;
> > >      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> > > -    /* out and in buffers sent to the device */
> > > -    struct iovec dev_buffers[2] = {
> > > -        { .iov_base = s->cvq_cmd_out_buffer },
> > > -        { .iov_base = s->cvq_cmd_in_buffer },
> > > +    /* Out buffer sent to both the vdpa device and the device model */
> > > +    struct iovec out = {
> > > +        .iov_base = s->cvq_cmd_out_buffer,
> > > +    };
> > > +    /* In buffer sent to the device */
> > > +    const struct iovec dev_in = {
> > > +        .iov_base = s->cvq_cmd_in_buffer,
> > > +        .iov_len = sizeof(virtio_net_ctrl_ack),
> > >      };
> > >      /* in buffer used for device model */
> > >      const struct iovec in = {
> > > @@ -405,17 +406,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > >      int r = -EINVAL;
> > >      bool ok;
> > >
> > > -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> > > -    if (unlikely(!ok)) {
> > > -        goto out;
> > > -    }
> > > -
> > > -    ok = vhost_vdpa_net_cvq_validate_cmd(&dev_buffers[0], 1);
> > > +    out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
> > > +                             s->cvq_cmd_out_buffer,
> > > +                             vhost_vdpa_net_cvq_cmd_len());
> > > +    ok = vhost_vdpa_net_cvq_validate_cmd(s->cvq_cmd_out_buffer, out.iov_len);
> > >      if (unlikely(!ok)) {
> > >          goto out;
> > >      }
> > >
> > > -    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
> > > +    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
> > >      if (unlikely(r != 0)) {
> > >          if (unlikely(r == -ENOSPC)) {
> > >              qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> > > @@ -435,13 +434,13 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > >          goto out;
> > >      }
> > >
> > > -    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> > > +    memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
> > >      if (status != VIRTIO_NET_OK) {
> > >          goto out;
> > >      }
> > >
> > >      status = VIRTIO_NET_ERR;
> > > -    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> > > +    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
> > >      if (status != VIRTIO_NET_OK) {
> > >          error_report("Bad CVQ processing in model");
> > >      }
> > > @@ -454,12 +453,6 @@ out:
> > >      }
> > >      vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
> > >      g_free(elem);
> > > -    if (dev_buffers[0].iov_base) {
> > > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[0].iov_base);
> > > -    }
> > > -    if (dev_buffers[1].iov_base) {
> > > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
> > > -    }
> > >      return r;
> > >  }
> > >
> > > --
> > > 2.31.1
> > >
> >
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start
  2022-08-09  7:35     ` Eugenio Perez Martin
@ 2022-08-09  7:49       ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-09  7:49 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Tue, Aug 9, 2022 at 3:36 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Tue, Aug 9, 2022 at 9:16 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > This is needed so the destination vdpa device sees the same state as the
> > > guest set in the source.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > > v6:
> > > * Map and unmap command buffers at the start and end of device usage.
> > >
> > > v5:
> > > * Rename s/start/load/
> > > * Use independent NetClientInfo to only add load callback on cvq.
> > > ---
> > >  net/vhost-vdpa.c | 43 +++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 43 insertions(+)
> > >
> > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > index 10843e6d97..4f1524c2e9 100644
> > > --- a/net/vhost-vdpa.c
> > > +++ b/net/vhost-vdpa.c
> > > @@ -363,11 +363,54 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
> > >      return vhost_svq_poll(svq);
> > >  }
> > >
> > > +static int vhost_vdpa_net_load(NetClientState *nc)
> > > +{
> > > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > +    struct vhost_vdpa *v = &s->vhost_vdpa;
> > > +    VirtIONet *n;
> > > +    uint64_t features;
> > > +
> > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > +
> > > +    if (!v->shadow_vqs_enabled) {
> > > +        return 0;
> > > +    }
> > > +
> > > +    n = VIRTIO_NET(v->dev->vdev);
> > > +    features = v->dev->vdev->host_features;
> > > +    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
> > > +        const struct virtio_net_ctrl_hdr ctrl = {
> > > +            .class = VIRTIO_NET_CTRL_MAC,
> > > +            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
> > > +        };
> >
> > Can we build this directly from the cmd_out_buffer?
> >
> > > +        char *cursor = s->cvq_cmd_out_buffer;
> > > +        ssize_t dev_written;
> > > +        virtio_net_ctrl_ack state;
> >
> > I think we can read the status directly from the cmd_in_buffer.
> >
>
> Directly casting it to virtio_net_ctrl_ack? Sure.

Yes.

Thanks

>
> Thanks!
>
> > Thanks
> >
> > > +
> > > +        memcpy(cursor, &ctrl, sizeof(ctrl));
> > > +        cursor += sizeof(ctrl);
> > > +        memcpy(cursor, n->mac, sizeof(n->mac));
> > > +        cursor += sizeof(n->mac);
> > > +
> > > +        dev_written = vhost_vdpa_net_cvq_add(s, sizeof(ctrl) + sizeof(n->mac),
> > > +                                             sizeof(state));
> > > +        if (unlikely(dev_written < 0)) {
> > > +            return dev_written;
> > > +        }
> > > +
> > > +        memcpy(&state, s->cvq_cmd_in_buffer, sizeof(state));
> > > +        return state == VIRTIO_NET_OK ? 0 : -1;
> > > +    }
> > > +
> > > +    return 0;
> > > +}
> > > +
> > >  static NetClientInfo net_vhost_vdpa_cvq_info = {
> > >      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
> > >      .size = sizeof(VhostVDPAState),
> > >      .receive = vhost_vdpa_receive,
> > >      .prepare = vhost_vdpa_net_cvq_prepare,
> > > +    .load = vhost_vdpa_net_load,
> > >      .stop = vhost_vdpa_net_cvq_stop,
> > >      .cleanup = vhost_vdpa_cleanup,
> > >      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> > > --
> > > 2.31.1
> > >
> >
>



^ permalink raw reply	[flat|nested] 29+ messages in thread
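The two review suggestions in the exchange above (build the command directly in the pre-mapped out buffer instead of going through a stack `ctrl` plus `cursor`, and read the status by casting the in buffer to `virtio_net_ctrl_ack` instead of `memcpy`ing it out) could be sketched roughly as follows. This is an illustrative sketch, not QEMU code: the type definitions are stand-ins matching the virtio spec layout, and the helper names are invented for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins matching the virtio spec layout:
 * the control header is two bytes, the ack is one. */
struct virtio_net_ctrl_hdr {
    uint8_t class;
    uint8_t cmd;
};
typedef uint8_t virtio_net_ctrl_ack;

#define VIRTIO_NET_CTRL_MAC          1
#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1
#define VIRTIO_NET_OK                0
#define ETH_ALEN                     6

/* Build the MAC_ADDR_SET command directly in the pre-mapped out buffer,
 * with no intermediate struct on the stack.  Returns the command length. */
static size_t build_mac_set_cmd(void *out_buf, const uint8_t mac[ETH_ALEN])
{
    struct virtio_net_ctrl_hdr *hdr = out_buf;

    hdr->class = VIRTIO_NET_CTRL_MAC;
    hdr->cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET;
    /* The MAC immediately follows the 2-byte header. */
    memcpy(hdr + 1, mac, ETH_ALEN);
    return sizeof(*hdr) + ETH_ALEN;
}

/* Read the device-written status through a pointer cast rather than a
 * memcpy into a local, as suggested in the review. */
static int check_ack(const void *in_buf)
{
    const virtio_net_ctrl_ack *state = in_buf;

    return *state == VIRTIO_NET_OK ? 0 : -1;
}
```

Since both buffers stay mapped for the lifetime of the device after patch 08, writing in place and casting the in buffer avoids one copy in each direction per command.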

* Re: [PATCH v7 08/12] vdpa: Move command buffers map to start of net device
  2022-08-09  7:48       ` Jason Wang
@ 2022-08-09  8:03         ` Eugenio Perez Martin
  2022-08-09  8:13           ` Jason Wang
  0 siblings, 1 reply; 29+ messages in thread
From: Eugenio Perez Martin @ 2022-08-09  8:03 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Tue, Aug 9, 2022 at 9:49 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Aug 9, 2022 at 3:34 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
> >
> > On Tue, Aug 9, 2022 at 9:04 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > >
> > > > As this series will reuse them to restore the device state at the end of
> > > > a migration (or a device start), let's allocate only once at the device
> > > > start so we don't duplicate their map and unmap.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > ---
> > > >  net/vhost-vdpa.c | 123 ++++++++++++++++++++++-------------------------
> > > >  1 file changed, 58 insertions(+), 65 deletions(-)
> > > >
> > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > index 55e8a39a56..2c6a26cca0 100644
> > > > --- a/net/vhost-vdpa.c
> > > > +++ b/net/vhost-vdpa.c
> > > > @@ -263,29 +263,20 @@ static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
> > > >      return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
> > > >  }
> > > >
> > > > -/** Copy and map a guest buffer. */
> > > > -static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> > > > -                                   const struct iovec *out_data,
> > > > -                                   size_t out_num, size_t data_len, void *buf,
> > > > -                                   size_t *written, bool write)
> > > > +/** Map CVQ buffer. */
> > > > +static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
> > > > +                                  bool write)
> > > >  {
> > > >      DMAMap map = {};
> > > >      int r;
> > > >
> > > > -    if (unlikely(!data_len)) {
> > > > -        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid legnth of %s buffer\n",
> > > > -                      __func__, write ? "in" : "out");
> > > > -        return false;
> > > > -    }
> > > > -
> > > > -    *written = iov_to_buf(out_data, out_num, 0, buf, data_len);
> > > >      map.translated_addr = (hwaddr)(uintptr_t)buf;
> > > > -    map.size = vhost_vdpa_net_cvq_cmd_page_len() - 1;
> > > > +    map.size = size - 1;
> > >
> > > Just noticed this, I think I've asked for the reason before but I
> > > don't remember the answer.
> > >
> > > But it looks like a hint of a defect of the current API design.
> > >
> >
> > I can look for it in the mail list, but long story short:
> > vDPA DMA API is *not* inclusive: To map the first page, you map (.iova
> > = 0, .size = 4096).
> > IOVA tree API has been inclusive forever: To map the first page, you
> > map (.iova = 0, .size = 4095). If we map with .size = 4096, .iova =
> > 4096 is considered mapped too.
>
> This looks like a bug.
>
> {.iova=0, size=0} should be illegal but if I understand you correctly,
> it means [0, 1)?
>

On iova_tree it works the way you describe here, yes. Maybe the
member's name should have been "length" or something like that.

On intel_iommu the address *mask* is actually used to fill the size,
not the actual DMA entry length.

For SVQ I think it would be beneficial to declare two different types,
size_inclusive and size_non_inclusive, and check at compile time if
the caller is using the right type. But it's not top priority at the
moment.

Thanks!

> Thanks
>
> >
> > To adapt one to the other would have been an API change even before
> > the introduction of vhost-iova-tree.
> >
> > Thanks!
> >
> >
> > > Thanks
> > >
> > > >      map.perm = write ? IOMMU_RW : IOMMU_RO,
> > > >      r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
> > > >      if (unlikely(r != IOVA_OK)) {
> > > >          error_report("Cannot map injected element");
> > > > -        return false;
> > > > +        return r;
> > > >      }
> > > >
> > > >      r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
> > > > @@ -294,50 +285,58 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> > > >          goto dma_map_err;
> > > >      }
> > > >
> > > > -    return true;
> > > > +    return 0;
> > > >
> > > >  dma_map_err:
> > > >      vhost_iova_tree_remove(v->iova_tree, &map);
> > > > -    return false;
> > > > +    return r;
> > > >  }
> > > >
> > > > -/**
> > > > - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> > > > - *
> > > > - * @iov: [0] is the out buffer, [1] is the in one
> > > > - */
> > > > -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> > > > -                                        VirtQueueElement *elem,
> > > > -                                        struct iovec *iov)
> > > > +static int vhost_vdpa_net_cvq_prepare(NetClientState *nc)
> > > >  {
> > > > -    size_t in_copied;
> > > > -    bool ok;
> > > > +    VhostVDPAState *s;
> > > > +    int r;
> > > >
> > > > -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> > > > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> > > > -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> > > > -                                &iov[0].iov_len, false);
> > > > -    if (unlikely(!ok)) {
> > > > -        return false;
> > > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > > +
> > > > +    s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > +    if (!s->vhost_vdpa.shadow_vqs_enabled) {
> > > > +        return 0;
> > > >      }
> > > >
> > > > -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> > > > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> > > > -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> > > > -                                &in_copied, true);
> > > > -    if (unlikely(!ok)) {
> > > > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
> > > > +                               vhost_vdpa_net_cvq_cmd_page_len(), false);
> > > > +    if (unlikely(r < 0)) {
> > > > +        return r;
> > > > +    }
> > > > +
> > > > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
> > > > +                               vhost_vdpa_net_cvq_cmd_page_len(), true);
> > > > +    if (unlikely(r < 0)) {
> > > >          vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > > > -        return false;
> > > >      }
> > > >
> > > > -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> > > > -    return true;
> > > > +    return r;
> > > > +}
> > > > +
> > > > +static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
> > > > +{
> > > > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > +
> > > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > > +
> > > > +    if (s->vhost_vdpa.shadow_vqs_enabled) {
> > > > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > > > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
> > > > +    }
> > > >  }
> > > >
> > > >  static NetClientInfo net_vhost_vdpa_cvq_info = {
> > > >      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
> > > >      .size = sizeof(VhostVDPAState),
> > > >      .receive = vhost_vdpa_receive,
> > > > +    .prepare = vhost_vdpa_net_cvq_prepare,
> > > > +    .stop = vhost_vdpa_net_cvq_stop,
> > > >      .cleanup = vhost_vdpa_cleanup,
> > > >      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> > > >      .has_ufo = vhost_vdpa_has_ufo,
> > > > @@ -348,19 +347,17 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
> > > >   * Do not forward commands not supported by SVQ. Otherwise, the device could
> > > >   * accept it and qemu would not know how to update the device model.
> > > >   */
> > > > -static bool vhost_vdpa_net_cvq_validate_cmd(const struct iovec *out,
> > > > -                                            size_t out_num)
> > > > +static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
> > > >  {
> > > >      struct virtio_net_ctrl_hdr ctrl;
> > > > -    size_t n;
> > > >
> > > > -    n = iov_to_buf(out, out_num, 0, &ctrl, sizeof(ctrl));
> > > > -    if (unlikely(n < sizeof(ctrl))) {
> > > > +    if (unlikely(len < sizeof(ctrl))) {
> > > >          qemu_log_mask(LOG_GUEST_ERROR,
> > > > -                      "%s: invalid legnth of out buffer %zu\n", __func__, n);
> > > > +                      "%s: invalid legnth of out buffer %zu\n", __func__, len);
> > > >          return false;
> > > >      }
> > > >
> > > > +    memcpy(&ctrl, out_buf, sizeof(ctrl));
> > > >      switch (ctrl.class) {
> > > >      case VIRTIO_NET_CTRL_MAC:
> > > >          switch (ctrl.cmd) {
> > > > @@ -392,10 +389,14 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > > >      VhostVDPAState *s = opaque;
> > > >      size_t in_len, dev_written;
> > > >      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> > > > -    /* out and in buffers sent to the device */
> > > > -    struct iovec dev_buffers[2] = {
> > > > -        { .iov_base = s->cvq_cmd_out_buffer },
> > > > -        { .iov_base = s->cvq_cmd_in_buffer },
> > > > +    /* Out buffer sent to both the vdpa device and the device model */
> > > > +    struct iovec out = {
> > > > +        .iov_base = s->cvq_cmd_out_buffer,
> > > > +    };
> > > > +    /* In buffer sent to the device */
> > > > +    const struct iovec dev_in = {
> > > > +        .iov_base = s->cvq_cmd_in_buffer,
> > > > +        .iov_len = sizeof(virtio_net_ctrl_ack),
> > > >      };
> > > >      /* in buffer used for device model */
> > > >      const struct iovec in = {
> > > > @@ -405,17 +406,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > > >      int r = -EINVAL;
> > > >      bool ok;
> > > >
> > > > -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> > > > -    if (unlikely(!ok)) {
> > > > -        goto out;
> > > > -    }
> > > > -
> > > > -    ok = vhost_vdpa_net_cvq_validate_cmd(&dev_buffers[0], 1);
> > > > +    out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
> > > > +                             s->cvq_cmd_out_buffer,
> > > > +                             vhost_vdpa_net_cvq_cmd_len());
> > > > +    ok = vhost_vdpa_net_cvq_validate_cmd(s->cvq_cmd_out_buffer, out.iov_len);
> > > >      if (unlikely(!ok)) {
> > > >          goto out;
> > > >      }
> > > >
> > > > -    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
> > > > +    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
> > > >      if (unlikely(r != 0)) {
> > > >          if (unlikely(r == -ENOSPC)) {
> > > >              qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> > > > @@ -435,13 +434,13 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > > >          goto out;
> > > >      }
> > > >
> > > > -    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> > > > +    memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
> > > >      if (status != VIRTIO_NET_OK) {
> > > >          goto out;
> > > >      }
> > > >
> > > >      status = VIRTIO_NET_ERR;
> > > > -    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> > > > +    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
> > > >      if (status != VIRTIO_NET_OK) {
> > > >          error_report("Bad CVQ processing in model");
> > > >      }
> > > > @@ -454,12 +453,6 @@ out:
> > > >      }
> > > >      vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
> > > >      g_free(elem);
> > > > -    if (dev_buffers[0].iov_base) {
> > > > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[0].iov_base);
> > > > -    }
> > > > -    if (dev_buffers[1].iov_base) {
> > > > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
> > > > -    }
> > > >      return r;
> > > >  }
> > > >
> > > > --
> > > > 2.31.1
> > > >
> > >
> >
>



^ permalink raw reply	[flat|nested] 29+ messages in thread
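The mismatch discussed in the thread above can be made concrete with a small sketch. The helper names below are illustrative (they are not the QEMU APIs): the vDPA DMA API takes an exclusive length, so `{.iova = 0, .size = 4096}` maps exactly the first page, while the IOVA tree's `DMAMap.size` is an inclusive last-offset, so the same page is `{.iova = 0, .size = 4095}` — which is why the code subtracts one when filling the map.

```c
#include <assert.h>
#include <stdint.h>

/* Convert an exclusive DMA length ([iova, iova + len)) into the
 * inclusive size the IOVA tree expects ([iova, iova + size]). */
static uint64_t dma_len_to_tree_size(uint64_t len)
{
    assert(len != 0); /* a zero-length mapping is illegal either way */
    return len - 1;
}

/* And back: an inclusive tree size into an exclusive DMA length. */
static uint64_t tree_size_to_dma_len(uint64_t size)
{
    return size + 1;
}
```

Under the inclusive convention a `size` of 0 still denotes a one-byte mapping rather than an empty one, which is the ambiguity Jason points at; distinct types (or a `[start, end]` pair, as he suggests) would let the compiler catch a caller mixing the two conventions.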

* Re: [PATCH v7 08/12] vdpa: Move command buffers map to start of net device
  2022-08-09  8:03         ` Eugenio Perez Martin
@ 2022-08-09  8:13           ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2022-08-09  8:13 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, Cindy Lu, Harpreet Singh Anand, Gonglei (Arei),
	Stefano Garzarella, Parav Pandit, Eric Blake, Gautam Dawar,
	Markus Armbruster, Paolo Bonzini, Laurent Vivier,
	Michael S. Tsirkin, Stefan Hajnoczi, Liuxiangdong, Eli Cohen,
	Cornelia Huck, Zhu Lingshan

On Tue, Aug 9, 2022 at 4:04 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Tue, Aug 9, 2022 at 9:49 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Tue, Aug 9, 2022 at 3:34 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > >
> > > On Tue, Aug 9, 2022 at 9:04 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Fri, Aug 5, 2022 at 2:29 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > > >
> > > > > As this series will reuse them to restore the device state at the end of
> > > > > a migration (or a device start), let's allocate only once at the device
> > > > > start so we don't duplicate their map and unmap.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > ---
> > > > >  net/vhost-vdpa.c | 123 ++++++++++++++++++++++-------------------------
> > > > >  1 file changed, 58 insertions(+), 65 deletions(-)
> > > > >
> > > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > > index 55e8a39a56..2c6a26cca0 100644
> > > > > --- a/net/vhost-vdpa.c
> > > > > +++ b/net/vhost-vdpa.c
> > > > > @@ -263,29 +263,20 @@ static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
> > > > >      return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
> > > > >  }
> > > > >
> > > > > -/** Copy and map a guest buffer. */
> > > > > -static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> > > > > -                                   const struct iovec *out_data,
> > > > > -                                   size_t out_num, size_t data_len, void *buf,
> > > > > -                                   size_t *written, bool write)
> > > > > +/** Map CVQ buffer. */
> > > > > +static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
> > > > > +                                  bool write)
> > > > >  {
> > > > >      DMAMap map = {};
> > > > >      int r;
> > > > >
> > > > > -    if (unlikely(!data_len)) {
> > > > > -        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid legnth of %s buffer\n",
> > > > > -                      __func__, write ? "in" : "out");
> > > > > -        return false;
> > > > > -    }
> > > > > -
> > > > > -    *written = iov_to_buf(out_data, out_num, 0, buf, data_len);
> > > > >      map.translated_addr = (hwaddr)(uintptr_t)buf;
> > > > > -    map.size = vhost_vdpa_net_cvq_cmd_page_len() - 1;
> > > > > +    map.size = size - 1;
> > > >
> > > > Just noticed this, I think I've asked for the reason before but I
> > > > don't remember the answer.
> > > >
> > > > But it looks like a hint of a defect of the current API design.
> > > >
> > >
> > > I can look for it in the mail list, but long story short:
> > > vDPA DMA API is *not* inclusive: To map the first page, you map (.iova
> > > = 0, .size = 4096).
> > > IOVA tree API has been inclusive forever: To map the first page, you
> > > map (.iova = 0, .size = 4095). If we map with .size = 4096, .iova =
> > > 4096 is considered mapped too.
> >
> > This looks like a bug.
> >
> > {.iova=0, size=0} should be illegal but if I understand you correctly,
> > it means [0, 1)?
> >
>
> On iova_tree it works the way you point here, yes. Maybe the member's
> name should have been length or something like that.
>
> On intel_iommu the address *mask* is actually used to fill the size,
> not the actual DMA entry length.
>
> For SVQ I think it would be beneficial to declare two different types,
> size_inclusive and size_non_inclusive, and check at compile time if
> the caller is using the right type.

That's sub-optimal, we'd better go with a single type of size or
switch to use [start, end].

> But it's not top priority at the
> moment.

Yes, let's optimize it on top.

Thanks

>
> Thanks!
>
> > Thanks
> >
> > >
> > > To adapt one to the other would have been an API change even before
> > > the introduction of vhost-iova-tree.
> > >
> > > Thanks!
> > >
> > >
> > > > Thanks
> > > >
> > > > >      map.perm = write ? IOMMU_RW : IOMMU_RO,
> > > > >      r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
> > > > >      if (unlikely(r != IOVA_OK)) {
> > > > >          error_report("Cannot map injected element");
> > > > > -        return false;
> > > > > +        return r;
> > > > >      }
> > > > >
> > > > >      r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
> > > > > @@ -294,50 +285,58 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
> > > > >          goto dma_map_err;
> > > > >      }
> > > > >
> > > > > -    return true;
> > > > > +    return 0;
> > > > >
> > > > >  dma_map_err:
> > > > >      vhost_iova_tree_remove(v->iova_tree, &map);
> > > > > -    return false;
> > > > > +    return r;
> > > > >  }
> > > > >
> > > > > -/**
> > > > > - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> > > > > - *
> > > > > - * @iov: [0] is the out buffer, [1] is the in one
> > > > > - */
> > > > > -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> > > > > -                                        VirtQueueElement *elem,
> > > > > -                                        struct iovec *iov)
> > > > > +static int vhost_vdpa_net_cvq_prepare(NetClientState *nc)
> > > > >  {
> > > > > -    size_t in_copied;
> > > > > -    bool ok;
> > > > > +    VhostVDPAState *s;
> > > > > +    int r;
> > > > >
> > > > > -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> > > > > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> > > > > -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> > > > > -                                &iov[0].iov_len, false);
> > > > > -    if (unlikely(!ok)) {
> > > > > -        return false;
> > > > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > > > +
> > > > > +    s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > > +    if (!s->vhost_vdpa.shadow_vqs_enabled) {
> > > > > +        return 0;
> > > > >      }
> > > > >
> > > > > -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> > > > > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> > > > > -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> > > > > -                                &in_copied, true);
> > > > > -    if (unlikely(!ok)) {
> > > > > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
> > > > > +                               vhost_vdpa_net_cvq_cmd_page_len(), false);
> > > > > +    if (unlikely(r < 0)) {
> > > > > +        return r;
> > > > > +    }
> > > > > +
> > > > > +    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
> > > > > +                               vhost_vdpa_net_cvq_cmd_page_len(), true);
> > > > > +    if (unlikely(r < 0)) {
> > > > >          vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > > > > -        return false;
> > > > >      }
> > > > >
> > > > > -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> > > > > -    return true;
> > > > > +    return r;
> > > > > +}
> > > > > +
> > > > > +static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
> > > > > +{
> > > > > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > > +
> > > > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > > > +
> > > > > +    if (s->vhost_vdpa.shadow_vqs_enabled) {
> > > > > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> > > > > +        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
> > > > > +    }
> > > > >  }
> > > > >
> > > > >  static NetClientInfo net_vhost_vdpa_cvq_info = {
> > > > >      .type = NET_CLIENT_DRIVER_VHOST_VDPA,
> > > > >      .size = sizeof(VhostVDPAState),
> > > > >      .receive = vhost_vdpa_receive,
> > > > > +    .prepare = vhost_vdpa_net_cvq_prepare,
> > > > > +    .stop = vhost_vdpa_net_cvq_stop,
> > > > >      .cleanup = vhost_vdpa_cleanup,
> > > > >      .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> > > > >      .has_ufo = vhost_vdpa_has_ufo,
> > > > > @@ -348,19 +347,17 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
> > > > >   * Do not forward commands not supported by SVQ. Otherwise, the device could
> > > > >   * accept it and qemu would not know how to update the device model.
> > > > >   */
> > > > > -static bool vhost_vdpa_net_cvq_validate_cmd(const struct iovec *out,
> > > > > -                                            size_t out_num)
> > > > > +static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
> > > > >  {
> > > > >      struct virtio_net_ctrl_hdr ctrl;
> > > > > -    size_t n;
> > > > >
> > > > > -    n = iov_to_buf(out, out_num, 0, &ctrl, sizeof(ctrl));
> > > > > -    if (unlikely(n < sizeof(ctrl))) {
> > > > > +    if (unlikely(len < sizeof(ctrl))) {
> > > > >          qemu_log_mask(LOG_GUEST_ERROR,
> > > > > -                      "%s: invalid legnth of out buffer %zu\n", __func__, n);
> > > > > +                      "%s: invalid length of out buffer %zu\n", __func__, len);
> > > > >          return false;
> > > > >      }
> > > > >
> > > > > +    memcpy(&ctrl, out_buf, sizeof(ctrl));
> > > > >      switch (ctrl.class) {
> > > > >      case VIRTIO_NET_CTRL_MAC:
> > > > >          switch (ctrl.cmd) {
> > > > > @@ -392,10 +389,14 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > > > >      VhostVDPAState *s = opaque;
> > > > >      size_t in_len, dev_written;
> > > > >      virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> > > > > -    /* out and in buffers sent to the device */
> > > > > -    struct iovec dev_buffers[2] = {
> > > > > -        { .iov_base = s->cvq_cmd_out_buffer },
> > > > > -        { .iov_base = s->cvq_cmd_in_buffer },
> > > > > +    /* Out buffer sent to both the vdpa device and the device model */
> > > > > +    struct iovec out = {
> > > > > +        .iov_base = s->cvq_cmd_out_buffer,
> > > > > +    };
> > > > > +    /* In buffer sent to the device */
> > > > > +    const struct iovec dev_in = {
> > > > > +        .iov_base = s->cvq_cmd_in_buffer,
> > > > > +        .iov_len = sizeof(virtio_net_ctrl_ack),
> > > > >      };
> > > > >      /* in buffer used for device model */
> > > > >      const struct iovec in = {
> > > > > @@ -405,17 +406,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > > > >      int r = -EINVAL;
> > > > >      bool ok;
> > > > >
> > > > > -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> > > > > -    if (unlikely(!ok)) {
> > > > > -        goto out;
> > > > > -    }
> > > > > -
> > > > > -    ok = vhost_vdpa_net_cvq_validate_cmd(&dev_buffers[0], 1);
> > > > > +    out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
> > > > > +                             s->cvq_cmd_out_buffer,
> > > > > +                             vhost_vdpa_net_cvq_cmd_len());
> > > > > +    ok = vhost_vdpa_net_cvq_validate_cmd(s->cvq_cmd_out_buffer, out.iov_len);
> > > > >      if (unlikely(!ok)) {
> > > > >          goto out;
> > > > >      }
> > > > >
> > > > > -    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
> > > > > +    r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
> > > > >      if (unlikely(r != 0)) {
> > > > >          if (unlikely(r == -ENOSPC)) {
> > > > >              qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> > > > > @@ -435,13 +434,13 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> > > > >          goto out;
> > > > >      }
> > > > >
> > > > > -    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> > > > > +    memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
> > > > >      if (status != VIRTIO_NET_OK) {
> > > > >          goto out;
> > > > >      }
> > > > >
> > > > >      status = VIRTIO_NET_ERR;
> > > > > -    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> > > > > +    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
> > > > >      if (status != VIRTIO_NET_OK) {
> > > > >          error_report("Bad CVQ processing in model");
> > > > >      }
> > > > > @@ -454,12 +453,6 @@ out:
> > > > >      }
> > > > >      vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
> > > > >      g_free(elem);
> > > > > -    if (dev_buffers[0].iov_base) {
> > > > > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[0].iov_base);
> > > > > -    }
> > > > > -    if (dev_buffers[1].iov_base) {
> > > > > -        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
> > > > > -    }
> > > > >      return r;
> > > > >  }
> > > > >
> > > > > --
> > > > > 2.31.1
> > > > >
> > > >
> > >
> >
>





Thread overview: 29+ messages
2022-08-04 18:28 [PATCH v7 00/12] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
2022-08-04 18:28 ` [PATCH v7 01/12] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
2022-08-05  3:48   ` Jason Wang
2022-08-04 18:28 ` [PATCH v7 02/12] vhost: use SVQ element ndescs instead of opaque data for desc validation Eugenio Pérez
2022-08-05  3:48   ` Jason Wang
2022-08-04 18:28 ` [PATCH v7 03/12] vhost: Delete useless read memory barrier Eugenio Pérez
2022-08-05  3:48   ` Jason Wang
2022-08-04 18:28 ` [PATCH v7 04/12] vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush Eugenio Pérez
2022-08-05  3:50   ` Jason Wang
2022-08-04 18:28 ` [PATCH v7 05/12] vhost_net: Add NetClientInfo prepare callback Eugenio Pérez
2022-08-09  6:53   ` Jason Wang
2022-08-09  7:34     ` Eugenio Perez Martin
2022-08-04 18:28 ` [PATCH v7 06/12] vhost_net: Add NetClientInfo stop callback Eugenio Pérez
2022-08-04 18:28 ` [PATCH v7 07/12] vdpa: add net_vhost_vdpa_cvq_info NetClientInfo Eugenio Pérez
2022-08-04 18:28 ` [PATCH v7 08/12] vdpa: Move command buffers map to start of net device Eugenio Pérez
2022-08-09  7:03   ` Jason Wang
2022-08-09  7:33     ` Eugenio Perez Martin
2022-08-09  7:48       ` Jason Wang
2022-08-09  8:03         ` Eugenio Perez Martin
2022-08-09  8:13           ` Jason Wang
2022-08-04 18:28 ` [PATCH v7 09/12] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
2022-08-09  7:11   ` Jason Wang
2022-08-04 18:28 ` [PATCH v7 10/12] vhost_net: add NetClientState->load() callback Eugenio Pérez
2022-08-04 18:28 ` [PATCH v7 11/12] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
2022-08-09  7:16   ` Jason Wang
2022-08-09  7:35     ` Eugenio Perez Martin
2022-08-09  7:49       ` Jason Wang
2022-08-04 18:28 ` [PATCH v7 12/12] vdpa: Delete CVQ migration blocker Eugenio Pérez
2022-08-09  7:17   ` Jason Wang
