* [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ
@ 2022-07-22 11:12 Eugenio Pérez
  2022-07-22 11:12 ` [PATCH v4 1/7] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

The CVQ of net vhost-vdpa devices can be intercepted since the work of [1],
and the virtio-net device model is updated accordingly. Migration was still
blocked, because although the state could be migrated between VMMs it was
not possible to restore it on the destination NIC.

This series adds support for SVQ to inject external messages without the
guest's knowledge, so all the guest-visible state is restored before the
guest is resumed. This is done using standard CVQ messages, so the
vhost-vdpa device does not need to learn how to restore state: as long as
it offers the feature, it knows how to handle the commands.
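
In short, before the guest runs, qemu builds each command, maps it into the
device's IOVA space, sends it through the shadow CVQ and polls for the ack.
A minimal sketch of that flow, using the function names the patches below
introduce (error handling omitted):

    vhost_vdpa_net_cvq_map_sg(s, out, out_num, dev_buffers); /* map cmd    */
    vhost_vdpa_net_cvq_add(svq, dev_buffers);                /* send, poll */
    vhost_vdpa_cvq_unmap_buf(v, dev_buffers[0].iov_base);    /* unmap out  */
    vhost_vdpa_cvq_unmap_buf(v, dev_buffers[1].iov_base);    /* unmap in   */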

This series needs fixes [1] and [2] to be applied to achieve full live
migration.

Thanks!

[1] https://lists.nongnu.org/archive/html/qemu-devel/2022-07/msg02984.html
[2] https://lists.nongnu.org/archive/html/qemu-devel/2022-07/msg03993.html

v4:
- Actually use NetClientInfo callback.

v3:
- Route vhost-vdpa start code through NetClientInfo callback.
- Delete extra vhost_net_stop_one() call.

v2:
- Fix SIGSEGV dereferencing SVQ when not in svq mode

v1 from RFC:
- Do not reorder DRIVER_OK & enable patches.
- Delete leftovers

Eugenio Pérez (7):
  vhost: stop transfer elem ownership in vhost_handle_guest_kick
  vdpa: Extract vhost_vdpa_net_cvq_add from
    vhost_vdpa_net_handle_ctrl_avail
  vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg
  vdpa: add NetClientState->start() callback
  vdpa: Reorder net_vhost_vdpa_info
  vdpa: Add virtio-net mac address via CVQ at start
  vdpa: Delete CVQ migration blocker

 include/hw/virtio/vhost-vdpa.h     |   1 -
 include/net/net.h                  |   2 +
 hw/net/vhost_net.c                 |   7 ++
 hw/virtio/vhost-shadow-virtqueue.c |  10 +-
 hw/virtio/vhost-vdpa.c             |  14 ---
 net/vhost-vdpa.c                   | 186 +++++++++++++++++++++--------
 6 files changed, 146 insertions(+), 74 deletions(-)

-- 
2.31.1





* [PATCH v4 1/7] vhost: stop transfer elem ownership in vhost_handle_guest_kick
  2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
@ 2022-07-22 11:12 ` Eugenio Pérez
  2022-07-22 11:12 ` [PATCH v4 2/7] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

It was easier to let vhost_svq_add own the element's memory. Now that we
will allow qemu to add elements to an SVQ without the guest's knowledge,
it's better to keep ownership in the caller.
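
The caller-side pattern then looks roughly like this (a sketch with a
hypothetical helper name; the real change is the hunk below):

    static void svq_try_add_elem(VhostShadowVirtqueue *svq, VirtQueue *vq)
    {
        g_autofree VirtQueueElement *elem = virtqueue_pop(vq, sizeof(*elem));
        int r;

        if (!elem) {
            return;
        }

        r = vhost_svq_add(svq, elem->out_sg, elem->out_num,
                          elem->in_sg, elem->in_num, elem);
        if (r == -ENOSPC) {
            /* Queue full: stash the element for a later kick, don't free */
            svq->next_guest_avail_elem = g_steal_pointer(&elem);
            return;
        }
        if (unlikely(r != 0)) {
            return; /* invalid element: g_autofree releases it here */
        }

        elem = NULL; /* success: the element now belongs to the SVQ */
    }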

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e4956728dd..ffd2b2c972 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -233,9 +233,6 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
 /**
  * Add an element to a SVQ.
  *
- * The caller must check that there is enough slots for the new element. It
- * takes ownership of the element: In case of failure not ENOSPC, it is free.
- *
  * Return -EINVAL if element is invalid, -ENOSPC if dev queue is full
  */
 int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
@@ -252,7 +249,6 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
 
     ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
     if (unlikely(!ok)) {
-        g_free(elem);
         return -EINVAL;
     }
 
@@ -293,7 +289,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
         virtio_queue_set_notification(svq->vq, false);
 
         while (true) {
-            VirtQueueElement *elem;
+            g_autofree VirtQueueElement *elem;
             int r;
 
             if (svq->next_guest_avail_elem) {
@@ -324,12 +320,14 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
                      * queue the current guest descriptor and ignore kicks
                      * until some elements are used.
                      */
-                    svq->next_guest_avail_elem = elem;
+                    svq->next_guest_avail_elem = g_steal_pointer(&elem);
                 }
 
                 /* VQ is full or broken, just return and ignore kicks */
                 return;
             }
+            /* elem belongs to SVQ or external caller now */
+            elem = NULL;
         }
 
         virtio_queue_set_notification(svq->vq, true);
-- 
2.31.1




* [PATCH v4 2/7] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail
  2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
  2022-07-22 11:12 ` [PATCH v4 1/7] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
@ 2022-07-22 11:12 ` Eugenio Pérez
  2022-07-26  2:50   ` Jason Wang
  2022-07-22 11:12 ` [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg Eugenio Pérez
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

So we can reuse it to inject state messages.
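
The split makes the send-and-poll step reusable outside the guest's control
path; a sketch of the intended reuse (hypothetical caller, with dev_buffers
mapped beforehand as in the hunk below):

    static int vhost_vdpa_net_restore_cmd(VhostShadowVirtqueue *svq,
                                          const struct iovec *dev_buffers)
    {
        /* Inject a qemu-built command; no guest element is involved */
        virtio_net_ctrl_ack state = vhost_vdpa_net_cvq_add(svq, dev_buffers);

        return state == VIRTIO_NET_OK ? 0 : -1;
    }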

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 74 ++++++++++++++++++++++++++++++------------------
 1 file changed, 47 insertions(+), 27 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 6abad276a6..1b82ac2e07 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -334,6 +334,46 @@ static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
     return true;
 }
 
+static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
+                                               const struct iovec *dev_buffers)
+{
+    /* in buffer used for device model */
+    virtio_net_ctrl_ack status;
+    size_t dev_written;
+    int r;
+
+    /*
+     * Add a fake non-NULL VirtQueueElement since we'll remove before SVQ
+     * event loop can get it.
+     */
+    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, (void *)1);
+    if (unlikely(r != 0)) {
+        if (unlikely(r == -ENOSPC)) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
+                          __func__);
+        }
+        return VIRTIO_NET_ERR;
+    }
+
+    /*
+     * We can poll here since we've had BQL from the time we sent the
+     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
+     * when BQL is released
+     */
+    dev_written = vhost_svq_poll(svq);
+    if (unlikely(dev_written < sizeof(status))) {
+        error_report("Insufficient written data (%zu)", dev_written);
+        return VIRTIO_NET_ERR;
+    }
+
+    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
+    if (status != VIRTIO_NET_OK) {
+        return VIRTIO_NET_ERR;
+    }
+
+    return VIRTIO_NET_OK;
+}
+
 /**
  * Do not forward commands not supported by SVQ. Otherwise, the device could
  * accept it and qemu would not know how to update the device model.
@@ -380,19 +420,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
                                             void *opaque)
 {
     VhostVDPAState *s = opaque;
-    size_t in_len, dev_written;
+    size_t in_len;
     virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
     /* out and in buffers sent to the device */
     struct iovec dev_buffers[2] = {
         { .iov_base = s->cvq_cmd_out_buffer },
         { .iov_base = s->cvq_cmd_in_buffer },
     };
-    /* in buffer used for device model */
+    /* in buffer seen by virtio-net device model */
     const struct iovec in = {
         .iov_base = &status,
         .iov_len = sizeof(status),
     };
-    int r = -EINVAL;
     bool ok;
 
     ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
@@ -405,35 +444,16 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
         goto out;
     }
 
-    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
-    if (unlikely(r != 0)) {
-        if (unlikely(r == -ENOSPC)) {
-            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
-                          __func__);
-        }
-        goto out;
-    }
-
-    /*
-     * We can poll here since we've had BQL from the time we sent the
-     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
-     * when BQL is released
-     */
-    dev_written = vhost_svq_poll(svq);
-    if (unlikely(dev_written < sizeof(status))) {
-        error_report("Insufficient written data (%zu)", dev_written);
-        goto out;
-    }
-
-    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
+    status = vhost_vdpa_net_cvq_add(svq, dev_buffers);
     if (status != VIRTIO_NET_OK) {
         goto out;
     }
 
     status = VIRTIO_NET_ERR;
-    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
-    if (status != VIRTIO_NET_OK) {
+    in_len = virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
+    if (in_len != sizeof(status) || status != VIRTIO_NET_OK) {
         error_report("Bad CVQ processing in model");
+        return VIRTIO_NET_ERR;
     }
 
 out:
@@ -450,7 +470,7 @@ out:
     if (dev_buffers[1].iov_base) {
         vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
     }
-    return r;
+    return status == VIRTIO_NET_OK ? 0 : 1;
 }
 
 static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
-- 
2.31.1




* [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg
  2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
  2022-07-22 11:12 ` [PATCH v4 1/7] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
  2022-07-22 11:12 ` [PATCH v4 2/7] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
@ 2022-07-22 11:12 ` Eugenio Pérez
  2022-07-25  8:48   ` Jason Wang
  2022-07-22 11:12 ` [PATCH v4 4/7] vdpa: add NetClientState->start() callback Eugenio Pérez
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

So it's generic enough to accept any out sg buffer and we can inject
NIC state messages.
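
A caller can now pass any qemu-built out sg; for example (sketch, where ctrl
and mac are a virtio-net CVQ command header and its payload):

    const struct iovec out[] = {
        { .iov_base = (void *)&ctrl, .iov_len = sizeof(ctrl) },
        { .iov_base = mac,           .iov_len = sizeof(mac)  },
    };
    ok = vhost_vdpa_net_cvq_map_sg(s, out, ARRAY_SIZE(out), dev_buffers);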

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 1b82ac2e07..bbe1830824 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -302,35 +302,36 @@ dma_map_err:
 }
 
 /**
- * Copy the guest element into a dedicated buffer suitable to be sent to NIC
+ * Maps out sg and in buffer into dedicated buffers suitable to be sent to NIC
  *
- * @iov: [0] is the out buffer, [1] is the in one
+ * @dev_iov: [0] is the out buffer, [1] is the in one
  */
-static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
-                                        VirtQueueElement *elem,
-                                        struct iovec *iov)
+static bool vhost_vdpa_net_cvq_map_sg(VhostVDPAState *s,
+                                      const struct iovec *out, size_t out_num,
+                                      struct iovec *dev_iov)
 {
     size_t in_copied;
     bool ok;
 
-    iov[0].iov_base = s->cvq_cmd_out_buffer;
-    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
-                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
-                                &iov[0].iov_len, false);
+    dev_iov[0].iov_base = s->cvq_cmd_out_buffer;
+    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, out, out_num,
+                                vhost_vdpa_net_cvq_cmd_len(),
+                                dev_iov[0].iov_base, &dev_iov[0].iov_len,
+                                false);
     if (unlikely(!ok)) {
         return false;
     }
 
-    iov[1].iov_base = s->cvq_cmd_in_buffer;
+    dev_iov[1].iov_base = s->cvq_cmd_in_buffer;
     ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
-                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
-                                &in_copied, true);
+                                sizeof(virtio_net_ctrl_ack),
+                                dev_iov[1].iov_base, &in_copied, true);
     if (unlikely(!ok)) {
         vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
         return false;
     }
 
-    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
+    dev_iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
     return true;
 }
 
@@ -434,7 +435,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
     };
     bool ok;
 
-    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
+    ok = vhost_vdpa_net_cvq_map_sg(s, elem->out_sg, elem->out_num, dev_buffers);
     if (unlikely(!ok)) {
         goto out;
     }
-- 
2.31.1




* [PATCH v4 4/7] vdpa: add NetClientState->start() callback
  2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (2 preceding siblings ...)
  2022-07-22 11:12 ` [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg Eugenio Pérez
@ 2022-07-22 11:12 ` Eugenio Pérez
  2022-07-26  2:52   ` Jason Wang
  2022-07-22 11:12 ` [PATCH v4 5/7] vdpa: Reorder net_vhost_vdpa_info Eugenio Pérez
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

It allows per-net-client operations right after the device's successful
start.

Vhost-vdpa net will use it to add the CVQ buffers to restore the device
status.
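
From the net client's point of view the contract is simple (sketch; the
actual vhost-vdpa implementation is added later in this series):

    static int vhost_vdpa_net_start(NetClientState *nc)
    {
        /*
         * Called by vhost_net_start_one() right after vhost_dev_start()
         * succeeds. Returning < 0 aborts the start and rolls it back.
         */
        return 0;
    }

    /* wired in via NetClientInfo: .start = vhost_vdpa_net_start */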

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/net/net.h  | 2 ++
 hw/net/vhost_net.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 523136c7ac..ad9e80083a 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -44,6 +44,7 @@ typedef struct NICConf {
 
 typedef void (NetPoll)(NetClientState *, bool enable);
 typedef bool (NetCanReceive)(NetClientState *);
+typedef int (NetStart)(NetClientState *);
 typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
 typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
 typedef void (NetCleanup) (NetClientState *);
@@ -71,6 +72,7 @@ typedef struct NetClientInfo {
     NetReceive *receive_raw;
     NetReceiveIOV *receive_iov;
     NetCanReceive *can_receive;
+    NetStart *start;
     NetCleanup *cleanup;
     LinkStatusChanged *link_status_changed;
     QueryRxFilter *query_rx_filter;
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index ccac5b7a64..ddd9ee0441 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -274,6 +274,13 @@ static int vhost_net_start_one(struct vhost_net *net,
             }
         }
     }
+
+    if (net->nc->info->start) {
+        r = net->nc->info->start(net->nc);
+        if (r < 0) {
+            goto fail;
+        }
+    }
     return 0;
 fail:
     file.fd = -1;
-- 
2.31.1




* [PATCH v4 5/7] vdpa: Reorder net_vhost_vdpa_info
  2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (3 preceding siblings ...)
  2022-07-22 11:12 ` [PATCH v4 4/7] vdpa: add NetClientState->start() callback Eugenio Pérez
@ 2022-07-22 11:12 ` Eugenio Pérez
  2022-07-22 11:12 ` [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
  2022-07-22 11:12 ` [PATCH v4 7/7] vdpa: Delete CVQ migration blocker Eugenio Pérez
  6 siblings, 0 replies; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

Since we're going to use a new info callback to restore the NIC status,
that callback needs to be able to send and receive CVQ commands. Reorder
the code so all the functions it needs are defined before it.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index bbe1830824..61516b1432 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -211,16 +211,6 @@ static ssize_t vhost_vdpa_receive(NetClientState *nc, const uint8_t *buf,
     return 0;
 }
 
-static NetClientInfo net_vhost_vdpa_info = {
-        .type = NET_CLIENT_DRIVER_VHOST_VDPA,
-        .size = sizeof(VhostVDPAState),
-        .receive = vhost_vdpa_receive,
-        .cleanup = vhost_vdpa_cleanup,
-        .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
-        .has_ufo = vhost_vdpa_has_ufo,
-        .check_peer_type = vhost_vdpa_check_peer_type,
-};
-
 static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
 {
     VhostIOVATree *tree = v->iova_tree;
@@ -375,6 +365,16 @@ static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
     return VIRTIO_NET_OK;
 }
 
+static NetClientInfo net_vhost_vdpa_info = {
+        .type = NET_CLIENT_DRIVER_VHOST_VDPA,
+        .size = sizeof(VhostVDPAState),
+        .receive = vhost_vdpa_receive,
+        .cleanup = vhost_vdpa_cleanup,
+        .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
+        .has_ufo = vhost_vdpa_has_ufo,
+        .check_peer_type = vhost_vdpa_check_peer_type,
+};
+
 /**
  * Do not forward commands not supported by SVQ. Otherwise, the device could
  * accept it and qemu would not know how to update the device model.
-- 
2.31.1




* [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start
  2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (4 preceding siblings ...)
  2022-07-22 11:12 ` [PATCH v4 5/7] vdpa: Reorder net_vhost_vdpa_info Eugenio Pérez
@ 2022-07-22 11:12 ` Eugenio Pérez
  2022-07-25  9:32   ` Jason Wang
  2022-07-22 11:12 ` [PATCH v4 7/7] vdpa: Delete CVQ migration blocker Eugenio Pérez
  6 siblings, 1 reply; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

This is needed so the destination vdpa device sees the same state as the
guest set in the source.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 61516b1432..3e15a42c35 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -365,10 +365,71 @@ static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
     return VIRTIO_NET_OK;
 }
 
+static int vhost_vdpa_net_start(NetClientState *nc)
+{
+    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+    struct vhost_vdpa *v = &s->vhost_vdpa;
+    VirtIONet *n;
+    uint64_t features;
+    VhostShadowVirtqueue *svq;
+
+    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+
+    if (!v->shadow_vqs_enabled) {
+        return 0;
+    }
+
+    if (v->dev->nvqs != 1 &&
+        v->dev->vq_index + v->dev->nvqs != v->dev->vq_index_end) {
+        /* Only interested in CVQ */
+        return 0;
+    }
+
+    n = VIRTIO_NET(v->dev->vdev);
+    features = v->dev->vdev->host_features;
+    svq = g_ptr_array_index(v->shadow_vqs, 0);
+    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
+        const struct virtio_net_ctrl_hdr ctrl = {
+            .class = VIRTIO_NET_CTRL_MAC,
+            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
+        };
+        uint8_t mac[6];
+        const struct iovec out[] = {
+            {
+                .iov_base = (void *)&ctrl,
+                .iov_len = sizeof(ctrl),
+            },{
+                .iov_base = mac,
+                .iov_len = sizeof(mac),
+            },
+        };
+        struct iovec dev_buffers[2] = {
+            { .iov_base = s->cvq_cmd_out_buffer },
+            { .iov_base = s->cvq_cmd_in_buffer },
+        };
+        bool ok;
+        virtio_net_ctrl_ack state;
+
+        ok = vhost_vdpa_net_cvq_map_sg(s, out, ARRAY_SIZE(out), dev_buffers);
+        if (unlikely(!ok)) {
+            return -1;
+        }
+
+        memcpy(mac, n->mac, sizeof(mac));
+        state = vhost_vdpa_net_cvq_add(svq, dev_buffers);
+        vhost_vdpa_cvq_unmap_buf(v, dev_buffers[0].iov_base);
+        vhost_vdpa_cvq_unmap_buf(v, dev_buffers[1].iov_base);
+        return state == VIRTIO_NET_OK ? 0 : 1;
+    }
+
+    return 0;
+}
+
 static NetClientInfo net_vhost_vdpa_info = {
         .type = NET_CLIENT_DRIVER_VHOST_VDPA,
         .size = sizeof(VhostVDPAState),
         .receive = vhost_vdpa_receive,
+        .start = vhost_vdpa_net_start,
         .cleanup = vhost_vdpa_cleanup,
         .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
         .has_ufo = vhost_vdpa_has_ufo,
-- 
2.31.1




* [PATCH v4 7/7] vdpa: Delete CVQ migration blocker
  2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
                   ` (5 preceding siblings ...)
  2022-07-22 11:12 ` [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
@ 2022-07-22 11:12 ` Eugenio Pérez
  6 siblings, 0 replies; 17+ messages in thread
From: Eugenio Pérez @ 2022-07-22 11:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Jason Wang,
	Michael S. Tsirkin, Stefan Hajnoczi, Cindy Lu, Liuxiangdong

We can restore the device state in the destination via CVQ now. Remove
the migration blocker.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |  1 -
 hw/virtio/vhost-vdpa.c         | 14 --------------
 net/vhost-vdpa.c               |  2 --
 3 files changed, 17 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index d10a89303e..1111d85643 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -35,7 +35,6 @@ typedef struct vhost_vdpa {
     bool shadow_vqs_enabled;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
-    Error *migration_blocker;
     GPtrArray *shadow_vqs;
     const VhostShadowVirtqueueOps *shadow_vq_ops;
     void *shadow_vq_ops_opaque;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 3ff9ce3501..9a2daef7e3 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1023,13 +1023,6 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
         return true;
     }
 
-    if (v->migration_blocker) {
-        int r = migrate_add_blocker(v->migration_blocker, &err);
-        if (unlikely(r < 0)) {
-            return false;
-        }
-    }
-
     for (i = 0; i < v->shadow_vqs->len; ++i) {
         VirtQueue *vq = virtio_get_queue(dev->vdev, dev->vq_index + i);
         VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
@@ -1072,10 +1065,6 @@ err:
         vhost_svq_stop(svq);
     }
 
-    if (v->migration_blocker) {
-        migrate_del_blocker(v->migration_blocker);
-    }
-
     return false;
 }
 
@@ -1095,9 +1084,6 @@ static bool vhost_vdpa_svqs_stop(struct vhost_dev *dev)
         }
     }
 
-    if (v->migration_blocker) {
-        migrate_del_blocker(v->migration_blocker);
-    }
     return true;
 }
 
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3e15a42c35..75143ded8b 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -577,8 +577,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
 
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
-        error_setg(&s->vhost_vdpa.migration_blocker,
-                   "Migration disabled: vhost-vdpa uses CVQ.");
     }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
-- 
2.31.1




* Re: [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg
  2022-07-22 11:12 ` [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg Eugenio Pérez
@ 2022-07-25  8:48   ` Jason Wang
  2022-08-01  6:42     ` Eugenio Perez Martin
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2022-07-25  8:48 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong


On 2022/7/22 19:12, Eugenio Pérez wrote:
> So it's generic enough to accept any out sg buffer and we can inject
> NIC state messages.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>   net/vhost-vdpa.c | 29 +++++++++++++++--------------
>   1 file changed, 15 insertions(+), 14 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 1b82ac2e07..bbe1830824 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -302,35 +302,36 @@ dma_map_err:
>   }
>   
>   /**
> - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> + * Maps out sg and in buffer into dedicated buffers suitable to be sent to NIC
>    *
> - * @iov: [0] is the out buffer, [1] is the in one
> + * @dev_iov: [0] is the out buffer, [1] is the in one


This still has an assumption about the layout. I wonder if it's better to
simply use in_sg and out_sg.
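
i.e. something like this (untested sketch of the suggestion):

    /* No positional [0]=out / [1]=in contract in the API */
    static bool vhost_vdpa_net_cvq_map_sg(VhostVDPAState *s,
                                          const struct iovec *out_sg,
                                          size_t out_num,
                                          struct iovec *dev_out,
                                          struct iovec *dev_in);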

Thanks


>    */
> -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> -                                        VirtQueueElement *elem,
> -                                        struct iovec *iov)
> +static bool vhost_vdpa_net_cvq_map_sg(VhostVDPAState *s,
> +                                      const struct iovec *out, size_t out_num,
> +                                      struct iovec *dev_iov)
>   {
>       size_t in_copied;
>       bool ok;
>   
> -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> -                                &iov[0].iov_len, false);
> +    dev_iov[0].iov_base = s->cvq_cmd_out_buffer;
> +    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, out, out_num,
> +                                vhost_vdpa_net_cvq_cmd_len(),
> +                                dev_iov[0].iov_base, &dev_iov[0].iov_len,
> +                                false);
>       if (unlikely(!ok)) {
>           return false;
>       }
>   
> -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> +    dev_iov[1].iov_base = s->cvq_cmd_in_buffer;
>       ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> -                                &in_copied, true);
> +                                sizeof(virtio_net_ctrl_ack),
> +                                dev_iov[1].iov_base, &in_copied, true);
>       if (unlikely(!ok)) {
>           vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
>           return false;
>       }
>   
> -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> +    dev_iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
>       return true;
>   }
>   
> @@ -434,7 +435,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>       };
>       bool ok;
>   
> -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> +    ok = vhost_vdpa_net_cvq_map_sg(s, elem->out_sg, elem->out_num, dev_buffers);
>       if (unlikely(!ok)) {
>           goto out;
>       }




* Re: [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start
  2022-07-22 11:12 ` [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
@ 2022-07-25  9:32   ` Jason Wang
  2022-08-01  7:09     ` Eugenio Perez Martin
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2022-07-25  9:32 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong


On 2022/7/22 19:12, Eugenio Pérez wrote:
> This is needed so the destination vdpa device sees the same state as the
> guest set in the source.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>   net/vhost-vdpa.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 61516b1432..3e15a42c35 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -365,10 +365,71 @@ static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
>       return VIRTIO_NET_OK;
>   }
>   
> +static int vhost_vdpa_net_start(NetClientState *nc)
> +{
> +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> +    struct vhost_vdpa *v = &s->vhost_vdpa;
> +    VirtIONet *n;
> +    uint64_t features;
> +    VhostShadowVirtqueue *svq;
> +
> +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> +
> +    if (!v->shadow_vqs_enabled) {
> +        return 0;
> +    }
> +
> +    if (v->dev->nvqs != 1 &&
> +        v->dev->vq_index + v->dev->nvqs != v->dev->vq_index_end) {
> +        /* Only interested in CVQ */
> +        return 0;
> +    }


I'd have a dedicated NetClientInfo for cvq.
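
Something like this, so the datapath queue pairs keep the plain info and
only the cvq client gets the hook (sketch):

    static NetClientInfo net_vhost_vdpa_cvq_info = {
        .type = NET_CLIENT_DRIVER_VHOST_VDPA,
        .size = sizeof(VhostVDPAState),
        .receive = vhost_vdpa_receive,
        .start = vhost_vdpa_net_start, /* only the cvq client restores state */
        .cleanup = vhost_vdpa_cleanup,
        .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
        .has_ufo = vhost_vdpa_has_ufo,
        .check_peer_type = vhost_vdpa_check_peer_type,
    };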


> +
> +    n = VIRTIO_NET(v->dev->vdev);
> +    features = v->dev->vdev->host_features;
> +    svq = g_ptr_array_index(v->shadow_vqs, 0);
> +    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
> +        const struct virtio_net_ctrl_hdr ctrl = {
> +            .class = VIRTIO_NET_CTRL_MAC,
> +            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
> +        };
> +        uint8_t mac[6];
> +        const struct iovec out[] = {
> +            {
> +                .iov_base = (void *)&ctrl,
> +                .iov_len = sizeof(ctrl),
> +            },{
> +                .iov_base = mac,
> +                .iov_len = sizeof(mac),
> +            },
> +        };
> +        struct iovec dev_buffers[2] = {
> +            { .iov_base = s->cvq_cmd_out_buffer },
> +            { .iov_base = s->cvq_cmd_in_buffer },
> +        };
> +        bool ok;
> +        virtio_net_ctrl_ack state;
> +
> +        ok = vhost_vdpa_net_cvq_map_sg(s, out, ARRAY_SIZE(out), dev_buffers);


To speed up the state recovery, can we map those buffers during svq start?

Thanks


> +        if (unlikely(!ok)) {
> +            return -1;
> +        }
> +
> +        memcpy(mac, n->mac, sizeof(mac));
> +        state = vhost_vdpa_net_cvq_add(svq, dev_buffers);
> +        vhost_vdpa_cvq_unmap_buf(v, dev_buffers[0].iov_base);
> +        vhost_vdpa_cvq_unmap_buf(v, dev_buffers[1].iov_base);
> +        return state == VIRTIO_NET_OK ? 0 : 1;
> +    }
> +
> +    return 0;
> +}
> +
>   static NetClientInfo net_vhost_vdpa_info = {
>           .type = NET_CLIENT_DRIVER_VHOST_VDPA,
>           .size = sizeof(VhostVDPAState),
>           .receive = vhost_vdpa_receive,
> +        .start = vhost_vdpa_net_start,
>           .cleanup = vhost_vdpa_cleanup,
>           .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
>           .has_ufo = vhost_vdpa_has_ufo,




* Re: [PATCH v4 2/7] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail
  2022-07-22 11:12 ` [PATCH v4 2/7] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
@ 2022-07-26  2:50   ` Jason Wang
  2022-08-01  7:35     ` Eugenio Perez Martin
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2022-07-26  2:50 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong


On 2022/7/22 19:12, Eugenio Pérez wrote:
> So we can reuse it to inject state messages.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>   net/vhost-vdpa.c | 74 ++++++++++++++++++++++++++++++------------------
>   1 file changed, 47 insertions(+), 27 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 6abad276a6..1b82ac2e07 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -334,6 +334,46 @@ static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
>       return true;
>   }
>   
> +static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
> +                                               const struct iovec *dev_buffers)


Let's make this support any layout by accepting in/out sg.


> +{
> +    /* in buffer used for device model */
> +    virtio_net_ctrl_ack status;
> +    size_t dev_written;
> +    int r;
> +
> +    /*
> +     * Add a fake non-NULL VirtQueueElement since we'll remove before SVQ
> +     * event loop can get it.
> +     */
> +    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, (void *)1);


I'd suggest avoiding tricks like (void *)1, which are usually a hint
of a defect in the API.

We can either:

1) make vhost_svq_get() check ndescs instead of elem

or

2) simply pass the sg
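
For 1), one possible shape (sketch, field names illustrative):

    typedef struct SVQDescState {
        VirtQueueElement *elem; /* NULL for buffers injected by qemu */
        unsigned int ndescs;    /* non-zero while the entry is in flight */
    } SVQDescState;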

Thanks


> +    if (unlikely(r != 0)) {
> +        if (unlikely(r == -ENOSPC)) {
> +            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> +                          __func__);
> +        }
> +        return VIRTIO_NET_ERR;
> +    }
> +
> +    /*
> +     * We can poll here since we've had BQL from the time we sent the
> +     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
> +     * when BQL is released
> +     */
> +    dev_written = vhost_svq_poll(svq);
> +    if (unlikely(dev_written < sizeof(status))) {
> +        error_report("Insufficient written data (%zu)", dev_written);
> +        return VIRTIO_NET_ERR;
> +    }
> +
> +    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> +    if (status != VIRTIO_NET_OK) {
> +        return VIRTIO_NET_ERR;
> +    }
> +
> +    return VIRTIO_NET_OK;
> +}
> +
>   /**
>    * Do not forward commands not supported by SVQ. Otherwise, the device could
>    * accept it and qemu would not know how to update the device model.
> @@ -380,19 +420,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>                                               void *opaque)
>   {
>       VhostVDPAState *s = opaque;
> -    size_t in_len, dev_written;
> +    size_t in_len;
>       virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
>       /* out and in buffers sent to the device */
>       struct iovec dev_buffers[2] = {
>           { .iov_base = s->cvq_cmd_out_buffer },
>           { .iov_base = s->cvq_cmd_in_buffer },
>       };
> -    /* in buffer used for device model */
> +    /* in buffer seen by virtio-net device model */
>       const struct iovec in = {
>           .iov_base = &status,
>           .iov_len = sizeof(status),
>       };
> -    int r = -EINVAL;
>       bool ok;
>   
>       ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> @@ -405,35 +444,16 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>           goto out;
>       }
>   
> -    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
> -    if (unlikely(r != 0)) {
> -        if (unlikely(r == -ENOSPC)) {
> -            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> -                          __func__);
> -        }
> -        goto out;
> -    }
> -
> -    /*
> -     * We can poll here since we've had BQL from the time we sent the
> -     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
> -     * when BQL is released
> -     */
> -    dev_written = vhost_svq_poll(svq);
> -    if (unlikely(dev_written < sizeof(status))) {
> -        error_report("Insufficient written data (%zu)", dev_written);
> -        goto out;
> -    }
> -
> -    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> +    status = vhost_vdpa_net_cvq_add(svq, dev_buffers);
>       if (status != VIRTIO_NET_OK) {
>           goto out;
>       }
>   
>       status = VIRTIO_NET_ERR;
> -    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> -    if (status != VIRTIO_NET_OK) {
> +    in_len = virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> +    if (in_len != sizeof(status) || status != VIRTIO_NET_OK) {
>           error_report("Bad CVQ processing in model");
> +        return VIRTIO_NET_ERR;
>       }
>   
>   out:
> @@ -450,7 +470,7 @@ out:
>       if (dev_buffers[1].iov_base) {
>           vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
>       }
> -    return r;
> +    return status == VIRTIO_NET_OK ? 0 : 1;
>   }
>   
>   static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {




* Re: [PATCH v4 4/7] vdpa: add NetClientState->start() callback
  2022-07-22 11:12 ` [PATCH v4 4/7] vdpa: add NetClientState->start() callback Eugenio Pérez
@ 2022-07-26  2:52   ` Jason Wang
  2022-08-01  8:02     ` Eugenio Perez Martin
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2022-07-26  2:52 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Parav Pandit, Zhu Lingshan, Paolo Bonzini, Markus Armbruster,
	Laurent Vivier, Harpreet Singh Anand, Gautam Dawar, Eli Cohen,
	Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong


On 2022/7/22 19:12, Eugenio Pérez wrote:
> It allows per-net-client operations right after the device's successful
> start.
>
> Vhost-vdpa net will use it to add the CVQ buffers to restore the device
> status.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>   include/net/net.h  | 2 ++
>   hw/net/vhost_net.c | 7 +++++++
>   2 files changed, 9 insertions(+)
>
> diff --git a/include/net/net.h b/include/net/net.h
> index 523136c7ac..ad9e80083a 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -44,6 +44,7 @@ typedef struct NICConf {
>   
>   typedef void (NetPoll)(NetClientState *, bool enable);
>   typedef bool (NetCanReceive)(NetClientState *);
> +typedef int (NetStart)(NetClientState *);
>   typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
>   typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
>   typedef void (NetCleanup) (NetClientState *);
> @@ -71,6 +72,7 @@ typedef struct NetClientInfo {
>       NetReceive *receive_raw;
>       NetReceiveIOV *receive_iov;
>       NetCanReceive *can_receive;
> +    NetStart *start;


I think we probably need a better name here (start should go with
DRIVER_OK or SET_VRING_ENABLE).

How about load or something similar? (not a native speaker)

Thanks


>       NetCleanup *cleanup;
>       LinkStatusChanged *link_status_changed;
>       QueryRxFilter *query_rx_filter;
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index ccac5b7a64..ddd9ee0441 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -274,6 +274,13 @@ static int vhost_net_start_one(struct vhost_net *net,
>               }
>           }
>       }
> +
> +    if (net->nc->info->start) {
> +        r = net->nc->info->start(net->nc);
> +        if (r < 0) {
> +            goto fail;
> +        }
> +    }
>       return 0;
>   fail:
>       file.fd = -1;




* Re: [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg
  2022-07-25  8:48   ` Jason Wang
@ 2022-08-01  6:42     ` Eugenio Perez Martin
  0 siblings, 0 replies; 17+ messages in thread
From: Eugenio Perez Martin @ 2022-08-01  6:42 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-level, Parav Pandit, Zhu Lingshan, Paolo Bonzini,
	Markus Armbruster, Laurent Vivier, Harpreet Singh Anand,
	Gautam Dawar, Eli Cohen, Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong

On Mon, Jul 25, 2022 at 10:48 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/7/22 19:12, Eugenio Pérez wrote:
> > So it's generic enough to accept any out sg buffer and we can inject
> > NIC state messages.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   net/vhost-vdpa.c | 29 +++++++++++++++--------------
> >   1 file changed, 15 insertions(+), 14 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 1b82ac2e07..bbe1830824 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -302,35 +302,36 @@ dma_map_err:
> >   }
> >
> >   /**
> > - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> > + * Maps out sg and in buffer into dedicated buffers suitable to be sent to NIC
> >    *
> > - * @iov: [0] is the out buffer, [1] is the in one
> > + * @dev_iov: [0] is the out buffer, [1] is the in one
>
>
> This still has an assumption about the layout. I wonder if it's better to
> simply use in_sg and out_sg.
>

Sure, I can resend that way.

It complicates the code a little bit because of error paths. We
currently send one out sg and one in sg always. But we can make it
more generic for sure.

Thanks!

> Thanks
>
>
> >    */
> > -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> > -                                        VirtQueueElement *elem,
> > -                                        struct iovec *iov)
> > +static bool vhost_vdpa_net_cvq_map_sg(VhostVDPAState *s,
> > +                                      const struct iovec *out, size_t out_num,
> > +                                      struct iovec *dev_iov)
> >   {
> >       size_t in_copied;
> >       bool ok;
> >
> > -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> > -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> > -                                &iov[0].iov_len, false);
> > +    dev_iov[0].iov_base = s->cvq_cmd_out_buffer;
> > +    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, out, out_num,
> > +                                vhost_vdpa_net_cvq_cmd_len(),
> > +                                dev_iov[0].iov_base, &dev_iov[0].iov_len,
> > +                                false);
> >       if (unlikely(!ok)) {
> >           return false;
> >       }
> >
> > -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> > +    dev_iov[1].iov_base = s->cvq_cmd_in_buffer;
> >       ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> > -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> > -                                &in_copied, true);
> > +                                sizeof(virtio_net_ctrl_ack),
> > +                                dev_iov[1].iov_base, &in_copied, true);
> >       if (unlikely(!ok)) {
> >           vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> >           return false;
> >       }
> >
> > -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> > +    dev_iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> >       return true;
> >   }
> >
> > @@ -434,7 +435,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> >       };
> >       bool ok;
> >
> > -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> > +    ok = vhost_vdpa_net_cvq_map_sg(s, elem->out_sg, elem->out_num, dev_buffers);
> >       if (unlikely(!ok)) {
> >           goto out;
> >       }
>




* Re: [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start
  2022-07-25  9:32   ` Jason Wang
@ 2022-08-01  7:09     ` Eugenio Perez Martin
  2022-08-02 17:37       ` Eugenio Perez Martin
  0 siblings, 1 reply; 17+ messages in thread
From: Eugenio Perez Martin @ 2022-08-01  7:09 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-level, Parav Pandit, Zhu Lingshan, Paolo Bonzini,
	Markus Armbruster, Laurent Vivier, Harpreet Singh Anand,
	Gautam Dawar, Eli Cohen, Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong

On Mon, Jul 25, 2022 at 11:32 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/7/22 19:12, Eugenio Pérez wrote:
> > This is needed so the destination vdpa device sees the same state as the
> > guest set in the source.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   net/vhost-vdpa.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 61 insertions(+)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 61516b1432..3e15a42c35 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -365,10 +365,71 @@ static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
> >       return VIRTIO_NET_OK;
> >   }
> >
> > +static int vhost_vdpa_net_start(NetClientState *nc)
> > +{
> > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > +    struct vhost_vdpa *v = &s->vhost_vdpa;
> > +    VirtIONet *n;
> > +    uint64_t features;
> > +    VhostShadowVirtqueue *svq;
> > +
> > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > +
> > +    if (!v->shadow_vqs_enabled) {
> > +        return 0;
> > +    }
> > +
> > +    if (v->dev->nvqs != 1 &&
> > +        v->dev->vq_index + v->dev->nvqs != v->dev->vq_index_end) {
> > +        /* Only interested in CVQ */
> > +        return 0;
> > +    }
>
>
> I'd have a dedicated NetClientInfo for cvq.
>

I'll try and come back to you.

>
> > +
> > +    n = VIRTIO_NET(v->dev->vdev);
> > +    features = v->dev->vdev->host_features;
> > +    svq = g_ptr_array_index(v->shadow_vqs, 0);
> > +    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
> > +        const struct virtio_net_ctrl_hdr ctrl = {
> > +            .class = VIRTIO_NET_CTRL_MAC,
> > +            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
> > +        };
> > +        uint8_t mac[6];
> > +        const struct iovec out[] = {
> > +            {
> > +                .iov_base = (void *)&ctrl,
> > +                .iov_len = sizeof(ctrl),
> > +            },{
> > +                .iov_base = mac,
> > +                .iov_len = sizeof(mac),
> > +            },
> > +        };
> > +        struct iovec dev_buffers[2] = {
> > +            { .iov_base = s->cvq_cmd_out_buffer },
> > +            { .iov_base = s->cvq_cmd_in_buffer },
> > +        };
> > +        bool ok;
> > +        virtio_net_ctrl_ack state;
> > +
> > +        ok = vhost_vdpa_net_cvq_map_sg(s, out, ARRAY_SIZE(out), dev_buffers);
>
>
> To speed up the state recovery, can we map those buffers during svq start?
>

Not sure if I follow you here. This is the callback that is called
during the device startup.

If you mean making these buffers permanently mapped, I think that can
be done for this series, but extra care will be needed when we
introduce ASID support so they are not visible from the guest. I'm ok
if you prefer to make it that way for this series.

Thanks!

> Thanks
>
>
> > +        if (unlikely(!ok)) {
> > +            return -1;
> > +        }
> > +
> > +        memcpy(mac, n->mac, sizeof(mac));
> > +        state = vhost_vdpa_net_cvq_add(svq, dev_buffers);
> > +        vhost_vdpa_cvq_unmap_buf(v, dev_buffers[0].iov_base);
> > +        vhost_vdpa_cvq_unmap_buf(v, dev_buffers[1].iov_base);
> > +        return state == VIRTIO_NET_OK ? 0 : 1;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> >   static NetClientInfo net_vhost_vdpa_info = {
> >           .type = NET_CLIENT_DRIVER_VHOST_VDPA,
> >           .size = sizeof(VhostVDPAState),
> >           .receive = vhost_vdpa_receive,
> > +        .start = vhost_vdpa_net_start,
> >           .cleanup = vhost_vdpa_cleanup,
> >           .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> >           .has_ufo = vhost_vdpa_has_ufo,
>




* Re: [PATCH v4 2/7] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail
  2022-07-26  2:50   ` Jason Wang
@ 2022-08-01  7:35     ` Eugenio Perez Martin
  0 siblings, 0 replies; 17+ messages in thread
From: Eugenio Perez Martin @ 2022-08-01  7:35 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-level, Parav Pandit, Zhu Lingshan, Paolo Bonzini,
	Markus Armbruster, Laurent Vivier, Harpreet Singh Anand,
	Gautam Dawar, Eli Cohen, Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong

On Tue, Jul 26, 2022 at 4:50 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/7/22 19:12, Eugenio Pérez wrote:
> > So we can reuse it to inject state messages.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   net/vhost-vdpa.c | 74 ++++++++++++++++++++++++++++++------------------
> >   1 file changed, 47 insertions(+), 27 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 6abad276a6..1b82ac2e07 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -334,6 +334,46 @@ static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> >       return true;
> >   }
> >
> > +static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
> > +                                               const struct iovec *dev_buffers)
>
>
> Let's make this support any layout by accepting in/out sg.
>

I'll change for the next version.

>
> > +{
> > +    /* in buffer used for device model */
> > +    virtio_net_ctrl_ack status;
> > +    size_t dev_written;
> > +    int r;
> > +
> > +    /*
> > +     * Add a fake non-NULL VirtQueueElement since we'll remove before SVQ
> > +     * event loop can get it.
> > +     */
> > +    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, (void *)1);
>
>
> I'd suggest avoiding tricks like (void *)1, which are usually a hint
> of a defect in the API.
>
> We can either:
>
> 1) make vhost_svq_get() check ndescs instead of elem
>
> or
>
> 2) simply pass the sg
>

Option one sounds great actually, let me try it and I'll send a new version.

Thanks!


> Thanks
>
>
> > +    if (unlikely(r != 0)) {
> > +        if (unlikely(r == -ENOSPC)) {
> > +            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> > +                          __func__);
> > +        }
> > +        return VIRTIO_NET_ERR;
> > +    }
> > +
> > +    /*
> > +     * We can poll here since we've had BQL from the time we sent the
> > +     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
> > +     * when BQL is released
> > +     */
> > +    dev_written = vhost_svq_poll(svq);
> > +    if (unlikely(dev_written < sizeof(status))) {
> > +        error_report("Insufficient written data (%zu)", dev_written);
> > +        return VIRTIO_NET_ERR;
> > +    }
> > +
> > +    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> > +    if (status != VIRTIO_NET_OK) {
> > +        return VIRTIO_NET_ERR;
> > +    }
> > +
> > +    return VIRTIO_NET_OK;
> > +}
> > +
> >   /**
> >    * Do not forward commands not supported by SVQ. Otherwise, the device could
> >    * accept it and qemu would not know how to update the device model.
> > @@ -380,19 +420,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> >                                               void *opaque)
> >   {
> >       VhostVDPAState *s = opaque;
> > -    size_t in_len, dev_written;
> > +    size_t in_len;
> >       virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> >       /* out and in buffers sent to the device */
> >       struct iovec dev_buffers[2] = {
> >           { .iov_base = s->cvq_cmd_out_buffer },
> >           { .iov_base = s->cvq_cmd_in_buffer },
> >       };
> > -    /* in buffer used for device model */
> > +    /* in buffer seen by virtio-net device model */
> >       const struct iovec in = {
> >           .iov_base = &status,
> >           .iov_len = sizeof(status),
> >       };
> > -    int r = -EINVAL;
> >       bool ok;
> >
> >       ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> > @@ -405,35 +444,16 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> >           goto out;
> >       }
> >
> > -    r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
> > -    if (unlikely(r != 0)) {
> > -        if (unlikely(r == -ENOSPC)) {
> > -            qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
> > -                          __func__);
> > -        }
> > -        goto out;
> > -    }
> > -
> > -    /*
> > -     * We can poll here since we've had BQL from the time we sent the
> > -     * descriptor. Also, we need to take the answer before SVQ pulls by itself,
> > -     * when BQL is released
> > -     */
> > -    dev_written = vhost_svq_poll(svq);
> > -    if (unlikely(dev_written < sizeof(status))) {
> > -        error_report("Insufficient written data (%zu)", dev_written);
> > -        goto out;
> > -    }
> > -
> > -    memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
> > +    status = vhost_vdpa_net_cvq_add(svq, dev_buffers);
> >       if (status != VIRTIO_NET_OK) {
> >           goto out;
> >       }
> >
> >       status = VIRTIO_NET_ERR;
> > -    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> > -    if (status != VIRTIO_NET_OK) {
> > +    in_len = virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
> > +    if (in_len != sizeof(status) || status != VIRTIO_NET_OK) {
> >           error_report("Bad CVQ processing in model");
> > +        return VIRTIO_NET_ERR;
> >       }
> >
> >   out:
> > @@ -450,7 +470,7 @@ out:
> >       if (dev_buffers[1].iov_base) {
> >           vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
> >       }
> > -    return r;
> > +    return status == VIRTIO_NET_OK ? 0 : 1;
> >   }
> >
> >   static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
>



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 4/7] vdpa: add NetClientState->start() callback
  2022-07-26  2:52   ` Jason Wang
@ 2022-08-01  8:02     ` Eugenio Perez Martin
  0 siblings, 0 replies; 17+ messages in thread
From: Eugenio Perez Martin @ 2022-08-01  8:02 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-level, Parav Pandit, Zhu Lingshan, Paolo Bonzini,
	Markus Armbruster, Laurent Vivier, Harpreet Singh Anand,
	Gautam Dawar, Eli Cohen, Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong

On Tue, Jul 26, 2022 at 4:53 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/7/22 19:12, Eugenio Pérez wrote:
> > It allows per-net-client operations right after the device's successful
> > start.
> >
> > Vhost-vdpa net will use it to add the CVQ buffers that restore the
> > device state.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   include/net/net.h  | 2 ++
> >   hw/net/vhost_net.c | 7 +++++++
> >   2 files changed, 9 insertions(+)
> >
> > diff --git a/include/net/net.h b/include/net/net.h
> > index 523136c7ac..ad9e80083a 100644
> > --- a/include/net/net.h
> > +++ b/include/net/net.h
> > @@ -44,6 +44,7 @@ typedef struct NICConf {
> >
> >   typedef void (NetPoll)(NetClientState *, bool enable);
> >   typedef bool (NetCanReceive)(NetClientState *);
> > +typedef int (NetStart)(NetClientState *);
> >   typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
> >   typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
> >   typedef void (NetCleanup) (NetClientState *);
> > @@ -71,6 +72,7 @@ typedef struct NetClientInfo {
> >       NetReceive *receive_raw;
> >       NetReceiveIOV *receive_iov;
> >       NetCanReceive *can_receive;
> > +    NetStart *start;
>
>
> I think we probably need a better name here ("start" should go with
> DRIVER_OK or SET_VRING_ENABLE).
>
> How about load(), or something else? (not a native speaker)
>

Right now the plan is to call SET_VRING_ENABLE from this function in
the future. But I'm ok with calling it load(); maybe that better
reflects the intention of the function.

Thanks!

> Thanks
>
>
> >       NetCleanup *cleanup;
> >       LinkStatusChanged *link_status_changed;
> >       QueryRxFilter *query_rx_filter;
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index ccac5b7a64..ddd9ee0441 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -274,6 +274,13 @@ static int vhost_net_start_one(struct vhost_net *net,
> >               }
> >           }
> >       }
> > +
> > +    if (net->nc->info->start) {
> > +        r = net->nc->info->start(net->nc);
> > +        if (r < 0) {
> > +            goto fail;
> > +        }
> > +    }
> >       return 0;
> >   fail:
> >       file.fd = -1;
>



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start
  2022-08-01  7:09     ` Eugenio Perez Martin
@ 2022-08-02 17:37       ` Eugenio Perez Martin
  0 siblings, 0 replies; 17+ messages in thread
From: Eugenio Perez Martin @ 2022-08-02 17:37 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-level, Parav Pandit, Zhu Lingshan, Paolo Bonzini,
	Markus Armbruster, Laurent Vivier, Harpreet Singh Anand,
	Gautam Dawar, Eli Cohen, Eric Blake, Gonglei (Arei),
	Cornelia Huck, Stefano Garzarella, Michael S. Tsirkin,
	Stefan Hajnoczi, Cindy Lu, Liuxiangdong

On Mon, Aug 1, 2022 at 9:09 AM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Mon, Jul 25, 2022 at 11:32 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2022/7/22 19:12, Eugenio Pérez wrote:
> > > This is needed so the destination vdpa device sees the same state as the
> > > guest set in the source.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >   net/vhost-vdpa.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 61 insertions(+)
> > >
> > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > index 61516b1432..3e15a42c35 100644
> > > --- a/net/vhost-vdpa.c
> > > +++ b/net/vhost-vdpa.c
> > > @@ -365,10 +365,71 @@ static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
> > >       return VIRTIO_NET_OK;
> > >   }
> > >
> > > +static int vhost_vdpa_net_start(NetClientState *nc)
> > > +{
> > > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > +    struct vhost_vdpa *v = &s->vhost_vdpa;
> > > +    VirtIONet *n;
> > > +    uint64_t features;
> > > +    VhostShadowVirtqueue *svq;
> > > +
> > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > +
> > > +    if (!v->shadow_vqs_enabled) {
> > > +        return 0;
> > > +    }
> > > +
> > > +    if (v->dev->nvqs != 1 &&
> > > +        v->dev->vq_index + v->dev->nvqs != v->dev->vq_index_end) {
> > > +        /* Only interested in CVQ */
> > > +        return 0;
> > > +    }
> >
> >
> > I'd have a dedicated NetClientInfo for cvq.
> >
>
> I'll try and come back to you.
>
> >
> > > +
> > > +    n = VIRTIO_NET(v->dev->vdev);
> > > +    features = v->dev->vdev->host_features;
> > > +    svq = g_ptr_array_index(v->shadow_vqs, 0);
> > > +    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
> > > +        const struct virtio_net_ctrl_hdr ctrl = {
> > > +            .class = VIRTIO_NET_CTRL_MAC,
> > > +            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
> > > +        };
> > > +        uint8_t mac[6];
> > > +        const struct iovec out[] = {
> > > +            {
> > > +                .iov_base = (void *)&ctrl,
> > > +                .iov_len = sizeof(ctrl),
> > > +            },{
> > > +                .iov_base = mac,
> > > +                .iov_len = sizeof(mac),
> > > +            },
> > > +        };
> > > +        struct iovec dev_buffers[2] = {
> > > +            { .iov_base = s->cvq_cmd_out_buffer },
> > > +            { .iov_base = s->cvq_cmd_in_buffer },
> > > +        };
> > > +        bool ok;
> > > +        virtio_net_ctrl_ack state;
> > > +
> > > +        memcpy(mac, n->mac, sizeof(mac));
> > > +        ok = vhost_vdpa_net_cvq_map_sg(s, out, ARRAY_SIZE(out), dev_buffers);
> >
> >
> > To speed up the state recovery, can we map those buffers during svq start?
> >
>
> Not sure I follow you here. This is the callback that is called
> during device startup.
>
> If you mean making these buffers permanently mapped, I think that can
> be done for this series, but extra care will be needed when we
> introduce ASID support so they are not visible from the guest. I'm ok
> with doing it that way for this series if you prefer.
>
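
If it helps to pin the idea down, the permanently-mapped variant would
look roughly like this (a hypothetical sketch of the lifecycle, not what
this series does):

/*
 * Map once at start, reuse for every command, unmap only at stop:
 *
 *   vhost_vdpa_net_start():
 *       map s->cvq_cmd_out_buffer and s->cvq_cmd_in_buffer once
 *   vhost_vdpa_net_cvq_add():
 *       reuse the existing mappings for each CVQ command
 *   device stop / cleanup:
 *       vhost_vdpa_cvq_unmap_buf() for both buffers
 *
 * With ASID support, the mappings must land in the qemu-owned ASID so
 * the guest can neither observe nor modify them.
 */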

Sending v4 without this part; please let me know if it needs further changes.

Thanks!
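
For reference, the command built in the patch is the standard virtio-net
CVQ MAC write. Assuming vhost_vdpa_net_cvq_map_sg() linearizes the out
sg into the bounce buffer, the two buffers end up laid out as follows
(offsets per the virtio spec):

/*
 * s->cvq_cmd_out_buffer, 8 bytes total:
 *   byte 0:    VIRTIO_NET_CTRL_MAC           (ctrl.class)
 *   byte 1:    VIRTIO_NET_CTRL_MAC_ADDR_SET  (ctrl.cmd)
 *   bytes 2-7: the guest-visible MAC from n->mac
 *
 * s->cvq_cmd_in_buffer, 1 byte:
 *   virtio_net_ctrl_ack written by the device; anything other than
 *   VIRTIO_NET_OK fails vhost_vdpa_net_start().
 */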



^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2022-08-02 17:41 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-22 11:12 [PATCH v4 0/7] NIC vhost-vdpa state restore via Shadow CVQ Eugenio Pérez
2022-07-22 11:12 ` [PATCH v4 1/7] vhost: stop transfer elem ownership in vhost_handle_guest_kick Eugenio Pérez
2022-07-22 11:12 ` [PATCH v4 2/7] vdpa: Extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail Eugenio Pérez
2022-07-26  2:50   ` Jason Wang
2022-08-01  7:35     ` Eugenio Perez Martin
2022-07-22 11:12 ` [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg Eugenio Pérez
2022-07-25  8:48   ` Jason Wang
2022-08-01  6:42     ` Eugenio Perez Martin
2022-07-22 11:12 ` [PATCH v4 4/7] vdpa: add NetClientState->start() callback Eugenio Pérez
2022-07-26  2:52   ` Jason Wang
2022-08-01  8:02     ` Eugenio Perez Martin
2022-07-22 11:12 ` [PATCH v4 5/7] vdpa: Reorder net_vhost_vdpa_info Eugenio Pérez
2022-07-22 11:12 ` [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start Eugenio Pérez
2022-07-25  9:32   ` Jason Wang
2022-08-01  7:09     ` Eugenio Perez Martin
2022-08-02 17:37       ` Eugenio Perez Martin
2022-07-22 11:12 ` [PATCH v4 7/7] vdpa: Delete CVQ migration blocker Eugenio Pérez

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.