* [PATCH V3 00/10] vhost-vDPA multiqueue
@ 2021-09-07  9:03 Jason Wang
  2021-09-07  9:03 ` [PATCH V3 01/10] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
                   ` (11 more replies)
  0 siblings, 12 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Hi All:

This series implements multiqueue support for vhost-vDPA. The most
important requirement is control virtqueue support. The virtio-net
and vhost-net cores are tweaked to handle the control virtqueue the
same way the data queue pairs are handled: a dedicated vhost_net
device coupled with a NetClientState is introduced, so most of the
existing vhost code can be reused with minor changes. This means the
control virtqueue will bypass QEMU. With the control virtqueue,
vhost-vDPA is extended to support creating and destroying multiple
queue pairs plus the control virtqueue.

For the future, if we want to support live migration, we can either do
shadow cvq on top or introduce new interfaces for reporting device
states.

Tests are done via the vp_vdpa driver in L1 guest.
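
For reference, an illustrative invocation used to exercise this kind of
setup (the vhostdev path and IDs are just examples; the actual device
node depends on how the vDPA device is created, e.g. via vp_vdpa):

    qemu-system-x86_64 ... \
        -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
        -device virtio-net-pci,netdev=vhost-vdpa0,mq=on,ctrl_vq=on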

Changes since V2:

- rebase to qemu master
- use "queue_pairs" instead of "qps"
- typo fixes

Changes since V1:

- start and stop vhost devices only after all queues have been set up
- fix the case where the driver doesn't support MQ but the device does
- correctly set the batching capability to avoid a map/unmap storm
- various other tweaks

Jason Wang (10):
  vhost-vdpa: open device fd in net_init_vhost_vdpa()
  vhost-vdpa: classify one time request
  vhost-vdpa: prepare for the multiqueue support
  vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
  net: introduce control client
  vhost-net: control virtqueue support
  virtio-net: use "queue_pairs" instead of "queues" when possible
  vhost: record the last virtqueue index for the virtio device
  virtio-net: vhost control virtqueue support
  vhost-vdpa: multiqueue support

 hw/net/vhost_net.c             |  55 ++++++++---
 hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
 hw/virtio/vhost-vdpa.c         |  56 +++++++++--
 include/hw/virtio/vhost-vdpa.h |   1 +
 include/hw/virtio/vhost.h      |   2 +
 include/hw/virtio/virtio-net.h |   5 +-
 include/net/net.h              |   5 +
 include/net/vhost_net.h        |   6 +-
 net/net.c                      |  24 ++++-
 net/vhost-vdpa.c               | 127 ++++++++++++++++++++++---
 10 files changed, 328 insertions(+), 118 deletions(-)

-- 
2.25.1




* [PATCH V3 01/10] vhost-vdpa: open device fd in net_init_vhost_vdpa()
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-07  9:03 ` [PATCH V3 02/10] vhost-vdpa: classify one time request Jason Wang
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: lulu, gdawar, eperezma, elic, lingshan.zhu, Stefano Garzarella

This patch switches to opening the device fd in net_init_vhost_vdpa().
This is a preparation for the multiqueue support.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 912686457c..73d29a74ef 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -156,24 +156,19 @@ static NetClientInfo net_vhost_vdpa_info = {
 };
 
 static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
-                               const char *name, const char *vhostdev)
+                               const char *name, int vdpa_device_fd)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
-    int vdpa_device_fd = -1;
     int ret = 0;
     assert(name);
     nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
-    vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
-    if (vdpa_device_fd == -1) {
-        return -errno;
-    }
+
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
     if (ret) {
-        qemu_close(vdpa_device_fd);
         qemu_del_net_client(nc);
     }
     return ret;
@@ -201,6 +196,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
+    int vdpa_device_fd, ret;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -209,5 +205,16 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                           (char *)name, errp)) {
         return -1;
     }
-    return net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, opts->vhostdev);
+
+    vdpa_device_fd = qemu_open_old(opts->vhostdev, O_RDWR);
+    if (vdpa_device_fd == -1) {
+        return -errno;
+    }
+
+    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+    if (ret) {
+        qemu_close(vdpa_device_fd);
+    }
+
+    return ret;
 }
-- 
2.25.1




* [PATCH V3 02/10] vhost-vdpa: classify one time request
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
  2021-09-07  9:03 ` [PATCH V3 01/10] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-07  9:03 ` [PATCH V3 03/10] vhost-vdpa: prepare for the multiqueue support Jason Wang
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Vhost-vdpa uses a single-device model for multiple queue pairs. So we
need to classify the one-time requests (e.g. SET_OWNER) and make sure
those requests are only issued once per device.

This is used for multiqueue support.
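
To make the 1:N relationship concrete: with two data queue pairs plus a
control virtqueue there are three vhost_vdpa instances sharing one
device fd, and only the instance with index 0 issues the device-wide
("one time") requests. A small illustration (not from the patch, values
assumed):

    struct vhost_vdpa qps[3] = {
        { .device_fd = fd, .index = 0 }, /* data queue pair 0: issues SET_OWNER etc. */
        { .device_fd = fd, .index = 1 }, /* data queue pair 1: skips one-time requests */
        { .device_fd = fd, .index = 2 }, /* control virtqueue:  skips one-time requests */
    };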

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c         | 52 ++++++++++++++++++++++++++++++----
 include/hw/virtio/vhost-vdpa.h |  1 +
 2 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7633ea66d1..2e3b32245e 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -278,6 +278,13 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
     vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
 }
 
+static bool vhost_vdpa_one_time_request(struct vhost_dev *dev)
+{
+    struct vhost_vdpa *v = dev->opaque;
+
+    return v->index != 0;
+}
+
 static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
 {
     struct vhost_vdpa *v;
@@ -290,6 +297,10 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
     v->listener = vhost_vdpa_memory_listener;
     v->msg_type = VHOST_IOTLB_MSG_V2;
 
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                VIRTIO_CONFIG_S_DRIVER);
 
@@ -400,6 +411,10 @@ static int vhost_vdpa_memslots_limit(struct vhost_dev *dev)
 static int vhost_vdpa_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_mem_table(dev, mem->nregions, mem->padding);
     if (trace_event_get_state_backends(TRACE_VHOST_VDPA_SET_MEM_TABLE) &&
         trace_event_get_state_backends(TRACE_VHOST_VDPA_DUMP_REGIONS)) {
@@ -423,6 +438,11 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
                                    uint64_t features)
 {
     int ret;
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_features(dev, features);
     ret = vhost_vdpa_call(dev, VHOST_SET_FEATURES, &features);
     uint8_t status = 0;
@@ -447,9 +467,12 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
     }
 
     features &= f;
-    r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
-    if (r) {
-        return -EFAULT;
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
+        if (r) {
+            return -EFAULT;
+        }
     }
 
     dev->backend_cap = features;
@@ -558,11 +581,21 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
 {
     struct vhost_vdpa *v = dev->opaque;
     trace_vhost_vdpa_dev_start(dev, started);
+
     if (started) {
-        uint8_t status = 0;
-        memory_listener_register(&v->listener, &address_space_memory);
         vhost_vdpa_host_notifiers_init(dev);
         vhost_vdpa_set_vring_ready(dev);
+    } else {
+        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
+    }
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
+    if (started) {
+        uint8_t status = 0;
+        memory_listener_register(&v->listener, &address_space_memory);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
         vhost_vdpa_call(dev, VHOST_VDPA_GET_STATUS, &status);
 
@@ -571,7 +604,6 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         vhost_vdpa_reset_device(dev);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                    VIRTIO_CONFIG_S_DRIVER);
-        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
         memory_listener_unregister(&v->listener);
 
         return 0;
@@ -581,6 +613,10 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
 static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
                                      struct vhost_log *log)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_log_base(dev, base, log->size, log->refcnt, log->fd,
                                   log->log);
     return vhost_vdpa_call(dev, VHOST_SET_LOG_BASE, &base);
@@ -646,6 +682,10 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
 
 static int vhost_vdpa_set_owner(struct vhost_dev *dev)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_owner(dev);
     return vhost_vdpa_call(dev, VHOST_SET_OWNER, NULL);
 }
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index a8963da2d9..6b9288fef8 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -21,6 +21,7 @@ typedef struct VhostVDPAHostNotifier {
 
 typedef struct vhost_vdpa {
     int device_fd;
+    int index;
     uint32_t msg_type;
     bool iotlb_batch_begin_sent;
     MemoryListener listener;
-- 
2.25.1




* [PATCH V3 03/10] vhost-vdpa: prepare for the multiqueue support
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
  2021-09-07  9:03 ` [PATCH V3 01/10] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
  2021-09-07  9:03 ` [PATCH V3 02/10] vhost-vdpa: classify one time request Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-07  9:03 ` [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Unlike vhost-kernel, vhost-vdpa adopts a single-device multiqueue
model. So we simply need to use the virtqueue index as the vhost
virtqueue index. This is a must for multiqueue to work with vhost-vdpa.
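
As a worked example (values assumed, not from the patch): queue pair 1
of a virtio-net device uses virtio virtqueue indices 2 and 3, and its
vhost_dev has vq_index = 2.

    /* virtio vq index -> per-backend vq index, for queue pair 1 (vq_index = 2) */
    /* vhost-kernel (one vhost device per pair): idx - dev->vq_index => 2 -> 0, 3 -> 1 */
    /* vhost-vDPA   (a single vhost device):     idx                 => 2 -> 2, 3 -> 3 */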

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 2e3b32245e..99c676796e 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -503,8 +503,8 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
 {
     assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
 
-    trace_vhost_vdpa_get_vq_index(dev, idx, idx - dev->vq_index);
-    return idx - dev->vq_index;
+    trace_vhost_vdpa_get_vq_index(dev, idx, idx);
+    return idx;
 }
 
 static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
-- 
2.25.1




* [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (2 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 03/10] vhost-vdpa: prepare for the multiqueue support Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-09 15:41   ` Zhang, Chen
  2021-09-07  9:03 ` [PATCH V3 05/10] net: introduce control client Jason Wang
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch switches net_vhost_vdpa_init() to return a
NetClientState *. This lets the callers allocate multiple
NetClientStates for multiqueue support.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 73d29a74ef..834dab28dd 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -155,8 +155,10 @@ static NetClientInfo net_vhost_vdpa_info = {
         .has_ufo = vhost_vdpa_has_ufo,
 };
 
-static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
-                               const char *name, int vdpa_device_fd)
+static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
+                                           const char *device,
+                                           const char *name,
+                                           int vdpa_device_fd)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
@@ -170,8 +172,9 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
     if (ret) {
         qemu_del_net_client(nc);
+        return NULL;
     }
-    return ret;
+    return nc;
 }
 
 static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
@@ -196,7 +199,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
-    int vdpa_device_fd, ret;
+    int vdpa_device_fd;
+    NetClientState *nc;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -211,10 +215,11 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -errno;
     }
 
-    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
-    if (ret) {
+    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+    if (!nc) {
         qemu_close(vdpa_device_fd);
+        return -1;
     }
 
-    return ret;
+    return 0;
 }
-- 
2.25.1




* [PATCH V3 05/10] net: introduce control client
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (3 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-09 15:45   ` Zhang, Chen
  2021-09-07  9:03 ` [PATCH V3 06/10] vhost-net: control virtqueue support Jason Wang
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch introduces a boolean to mark whether a net client is a
datapath client or a control client, i.e. one that accepts control
commands via a network queue.

The first user would be the control virtqueue support for vhost.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/net/net.h |  5 +++++
 net/net.c         | 24 +++++++++++++++++++++---
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/include/net/net.h b/include/net/net.h
index 5d1508081f..4f400b8a09 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -103,6 +103,7 @@ struct NetClientState {
     int vnet_hdr_len;
     bool is_netdev;
     bool do_not_pad; /* do not pad to the minimum ethernet frame length */
+    bool is_datapath;
     QTAILQ_HEAD(, NetFilterState) filters;
 };
 
@@ -134,6 +135,10 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
                                     NetClientState *peer,
                                     const char *model,
                                     const char *name);
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+                                        NetClientState *peer,
+                                        const char *model,
+                                        const char *name);
 NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
diff --git a/net/net.c b/net/net.c
index 52c99196c6..f0d14dbfc1 100644
--- a/net/net.c
+++ b/net/net.c
@@ -239,7 +239,8 @@ static void qemu_net_client_setup(NetClientState *nc,
                                   NetClientState *peer,
                                   const char *model,
                                   const char *name,
-                                  NetClientDestructor *destructor)
+                                  NetClientDestructor *destructor,
+                                  bool is_datapath)
 {
     nc->info = info;
     nc->model = g_strdup(model);
@@ -258,6 +259,7 @@ static void qemu_net_client_setup(NetClientState *nc,
 
     nc->incoming_queue = qemu_new_net_queue(qemu_deliver_packet_iov, nc);
     nc->destructor = destructor;
+    nc->is_datapath = is_datapath;
     QTAILQ_INIT(&nc->filters);
 }
 
@@ -272,7 +274,23 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
 
     nc = g_malloc0(info->size);
     qemu_net_client_setup(nc, info, peer, model, name,
-                          qemu_net_client_destructor);
+                          qemu_net_client_destructor, true);
+
+    return nc;
+}
+
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+                                            NetClientState *peer,
+                                            const char *model,
+                                            const char *name)
+{
+    NetClientState *nc;
+
+    assert(info->size >= sizeof(NetClientState));
+
+    nc = g_malloc0(info->size);
+    qemu_net_client_setup(nc, info, peer, model, name,
+                          qemu_net_client_destructor, false);
 
     return nc;
 }
@@ -297,7 +315,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
 
     for (i = 0; i < queues; i++) {
         qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
-                              NULL);
+                              NULL, true);
         nic->ncs[i].queue_index = i;
     }
 
-- 
2.25.1




* [PATCH V3 06/10] vhost-net: control virtqueue support
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (4 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 05/10] net: introduce control client Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-07  9:03 ` [PATCH V3 07/10] virtio-net: use "queue_pairs" instead of "queues" when possible Jason Wang
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

In the past we assumed there was no cvq, but this is not true when we
need control virtqueue support for vhost-user backends. So this patch
implements control virtqueue support for vhost-net. As with the
datapath, the control virtqueue is also required to be coupled with a
NetClientState. vhost_net_start()/stop() are tweaked to accept the
number of datapath queue pairs plus the number of control virtqueues
so that we can start and stop the vhost device accordingly.
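
For example (assumed numbers, not from the patch), with
data_queue_pairs = 2 and cvq = 1 the NetClientState array and the
derived counts look like this:

    /* ncs[0], ncs[1]       -> datapath peers (vhost vq_index 0 and 2)
     * ncs[n->max_queues]   -> control virtqueue peer (vhost vq_index 4)
     * nvhosts         = data_queue_pairs + cvq     = 3  vhost devices
     * total_notifiers = data_queue_pairs * 2 + cvq = 5  guest notifiers
     */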

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c      | 43 ++++++++++++++++++++++++++++++-----------
 hw/net/virtio-net.c     |  4 ++--
 include/net/vhost_net.h |  6 ++++--
 3 files changed, 38 insertions(+), 15 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 386ec2eaa2..e1e9d1ec89 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -315,11 +315,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
 }
 
 int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
-                    int total_queues)
+                    int data_queue_pairs, int cvq)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
     VirtioBusState *vbus = VIRTIO_BUS(qbus);
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+    int total_notifiers = data_queue_pairs * 2 + cvq;
+    VirtIONet *n = VIRTIO_NET(dev);
+    int nvhosts = data_queue_pairs + cvq;
     struct vhost_net *net;
     int r, e, i;
     NetClientState *peer;
@@ -329,9 +332,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         return -ENOSYS;
     }
 
-    for (i = 0; i < total_queues; i++) {
+    for (i = 0; i < nvhosts; i++) {
+
+        if (i < data_queue_pairs) {
+            peer = qemu_get_peer(ncs, i);
+        } else { /* Control Virtqueue */
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
 
-        peer = qemu_get_peer(ncs, i);
         net = get_vhost_net(peer);
         vhost_net_set_vq_index(net, i * 2);
 
@@ -344,14 +352,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         }
      }
 
-    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
+    r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
     if (r < 0) {
         error_report("Error binding guest notifier: %d", -r);
         goto err;
     }
 
-    for (i = 0; i < total_queues; i++) {
-        peer = qemu_get_peer(ncs, i);
+    for (i = 0; i < nvhosts; i++) {
+        if (i < data_queue_pairs) {
+            peer = qemu_get_peer(ncs, i);
+        } else {
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
         r = vhost_net_start_one(get_vhost_net(peer), dev);
 
         if (r < 0) {
@@ -375,7 +387,7 @@ err_start:
         peer = qemu_get_peer(ncs , i);
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
-    e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+    e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
     if (e < 0) {
         fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
         fflush(stderr);
@@ -385,18 +397,27 @@ err:
 }
 
 void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
-                    int total_queues)
+                    int data_queue_pairs, int cvq)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
     VirtioBusState *vbus = VIRTIO_BUS(qbus);
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+    VirtIONet *n = VIRTIO_NET(dev);
+    NetClientState *peer;
+    int total_notifiers = data_queue_pairs * 2 + cvq;
+    int nvhosts = data_queue_pairs + cvq;
     int i, r;
 
-    for (i = 0; i < total_queues; i++) {
-        vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
+    for (i = 0; i < nvhosts; i++) {
+        if (i < data_queue_pairs) {
+            peer = qemu_get_peer(ncs, i);
+        } else {
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
+        vhost_net_stop_one(get_vhost_net(peer), dev);
     }
 
-    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+    r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
     if (r < 0) {
         fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
         fflush(stderr);
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 16d20cdee5..8fccbaa44c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, queues);
+        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queues);
+        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
         n->vhost_started = 0;
     }
 }
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index fba40cf695..387e913e4e 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
 uint64_t vhost_net_get_max_queues(VHostNetState *net);
 struct vhost_net *vhost_net_init(VhostNetOptions *options);
 
-int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
-void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
+int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
+                    int data_queue_pairs, int cvq);
+void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
+                    int data_queue_pairs, int cvq);
 
 void vhost_net_cleanup(VHostNetState *net);
 
-- 
2.25.1




* [PATCH V3 07/10] virtio-net: use "queue_pairs" instead of "queues" when possible
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (5 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 06/10] vhost-net: control virtqueue support Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-07  9:03 ` [PATCH V3 08/10] vhost: record the last virtqueue index for the virtio device Jason Wang
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Most of the time, "queues" really means queue pairs. So this patch
switches to using "queue_pairs" to avoid confusion.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c             |   6 +-
 hw/net/virtio-net.c            | 150 ++++++++++++++++-----------------
 include/hw/virtio/virtio-net.h |   4 +-
 3 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index e1e9d1ec89..2b594b4642 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -337,7 +337,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_queue_pairs) {
             peer = qemu_get_peer(ncs, i);
         } else { /* Control Virtqueue */
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_queue_pairs);
         }
 
         net = get_vhost_net(peer);
@@ -362,7 +362,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_queue_pairs) {
             peer = qemu_get_peer(ncs, i);
         } else {
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_queue_pairs);
         }
         r = vhost_net_start_one(get_vhost_net(peer), dev);
 
@@ -412,7 +412,7 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_queue_pairs) {
             peer = qemu_get_peer(ncs, i);
         } else {
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_queue_pairs);
         }
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 8fccbaa44c..926b6c930e 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -54,7 +54,7 @@
 #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
 #define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
 
-/* for now, only allow larger queues; with virtio-1, guest can downsize */
+/* for now, only allow larger queue_pairs; with virtio-1, guest can downsize */
 #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
 #define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
 
@@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
     int ret = 0;
     memset(&netcfg, 0 , sizeof(struct virtio_net_config));
     virtio_stw_p(vdev, &netcfg.status, n->status);
-    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
+    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queue_pairs);
     virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
     memcpy(netcfg.mac, n->mac, ETH_ALEN);
     virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
@@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
-    int queues = n->multiqueue ? n->max_queues : 1;
+    int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
 
     if (!get_vhost_net(nc->peer)) {
         return;
@@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         /* Any packets outstanding? Purge them to avoid touching rings
          * when vhost is running.
          */
-        for (i = 0;  i < queues; i++) {
+        for (i = 0;  i < queue_pairs; i++) {
             NetClientState *qnc = qemu_get_subqueue(n->nic, i);
 
             /* Purge both directions: TX and RX. */
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
+        r = vhost_net_start(vdev, n->nic->ncs, queue_pairs, 0);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
+        vhost_net_stop(vdev, n->nic->ncs, queue_pairs, 0);
         n->vhost_started = 0;
     }
 }
@@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
 }
 
 static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
-                                       int queues, bool enable)
+                                       int queue_pairs, bool enable)
 {
     int i;
 
-    for (i = 0; i < queues; i++) {
+    for (i = 0; i < queue_pairs; i++) {
         if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
             enable) {
             while (--i >= 0) {
@@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
 static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
-    int queues = n->multiqueue ? n->max_queues : 1;
+    int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
 
     if (virtio_net_started(n, status)) {
         /* Before using the device, we tell the network backend about the
@@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
          * virtio-net code.
          */
         n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
-                                                            queues, true);
+                                                            queue_pairs, true);
     } else if (virtio_net_started(n, vdev->status)) {
         /* After using the device, we need to reset the network backend to
          * the default (guest native endianness), otherwise the guest may
          * lose network connectivity if it is rebooted into a different
          * endianness.
          */
-        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
+        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queue_pairs, false);
     }
 }
 
@@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
     virtio_net_vnet_endian_status(n, status);
     virtio_net_vhost_status(n, status);
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_queue_pairs; i++) {
         NetClientState *ncs = qemu_get_subqueue(n->nic, i);
         bool queue_started;
         q = &n->vqs[i];
 
-        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
+        if ((!n->multiqueue && i != 0) || i >= n->curr_queue_pairs) {
             queue_status = 0;
         } else {
             queue_status = status;
@@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
     n->nouni = 0;
     n->nobcast = 0;
     /* multiqueue is disabled by default */
-    n->curr_queues = 1;
+    n->curr_queue_pairs = 1;
     timer_del(n->announce_timer.tm);
     n->announce_timer.round = 0;
     n->status &= ~VIRTIO_NET_S_ANNOUNCE;
@@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
     memset(n->vlans, 0, MAX_VLAN >> 3);
 
     /* Flush any async TX */
-    for (i = 0;  i < n->max_queues; i++) {
+    for (i = 0;  i < n->max_queue_pairs; i++) {
         NetClientState *nc = qemu_get_subqueue(n->nic, i);
 
         if (nc->peer) {
@@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
             sizeof(struct virtio_net_hdr);
     }
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_queue_pairs; i++) {
         nc = qemu_get_subqueue(n->nic, i);
 
         if (peer_has_vnet_hdr(n) &&
@@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
         return 0;
     }
 
-    if (n->max_queues == 1) {
+    if (n->max_queue_pairs == 1) {
         return 0;
     }
 
@@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
     return tap_disable(nc->peer);
 }
 
-static void virtio_net_set_queues(VirtIONet *n)
+static void virtio_net_set_queue_pairs(VirtIONet *n)
 {
     int i;
     int r;
@@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
         return;
     }
 
-    for (i = 0; i < n->max_queues; i++) {
-        if (i < n->curr_queues) {
+    for (i = 0; i < n->max_queue_pairs; i++) {
+        if (i < n->curr_queue_pairs) {
             r = peer_attach(n, i);
             assert(!r);
         } else {
@@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
         virtio_net_apply_guest_offloads(n);
     }
 
-    for (i = 0;  i < n->max_queues; i++) {
+    for (i = 0;  i < n->max_queue_pairs; i++) {
         NetClientState *nc = qemu_get_subqueue(n->nic, i);
 
         if (!get_vhost_net(nc->peer)) {
@@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     struct virtio_net_rss_config cfg;
     size_t s, offset = 0, size_get;
-    uint16_t queues, i;
+    uint16_t queue_pairs, i;
     struct {
         uint16_t us;
         uint8_t b;
@@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     }
     n->rss_data.default_queue = do_rss ?
         virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
-    if (n->rss_data.default_queue >= n->max_queues) {
+    if (n->rss_data.default_queue >= n->max_queue_pairs) {
         err_msg = "Invalid default queue";
         err_value = n->rss_data.default_queue;
         goto error;
@@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     size_get = sizeof(temp);
     s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
     if (s != size_get) {
-        err_msg = "Can't get queues";
+        err_msg = "Can't get queue_pairs";
         err_value = (uint32_t)s;
         goto error;
     }
-    queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
-    if (queues == 0 || queues > n->max_queues) {
-        err_msg = "Invalid number of queues";
-        err_value = queues;
+    queue_pairs = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queue_pairs;
+    if (queue_pairs == 0 || queue_pairs > n->max_queue_pairs) {
+        err_msg = "Invalid number of queue_pairs";
+        err_value = queue_pairs;
         goto error;
     }
     if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
@@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     }
     if (!temp.b && !n->rss_data.hash_types) {
         virtio_net_disable_rss(n);
-        return queues;
+        return queue_pairs;
     }
     offset += size_get;
     size_get = temp.b;
@@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     trace_virtio_net_rss_enable(n->rss_data.hash_types,
                                 n->rss_data.indirections_len,
                                 temp.b);
-    return queues;
+    return queue_pairs;
 error:
     trace_virtio_net_rss_error(err_msg, err_value);
     virtio_net_disable_rss(n);
@@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
                                 struct iovec *iov, unsigned int iov_cnt)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
-    uint16_t queues;
+    uint16_t queue_pairs;
 
     virtio_net_disable_rss(n);
     if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
-        queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
-        return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
+        queue_pairs = virtio_net_handle_rss(n, iov, iov_cnt, false);
+        return queue_pairs ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
     }
     if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
-        queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
+        queue_pairs = virtio_net_handle_rss(n, iov, iov_cnt, true);
     } else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
         struct virtio_net_ctrl_mq mq;
         size_t s;
@@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         if (s != sizeof(mq)) {
             return VIRTIO_NET_ERR;
         }
-        queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
+        queue_pairs = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
 
     } else {
         return VIRTIO_NET_ERR;
     }
 
-    if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
-        queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
-        queues > n->max_queues ||
+    if (queue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
+        queue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
+        queue_pairs > n->max_queue_pairs ||
         !n->multiqueue) {
         return VIRTIO_NET_ERR;
     }
 
-    n->curr_queues = queues;
-    /* stop the backend before changing the number of queues to avoid handling a
+    n->curr_queue_pairs = queue_pairs;
+    /* stop the backend before changing the number of queue_pairs to avoid handling a
      * disabled queue */
     virtio_net_set_status(vdev, vdev->status);
-    virtio_net_set_queues(n);
+    virtio_net_set_queue_pairs(n);
 
     return VIRTIO_NET_OK;
 }
@@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
         return false;
     }
 
-    if (nc->queue_index >= n->curr_queues) {
+    if (nc->queue_index >= n->curr_queue_pairs) {
         return false;
     }
 
@@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
     virtio_del_queue(vdev, index * 2 + 1);
 }
 
-static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
+static void virtio_net_change_num_queue_pairs(VirtIONet *n, int new_max_queue_pairs)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     int old_num_queues = virtio_get_num_queues(vdev);
-    int new_num_queues = new_max_queues * 2 + 1;
+    int new_num_queues = new_max_queue_pairs * 2 + 1;
     int i;
 
     assert(old_num_queues >= 3);
@@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
 
 static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
 {
-    int max = multiqueue ? n->max_queues : 1;
+    int max = multiqueue ? n->max_queue_pairs : 1;
 
     n->multiqueue = multiqueue;
-    virtio_net_change_num_queues(n, max);
+    virtio_net_change_num_queue_pairs(n, max);
 
-    virtio_net_set_queues(n);
+    virtio_net_set_queue_pairs(n);
 }
 
 static int virtio_net_post_load_device(void *opaque, int version_id)
@@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
      */
     n->saved_guest_offloads = n->curr_guest_offloads;
 
-    virtio_net_set_queues(n);
+    virtio_net_set_queue_pairs(n);
 
     /* Find the first multicast entry in the saved MAC filter */
     for (i = 0; i < n->mac_table.in_use; i++) {
@@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
     /* nc.link_down can't be migrated, so infer link_down according
      * to link status bit in n->status */
     link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_queue_pairs; i++) {
         qemu_get_subqueue(n->nic, i)->link_down = link_down;
     }
 
@@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
    },
 };
 
-static bool max_queues_gt_1(void *opaque, int version_id)
+static bool max_queue_pairs_gt_1(void *opaque, int version_id)
 {
-    return VIRTIO_NET(opaque)->max_queues > 1;
+    return VIRTIO_NET(opaque)->max_queue_pairs > 1;
 }
 
 static bool has_ctrl_guest_offloads(void *opaque, int version_id)
@@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
 struct VirtIONetMigTmp {
     VirtIONet      *parent;
     VirtIONetQueue *vqs_1;
-    uint16_t        curr_queues_1;
+    uint16_t        curr_queue_pairs_1;
     uint8_t         has_ufo;
     uint32_t        has_vnet_hdr;
 };
 
 /* The 2nd and subsequent tx_waiting flags are loaded later than
- * the 1st entry in the queues and only if there's more than one
+ * the 1st entry in the queue_pairs and only if there's more than one
  * entry.  We use the tmp mechanism to calculate a temporary
  * pointer and count and also validate the count.
  */
@@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
     struct VirtIONetMigTmp *tmp = opaque;
 
     tmp->vqs_1 = tmp->parent->vqs + 1;
-    tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
-    if (tmp->parent->curr_queues == 0) {
-        tmp->curr_queues_1 = 0;
+    tmp->curr_queue_pairs_1 = tmp->parent->curr_queue_pairs - 1;
+    if (tmp->parent->curr_queue_pairs == 0) {
+        tmp->curr_queue_pairs_1 = 0;
     }
 
     return 0;
@@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
     /* Reuse the pointer setup from save */
     virtio_net_tx_waiting_pre_save(opaque);
 
-    if (tmp->parent->curr_queues > tmp->parent->max_queues) {
-        error_report("virtio-net: curr_queues %x > max_queues %x",
-            tmp->parent->curr_queues, tmp->parent->max_queues);
+    if (tmp->parent->curr_queue_pairs > tmp->parent->max_queue_pairs) {
+        error_report("virtio-net: curr_queue_pairs %x > max_queue_pairs %x",
+            tmp->parent->curr_queue_pairs, tmp->parent->max_queue_pairs);
 
         return -EINVAL;
     }
@@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
     .pre_save  = virtio_net_tx_waiting_pre_save,
     .fields    = (VMStateField[]) {
         VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
-                                     curr_queues_1,
+                                     curr_queue_pairs_1,
                                      vmstate_virtio_net_queue_tx_waiting,
                                      struct VirtIONetQueue),
         VMSTATE_END_OF_LIST()
@@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
         VMSTATE_UINT8(nobcast, VirtIONet),
         VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
                          vmstate_virtio_net_has_ufo),
-        VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
+        VMSTATE_SINGLE_TEST(max_queue_pairs, VirtIONet, max_queue_pairs_gt_1, 0,
                             vmstate_info_uint16_equal, uint16_t),
-        VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
+        VMSTATE_UINT16_TEST(curr_queue_pairs, VirtIONet, max_queue_pairs_gt_1),
         VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
                          vmstate_virtio_net_tx_waiting),
         VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
@@ -3368,16 +3368,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    n->max_queues = MAX(n->nic_conf.peers.queues, 1);
-    if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
-        error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
+    n->max_queue_pairs = MAX(n->nic_conf.peers.queues, 1);
+    if (n->max_queue_pairs * 2 + 1 > VIRTIO_QUEUE_MAX) {
+        error_setg(errp, "Invalid number of queue_pairs (= %" PRIu32 "), "
                    "must be a positive integer less than %d.",
-                   n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
+                   n->max_queue_pairs, (VIRTIO_QUEUE_MAX - 1) / 2);
         virtio_cleanup(vdev);
         return;
     }
-    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
-    n->curr_queues = 1;
+    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queue_pairs);
+    n->curr_queue_pairs = 1;
     n->tx_timeout = n->net_conf.txtimer;
 
     if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
@@ -3391,7 +3391,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
                                     n->net_conf.tx_queue_size);
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_queue_pairs; i++) {
         virtio_net_add_queue(n, i);
     }
 
@@ -3415,13 +3415,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
                               object_get_typename(OBJECT(dev)), dev->id, n);
     }
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_queue_pairs; i++) {
         n->nic->ncs[i].do_not_pad = true;
     }
 
     peer_test_vnet_hdr(n);
     if (peer_has_vnet_hdr(n)) {
-        for (i = 0; i < n->max_queues; i++) {
+        for (i = 0; i < n->max_queue_pairs; i++) {
             qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
         }
         n->host_hdr_len = sizeof(struct virtio_net_hdr);
@@ -3463,7 +3463,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VirtIONet *n = VIRTIO_NET(dev);
-    int i, max_queues;
+    int i, max_queue_pairs;
 
     if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
         virtio_net_unload_ebpf(n);
@@ -3485,12 +3485,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
         remove_migration_state_change_notifier(&n->migration_state);
     }
 
-    max_queues = n->multiqueue ? n->max_queues : 1;
-    for (i = 0; i < max_queues; i++) {
+    max_queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
+    for (i = 0; i < max_queue_pairs; i++) {
         virtio_net_del_queue(n, i);
     }
     /* delete also control vq */
-    virtio_del_queue(vdev, max_queues * 2);
+    virtio_del_queue(vdev, max_queue_pairs * 2);
     qemu_announce_timer_del(&n->announce_timer, false);
     g_free(n->vqs);
     qemu_del_nic(n->nic);
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 824a69c23f..71cbdc26d7 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -194,8 +194,8 @@ struct VirtIONet {
     NICConf nic_conf;
     DeviceState *qdev;
     int multiqueue;
-    uint16_t max_queues;
-    uint16_t curr_queues;
+    uint16_t max_queue_pairs;
+    uint16_t curr_queue_pairs;
     size_t config_size;
     char *netclient_name;
     char *netclient_type;
-- 
2.25.1




* [PATCH V3 08/10] vhost: record the last virtqueue index for the virtio device
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (6 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 07/10] virtio-net: use "queue_pairs" instead of "queues" when possible Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-07  9:03 ` [PATCH V3 09/10] virtio-net: vhost control virtqueue support Jason Wang
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch introduces a new field in the vhost_dev structure to record
the last virtqueue index for the virtio device. This will be useful
for vhost backends with a 1:N model (one device backing several
vhost_dev structures) to start or stop the device only after all the
vhost_dev structures have been started or stopped.
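
For example (assumed numbers): with data_queue_pairs = 2 the virtio
device uses virtqueue indices 0..4, so

    /* with a control virtqueue:    last_index = 2 * 2     = 4  (the cvq)     */
    /* without a control virtqueue: last_index = 2 * 2 - 1 = 3  (last TX vq)  */

which lets a 1:N backend such as vhost-vDPA (patch 10) tell when the
vhost_dev covering the end of the device's virtqueue range is being
started or stopped.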

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c        | 12 +++++++++---
 include/hw/virtio/vhost.h |  2 ++
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 2b594b4642..3aabab06ea 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -231,9 +231,11 @@ fail:
     return NULL;
 }
 
-static void vhost_net_set_vq_index(struct vhost_net *net, int vq_index)
+static void vhost_net_set_vq_index(struct vhost_net *net, int vq_index,
+                                   int last_index)
 {
     net->dev.vq_index = vq_index;
+    net->dev.last_index = last_index;
 }
 
 static int vhost_net_start_one(struct vhost_net *net,
@@ -324,9 +326,13 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
     VirtIONet *n = VIRTIO_NET(dev);
     int nvhosts = data_queue_pairs + cvq;
     struct vhost_net *net;
-    int r, e, i;
+    int r, e, i, last_index = data_qps * 2;
     NetClientState *peer;
 
+    if (!cvq) {
+        last_index -= 1;
+    }
+
     if (!k->set_guest_notifiers) {
         error_report("binding does not support guest notifiers");
         return -ENOSYS;
@@ -341,7 +347,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         }
 
         net = get_vhost_net(peer);
-        vhost_net_set_vq_index(net, i * 2);
+        vhost_net_set_vq_index(net, i * 2, last_index);
 
         /* Suppress the masking guest notifiers on vhost user
          * because vhost user doesn't interrupt masking/unmasking
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 1a9fc65089..3fa0b554ef 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -74,6 +74,8 @@ struct vhost_dev {
     unsigned int nvqs;
     /* the first virtqueue which would be used by this vhost dev */
     int vq_index;
+    /* the last vq index for the virtio device (not vhost) */
+    int last_index;
     /* if non-zero, minimum required value for max_queues */
     int num_queues;
     uint64_t features;
-- 
2.25.1




* [PATCH V3 09/10] virtio-net: vhost control virtqueue support
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (7 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 08/10] vhost: record the last virtqueue index for the virtio device Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-07  9:03 ` [PATCH V3 10/10] vhost-vdpa: multiqueue support Jason Wang
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch implements the control virtqueue support for vhost. This
requires virtio-net to figure out the number of datapath queue pairs
and control virtqueues via is_datapath, and to pass the counts of
those two types of virtqueues to vhost_net_start()/vhost_net_stop().
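
For example (assumed numbers): with three peers where ncs[0] and ncs[1]
are datapath clients and ncs[2] is a control client created by
qemu_new_net_control_client():

    /* max_ncs         = 3
     * max_queue_pairs = 2   (peers with is_datapath == true)
     * cvq             = max_ncs - max_queue_pairs = 1
     * so vhost_net_start(vdev, n->nic->ncs, 2, 1) is called instead of (..., 2, 0).
     */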

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c             |  2 +-
 hw/net/virtio-net.c            | 23 +++++++++++++++++++----
 include/hw/virtio/virtio-net.h |  1 +
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 3aabab06ea..0d888f29a6 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -326,7 +326,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
     VirtIONet *n = VIRTIO_NET(dev);
     int nvhosts = data_queue_pairs + cvq;
     struct vhost_net *net;
-    int r, e, i, last_index = data_qps * 2;
+    int r, e, i, last_index = data_queue_pairs * 2;
     NetClientState *peer;
 
     if (!cvq) {
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 926b6c930e..bd39d2edc4 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -244,6 +244,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
     int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
+    int cvq = n->max_ncs - n->max_queue_pairs;
 
     if (!get_vhost_net(nc->peer)) {
         return;
@@ -285,14 +286,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, queue_pairs, 0);
+        r = vhost_net_start(vdev, n->nic->ncs, queue_pairs, cvq);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queue_pairs, 0);
+        vhost_net_stop(vdev, n->nic->ncs, queue_pairs, cvq);
         n->vhost_started = 0;
     }
 }
@@ -3368,9 +3369,23 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    n->max_queue_pairs = MAX(n->nic_conf.peers.queues, 1);
+    n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
+
+    /*
+     * Figure out the datapath queue pairs since the backend could
+     * provide control queue via peers as well.
+     */
+    if (n->nic_conf.peers.queues) {
+        for (i = 0; i < n->max_ncs; i++) {
+            if (n->nic_conf.peers.ncs[i]->is_datapath) {
+                ++n->max_queue_pairs;
+            }
+        }
+    }
+    n->max_queue_pairs = MAX(n->max_queue_pairs, 1);
+
     if (n->max_queue_pairs * 2 + 1 > VIRTIO_QUEUE_MAX) {
-        error_setg(errp, "Invalid number of queue_pairs (= %" PRIu32 "), "
+        error_setg(errp, "Invalid number of queue pairs (= %" PRIu32 "), "
                    "must be a positive integer less than %d.",
                    n->max_queue_pairs, (VIRTIO_QUEUE_MAX - 1) / 2);
         virtio_cleanup(vdev);
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 71cbdc26d7..08ee6dea39 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -196,6 +196,7 @@ struct VirtIONet {
     int multiqueue;
     uint16_t max_queue_pairs;
     uint16_t curr_queue_pairs;
+    uint16_t max_ncs;
     size_t config_size;
     char *netclient_name;
     char *netclient_type;
-- 
2.25.1




* [PATCH V3 10/10] vhost-vdpa: multiqueue support
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (8 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 09/10] virtio-net: vhost control virtqueue support Jason Wang
@ 2021-09-07  9:03 ` Jason Wang
  2021-09-09 15:40 ` [PATCH V3 00/10] vhost-vDPA multiqueue Zhang, Chen
  2021-09-29  5:18 ` Jason Wang
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-07  9:03 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch implements the multiqueue support for vhost-vdpa. This is
done simply by reading the number of queue pairs from the config space
and initializing the datapath and control path net clients.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
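A minimal user-space sketch of the probing logic added below (illustrative
only, not part of the patch: error handling is simplified, the
/dev/vhost-vdpa-0 path is just an example, and the config read assumes the
little-endian layout that lduw_le_p() expects):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>
#include <endian.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/virtio_net.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/dev/vhost-vdpa-0";
    uint64_t features;
    int fd = open(path, O_RDWR);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Same probe order as vhost_vdpa_get_max_queue_pairs(): features first. */
    if (ioctl(fd, VHOST_GET_FEATURES, &features)) {
        perror("VHOST_GET_FEATURES");
        return 1;
    }
    printf("ctrl vq: %s\n",
           (features & (1ULL << VIRTIO_NET_F_CTRL_VQ)) ? "yes" : "no");

    if (features & (1ULL << VIRTIO_NET_F_MQ)) {
        uint16_t mqp_le;
        struct vhost_vdpa_config *cfg =
            calloc(1, sizeof(*cfg) + sizeof(mqp_le));

        /* Read max_virtqueue_pairs out of the virtio-net config space. */
        cfg->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
        cfg->len = sizeof(mqp_le);
        if (ioctl(fd, VHOST_VDPA_GET_CONFIG, cfg)) {
            perror("VHOST_VDPA_GET_CONFIG");
            return 1;
        }
        memcpy(&mqp_le, cfg->buf, sizeof(mqp_le));
        printf("max_virtqueue_pairs: %u\n", le16toh(mqp_le));
        free(cfg);
    } else {
        /* Without VIRTIO_NET_F_MQ a single queue pair is assumed. */
        printf("max_virtqueue_pairs: 1\n");
    }
    close(fd);
    return 0;
}
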
 hw/virtio/vhost-vdpa.c |   2 +-
 net/vhost-vdpa.c       | 105 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 97 insertions(+), 10 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 99c676796e..c0df409b31 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -589,7 +589,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
     }
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (dev->vq_index + dev->nvqs != dev->last_index) {
         return 0;
     }
 
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 834dab28dd..533b15ae8c 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -18,6 +18,7 @@
 #include "qemu/error-report.h"
 #include "qemu/option.h"
 #include "qapi/error.h"
+#include <linux/vhost.h>
 #include <sys/ioctl.h>
 #include <err.h>
 #include "standard-headers/linux/virtio_net.h"
@@ -51,6 +52,14 @@ const int vdpa_feature_bits[] = {
     VIRTIO_NET_F_HOST_UFO,
     VIRTIO_NET_F_MRG_RXBUF,
     VIRTIO_NET_F_MTU,
+    VIRTIO_NET_F_CTRL_RX,
+    VIRTIO_NET_F_CTRL_RX_EXTRA,
+    VIRTIO_NET_F_CTRL_VLAN,
+    VIRTIO_NET_F_GUEST_ANNOUNCE,
+    VIRTIO_NET_F_CTRL_MAC_ADDR,
+    VIRTIO_NET_F_RSS,
+    VIRTIO_NET_F_MQ,
+    VIRTIO_NET_F_CTRL_VQ,
     VIRTIO_F_IOMMU_PLATFORM,
     VIRTIO_F_RING_PACKED,
     VIRTIO_NET_F_RSS,
@@ -81,7 +90,8 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
     return ret;
 }
 
-static int vhost_vdpa_add(NetClientState *ncs, void *be)
+static int vhost_vdpa_add(NetClientState *ncs, void *be,
+                          int queue_pair_index, int nvqs)
 {
     VhostNetOptions options;
     struct vhost_net *net = NULL;
@@ -94,7 +104,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
     options.net_backend = ncs;
     options.opaque      = be;
     options.busyloop_timeout = 0;
-    options.nvqs = 2;
+    options.nvqs = nvqs;
 
     net = vhost_net_init(&options);
     if (!net) {
@@ -158,18 +168,28 @@ static NetClientInfo net_vhost_vdpa_info = {
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            const char *device,
                                            const char *name,
-                                           int vdpa_device_fd)
+                                           int vdpa_device_fd,
+                                           int queue_pair_index,
+                                           int nvqs,
+                                           bool is_datapath)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
     int ret = 0;
     assert(name);
-    nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
+    if (is_datapath) {
+        nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
+                                 name);
+    } else {
+        nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
+                                         device, name);
+    }
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
 
     s->vhost_vdpa.device_fd = vdpa_device_fd;
-    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
+    s->vhost_vdpa.index = queue_pair_index;
+    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
         qemu_del_net_client(nc);
         return NULL;
@@ -195,12 +215,52 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
     return 0;
 }
 
+static int vhost_vdpa_get_max_queue_pairs(int fd, int *has_cvq, Error **errp)
+{
+    unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
+    struct vhost_vdpa_config *config;
+    __virtio16 *max_queue_pairs;
+    uint64_t features;
+    int ret;
+
+    ret = ioctl(fd, VHOST_GET_FEATURES, &features);
+    if (ret) {
+        error_setg(errp, "Fail to query features from vhost-vDPA device");
+        return ret;
+    }
+
+    if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
+        *has_cvq = 1;
+    } else {
+        *has_cvq = 0;
+    }
+
+    if (features & (1 << VIRTIO_NET_F_MQ)) {
+        config = g_malloc0(config_size + sizeof(*max_queue_pairs));
+        config->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
+        config->len = sizeof(*max_queue_pairs);
+
+        ret = ioctl(fd, VHOST_VDPA_GET_CONFIG, config);
+        if (ret) {
+            error_setg(errp, "Fail to get config from vhost-vDPA device");
+            return -ret;
+        }
+
+        max_queue_pairs = (__virtio16 *)&config->buf;
+
+        return lduw_le_p(max_queue_pairs);
+    }
+
+    return 1;
+}
+
 int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
     int vdpa_device_fd;
-    NetClientState *nc;
+    NetClientState **ncs, *nc;
+    int queue_pairs, i, has_cvq = 0;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -215,11 +275,38 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -errno;
     }
 
-    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
-    if (!nc) {
+    queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd,
+                                                 &has_cvq, errp);
+    if (queue_pairs < 0) {
         qemu_close(vdpa_device_fd);
-        return -1;
+        return queue_pairs;
+    }
+
+    ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
+
+    for (i = 0; i < queue_pairs; i++) {
+        ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+                                     vdpa_device_fd, i, 2, true);
+        if (!ncs[i])
+            goto err;
+    }
+
+    if (has_cvq) {
+        nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+                                 vdpa_device_fd, i, 1, false);
+        if (!nc)
+            goto err;
     }
 
+    g_free(ncs);
     return 0;
+
+err:
+    if (i) {
+        qemu_del_net_client(ncs[0]);
+    }
+    qemu_close(vdpa_device_fd);
+    g_free(ncs);
+
+    return -1;
 }
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* RE: [PATCH V3 00/10] vhost-vDPA multiqueue
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (9 preceding siblings ...)
  2021-09-07  9:03 ` [PATCH V3 10/10] vhost-vdpa: multiqueue support Jason Wang
@ 2021-09-09 15:40 ` Zhang, Chen
  2021-09-10  1:46   ` Jason Wang
  2021-09-29  5:18 ` Jason Wang
  11 siblings, 1 reply; 18+ messages in thread
From: Zhang, Chen @ 2021-09-09 15:40 UTC (permalink / raw)
  To: Jason Wang, mst, qemu-devel; +Cc: eperezma, elic, lulu, Zhu, Lingshan, gdawar



> -----Original Message-----
> From: Qemu-devel <qemu-devel-
> bounces+chen.zhang=intel.com@nongnu.org> On Behalf Of Jason Wang
> Sent: Tuesday, September 7, 2021 5:03 PM
> To: mst@redhat.com; jasowang@redhat.com; qemu-devel@nongnu.org
> Cc: eperezma@redhat.com; elic@nvidia.com; gdawar@xilinx.com; Zhu,
> Lingshan <lingshan.zhu@intel.com>; lulu@redhat.com
> Subject: [PATCH V3 00/10] vhost-vDPA multiqueue
> 
> Hi All:
> 
> This patch implements the multiqueue support for vhost-vDPA. The most
> important requirement si the control virtqueue support. The virtio-net and

Typo:
 S/si/is


> vhost-net core are tweak to support control virtqueue as if what data queue
> pairs are done: a dedicated vhost_net device which is coupled with the
> NetClientState is intrdouced so most of the existing vhost codes could be
> reused with minor changes. This means the control virtqueue will bypass the
> Qemu. With the control virtqueue, vhost-vDPA are extend to support
> creating and destroying multiqueue queue pairs plus the control virtqueue.
> 
> For the future, if we want to support live migration, we can either do shadow
> cvq on top or introduce new interfaces for reporting device states.
> 
> Tests are done via the vp_vdpa driver in L1 guest.
> 
> Changes since V2:
> 
> - rebase to qemu master
> - use "queue_pairs" instead of "qps"
> - typo fixes
> 
> Changes since V1:
> 
> - start and stop vhost devices when all queues were setup
> - fix the case when driver doesn't support MQ but device support
> - correctly set the batching capability to avoid a map/unmap storm
> - various other tweaks
> 
> Jason Wang (10):
>   vhost-vdpa: open device fd in net_init_vhost_vdpa()
>   vhost-vdpa: classify one time request
>   vhost-vdpa: prepare for the multiqueue support
>   vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
>   net: introduce control client
>   vhost-net: control virtqueue support
>   virtio-net: use "queue_pairs" instead of "queues" when possible
>   vhost: record the last virtqueue index for the virtio device
>   virtio-net: vhost control virtqueue support
>   vhost-vdpa: multiqueue support
> 
>  hw/net/vhost_net.c             |  55 ++++++++---
>  hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
>  hw/virtio/vhost-vdpa.c         |  56 +++++++++--
>  include/hw/virtio/vhost-vdpa.h |   1 +
>  include/hw/virtio/vhost.h      |   2 +
>  include/hw/virtio/virtio-net.h |   5 +-
>  include/net/net.h              |   5 +
>  include/net/vhost_net.h        |   6 +-
>  net/net.c                      |  24 ++++-
>  net/vhost-vdpa.c               | 127 ++++++++++++++++++++++---
>  10 files changed, 328 insertions(+), 118 deletions(-)
> 
> --
> 2.25.1
> 



^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
  2021-09-07  9:03 ` [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
@ 2021-09-09 15:41   ` Zhang, Chen
  2021-09-10  1:49     ` Jason Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Zhang, Chen @ 2021-09-09 15:41 UTC (permalink / raw)
  To: Jason Wang, mst, qemu-devel; +Cc: eperezma, elic, lulu, Zhu, Lingshan, gdawar



> -----Original Message-----
> From: Qemu-devel <qemu-devel-
> bounces+chen.zhang=intel.com@nongnu.org> On Behalf Of Jason Wang
> Sent: Tuesday, September 7, 2021 5:03 PM
> To: mst@redhat.com; jasowang@redhat.com; qemu-devel@nongnu.org
> Cc: eperezma@redhat.com; elic@nvidia.com; gdawar@xilinx.com; Zhu,
> Lingshan <lingshan.zhu@intel.com>; lulu@redhat.com
> Subject: [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns
> NetClientState *
> 
> This patch switches to let net_vhost_vdpa_init() to return NetClientState *.
> This is used for the callers to allocate multiqueue NetClientState for
> multiqueue support.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  net/vhost-vdpa.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c index
> 73d29a74ef..834dab28dd 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -155,8 +155,10 @@ static NetClientInfo net_vhost_vdpa_info = {
>          .has_ufo = vhost_vdpa_has_ufo,
>  };
> 
> -static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
> -                               const char *name, int vdpa_device_fd)
> +static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> +                                           const char *device,
> +                                           const char *name,
> +                                           int vdpa_device_fd)
>  {
>      NetClientState *nc = NULL;
>      VhostVDPAState *s;
> @@ -170,8 +172,9 @@ static int net_vhost_vdpa_init(NetClientState *peer,
> const char *device,
>      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
>      if (ret) {
>          qemu_del_net_client(nc);
> +        return NULL;
>      }
> -    return ret;
> +    return nc;
>  }
> 
>  static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error
> **errp) @@ -196,7 +199,8 @@ int net_init_vhost_vdpa(const Netdev
> *netdev, const char *name,
>                          NetClientState *peer, Error **errp)  {
>      const NetdevVhostVDPAOptions *opts;
> -    int vdpa_device_fd, ret;
> +    int vdpa_device_fd;
> +    NetClientState *nc;
> 
>      assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>      opts = &netdev->u.vhost_vdpa;
> @@ -211,10 +215,11 @@ int net_init_vhost_vdpa(const Netdev *netdev,
> const char *name,
>          return -errno;
>      }
> 
> -    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> vdpa_device_fd);
> -    if (ret) {
> +    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> vdpa_device_fd);
> +    if (!nc) {
>          qemu_close(vdpa_device_fd);
> +        return -1;

It looks like this patch needs to be merged with patch 01/10.
Both are related to the changes in net_vhost_vdpa_init().

Thanks
Chen


>      }
> 
> -    return ret;
> +    return 0;
>  }
> --
> 2.25.1
> 



^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH V3 05/10] net: introduce control client
  2021-09-07  9:03 ` [PATCH V3 05/10] net: introduce control client Jason Wang
@ 2021-09-09 15:45   ` Zhang, Chen
  2021-09-10  1:47     ` Jason Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Zhang, Chen @ 2021-09-09 15:45 UTC (permalink / raw)
  To: Jason Wang, mst, qemu-devel; +Cc: eperezma, elic, lulu, Zhu, Lingshan, gdawar



> -----Original Message-----
> From: Qemu-devel <qemu-devel-
> bounces+chen.zhang=intel.com@nongnu.org> On Behalf Of Jason Wang
> Sent: Tuesday, September 7, 2021 5:03 PM
> To: mst@redhat.com; jasowang@redhat.com; qemu-devel@nongnu.org
> Cc: eperezma@redhat.com; elic@nvidia.com; gdawar@xilinx.com; Zhu,
> Lingshan <lingshan.zhu@intel.com>; lulu@redhat.com
> Subject: [PATCH V3 05/10] net: introduce control client
> 
> This patch introduces a boolean for the device has control queue which can
> accepts control command via network queue.
> 
> The first user would be the control virtqueue support for vhost.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  include/net/net.h |  5 +++++
>  net/net.c         | 24 +++++++++++++++++++++---
>  2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/include/net/net.h b/include/net/net.h index
> 5d1508081f..4f400b8a09 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -103,6 +103,7 @@ struct NetClientState {
>      int vnet_hdr_len;
>      bool is_netdev;
>      bool do_not_pad; /* do not pad to the minimum ethernet frame length */
> +    bool is_datapath;
>      QTAILQ_HEAD(, NetFilterState) filters;  };
> 
> @@ -134,6 +135,10 @@ NetClientState
> *qemu_new_net_client(NetClientInfo *info,
>                                      NetClientState *peer,
>                                      const char *model,
>                                      const char *name);
> +NetClientState *qemu_new_net_control_client(NetClientInfo *info,
> +                                        NetClientState *peer,
> +                                        const char *model,
> +                                        const char *name);
>  NICState *qemu_new_nic(NetClientInfo *info,
>                         NICConf *conf,
>                         const char *model, diff --git a/net/net.c b/net/net.c index
> 52c99196c6..f0d14dbfc1 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -239,7 +239,8 @@ static void qemu_net_client_setup(NetClientState
> *nc,
>                                    NetClientState *peer,
>                                    const char *model,
>                                    const char *name,
> -                                  NetClientDestructor *destructor)
> +                                  NetClientDestructor *destructor,
> +                                  bool is_datapath)
>  {
>      nc->info = info;
>      nc->model = g_strdup(model);
> @@ -258,6 +259,7 @@ static void qemu_net_client_setup(NetClientState
> *nc,
> 
>      nc->incoming_queue =
> qemu_new_net_queue(qemu_deliver_packet_iov, nc);
>      nc->destructor = destructor;
> +    nc->is_datapath = is_datapath;
>      QTAILQ_INIT(&nc->filters);
>  }
> 
> @@ -272,7 +274,23 @@ NetClientState
> *qemu_new_net_client(NetClientInfo *info,
> 
>      nc = g_malloc0(info->size);
>      qemu_net_client_setup(nc, info, peer, model, name,
> -                          qemu_net_client_destructor);
> +                          qemu_net_client_destructor, true);
> +
> +    return nc;
> +}
> +
> +NetClientState *qemu_new_net_control_client(NetClientInfo *info,
> +                                            NetClientState *peer,
> +                                            const char *model,
> +                                            const char *name) {
> +    NetClientState *nc;
> +
> +    assert(info->size >= sizeof(NetClientState));
> +
> +    nc = g_malloc0(info->size);
> +    qemu_net_client_setup(nc, info, peer, model, name,
> +                          qemu_net_client_destructor, false);
> 

Except for the "is_datapath" flag, it looks the same as "qemu_new_net_client".
Why not extend the original "qemu_new_net_client" with a "bool control" parameter?

Thanks
Chen

>      return nc;
>  }
> @@ -297,7 +315,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
> 
>      for (i = 0; i < queues; i++) {
>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> -                              NULL);
> +                              NULL, true);
>          nic->ncs[i].queue_index = i;
>      }
> 
> --
> 2.25.1
> 



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH V3 00/10] vhost-vDPA multiqueue
  2021-09-09 15:40 ` [PATCH V3 00/10] vhost-vDPA multiqueue Zhang, Chen
@ 2021-09-10  1:46   ` Jason Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-10  1:46 UTC (permalink / raw)
  To: Zhang, Chen; +Cc: lulu, mst, qemu-devel, gdawar, eperezma, Zhu, Lingshan, elic

On Thu, Sep 9, 2021 at 11:40 PM Zhang, Chen <chen.zhang@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Qemu-devel <qemu-devel-
> > bounces+chen.zhang=intel.com@nongnu.org> On Behalf Of Jason Wang
> > Sent: Tuesday, September 7, 2021 5:03 PM
> > To: mst@redhat.com; jasowang@redhat.com; qemu-devel@nongnu.org
> > Cc: eperezma@redhat.com; elic@nvidia.com; gdawar@xilinx.com; Zhu,
> > Lingshan <lingshan.zhu@intel.com>; lulu@redhat.com
> > Subject: [PATCH V3 00/10] vhost-vDPA multiqueue
> >
> > Hi All:
> >
> > This patch implements the multiqueue support for vhost-vDPA. The most
> > important requirement si the control virtqueue support. The virtio-net and
>
> Typo:
>  S/si/is

Will fix (if there's a respin).

Thanks

>
>
> > vhost-net core are tweak to support control virtqueue as if what data queue
> > pairs are done: a dedicated vhost_net device which is coupled with the
> > NetClientState is intrdouced so most of the existing vhost codes could be
> > reused with minor changes. This means the control virtqueue will bypass the
> > Qemu. With the control virtqueue, vhost-vDPA are extend to support
> > creating and destroying multiqueue queue pairs plus the control virtqueue.
> >
> > For the future, if we want to support live migration, we can either do shadow
> > cvq on top or introduce new interfaces for reporting device states.
> >
> > Tests are done via the vp_vdpa driver in L1 guest.
> >
> > Changes since V2:
> >
> > - rebase to qemu master
> > - use "queue_pairs" instead of "qps"
> > - typo fixes
> >
> > Changes since V1:
> >
> > - start and stop vhost devices when all queues were setup
> > - fix the case when driver doesn't support MQ but device support
> > - correctly set the batching capability to avoid a map/unmap storm
> > - various other tweaks
> >
> > Jason Wang (10):
> >   vhost-vdpa: open device fd in net_init_vhost_vdpa()
> >   vhost-vdpa: classify one time request
> >   vhost-vdpa: prepare for the multiqueue support
> >   vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
> >   net: introduce control client
> >   vhost-net: control virtqueue support
> >   virtio-net: use "queue_pairs" instead of "queues" when possible
> >   vhost: record the last virtqueue index for the virtio device
> >   virtio-net: vhost control virtqueue support
> >   vhost-vdpa: multiqueue support
> >
> >  hw/net/vhost_net.c             |  55 ++++++++---
> >  hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
> >  hw/virtio/vhost-vdpa.c         |  56 +++++++++--
> >  include/hw/virtio/vhost-vdpa.h |   1 +
> >  include/hw/virtio/vhost.h      |   2 +
> >  include/hw/virtio/virtio-net.h |   5 +-
> >  include/net/net.h              |   5 +
> >  include/net/vhost_net.h        |   6 +-
> >  net/net.c                      |  24 ++++-
> >  net/vhost-vdpa.c               | 127 ++++++++++++++++++++++---
> >  10 files changed, 328 insertions(+), 118 deletions(-)
> >
> > --
> > 2.25.1
> >
>



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH V3 05/10] net: introduce control client
  2021-09-09 15:45   ` Zhang, Chen
@ 2021-09-10  1:47     ` Jason Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-10  1:47 UTC (permalink / raw)
  To: Zhang, Chen; +Cc: lulu, mst, qemu-devel, gdawar, eperezma, Zhu, Lingshan, elic

On Thu, Sep 9, 2021 at 11:46 PM Zhang, Chen <chen.zhang@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Qemu-devel <qemu-devel-
> > bounces+chen.zhang=intel.com@nongnu.org> On Behalf Of Jason Wang
> > Sent: Tuesday, September 7, 2021 5:03 PM
> > To: mst@redhat.com; jasowang@redhat.com; qemu-devel@nongnu.org
> > Cc: eperezma@redhat.com; elic@nvidia.com; gdawar@xilinx.com; Zhu,
> > Lingshan <lingshan.zhu@intel.com>; lulu@redhat.com
> > Subject: [PATCH V3 05/10] net: introduce control client
> >
> > This patch introduces a boolean for the device has control queue which can
> > accepts control command via network queue.
> >
> > The first user would be the control virtqueue support for vhost.
> >
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > ---
> >  include/net/net.h |  5 +++++
> >  net/net.c         | 24 +++++++++++++++++++++---
> >  2 files changed, 26 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/net/net.h b/include/net/net.h index
> > 5d1508081f..4f400b8a09 100644
> > --- a/include/net/net.h
> > +++ b/include/net/net.h
> > @@ -103,6 +103,7 @@ struct NetClientState {
> >      int vnet_hdr_len;
> >      bool is_netdev;
> >      bool do_not_pad; /* do not pad to the minimum ethernet frame length */
> > +    bool is_datapath;
> >      QTAILQ_HEAD(, NetFilterState) filters;  };
> >
> > @@ -134,6 +135,10 @@ NetClientState
> > *qemu_new_net_client(NetClientInfo *info,
> >                                      NetClientState *peer,
> >                                      const char *model,
> >                                      const char *name);
> > +NetClientState *qemu_new_net_control_client(NetClientInfo *info,
> > +                                        NetClientState *peer,
> > +                                        const char *model,
> > +                                        const char *name);
> >  NICState *qemu_new_nic(NetClientInfo *info,
> >                         NICConf *conf,
> >                         const char *model, diff --git a/net/net.c b/net/net.c index
> > 52c99196c6..f0d14dbfc1 100644
> > --- a/net/net.c
> > +++ b/net/net.c
> > @@ -239,7 +239,8 @@ static void qemu_net_client_setup(NetClientState
> > *nc,
> >                                    NetClientState *peer,
> >                                    const char *model,
> >                                    const char *name,
> > -                                  NetClientDestructor *destructor)
> > +                                  NetClientDestructor *destructor,
> > +                                  bool is_datapath)
> >  {
> >      nc->info = info;
> >      nc->model = g_strdup(model);
> > @@ -258,6 +259,7 @@ static void qemu_net_client_setup(NetClientState
> > *nc,
> >
> >      nc->incoming_queue =
> > qemu_new_net_queue(qemu_deliver_packet_iov, nc);
> >      nc->destructor = destructor;
> > +    nc->is_datapath = is_datapath;
> >      QTAILQ_INIT(&nc->filters);
> >  }
> >
> > @@ -272,7 +274,23 @@ NetClientState
> > *qemu_new_net_client(NetClientInfo *info,
> >
> >      nc = g_malloc0(info->size);
> >      qemu_net_client_setup(nc, info, peer, model, name,
> > -                          qemu_net_client_destructor);
> > +                          qemu_net_client_destructor, true);
> > +
> > +    return nc;
> > +}
> > +
> > +NetClientState *qemu_new_net_control_client(NetClientInfo *info,
> > +                                            NetClientState *peer,
> > +                                            const char *model,
> > +                                            const char *name) {
> > +    NetClientState *nc;
> > +
> > +    assert(info->size >= sizeof(NetClientState));
> > +
> > +    nc = g_malloc0(info->size);
> > +    qemu_net_client_setup(nc, info, peer, model, name,
> > +                          qemu_net_client_destructor, false);
> >
>
> Except for the "is_datapath" flag, it looks the same as "qemu_new_net_client".
> Why not extend the original "qemu_new_net_client" with a "bool control" parameter?

Either is fine. I think it's a trick to reduce the changeset;
otherwise we would need to modify all 13 callers.
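
Just to make the trade-off concrete, an untested sketch of that
alternative (not actual QEMU code) would be something like:

NetClientState *qemu_new_net_client(NetClientInfo *info,
                                    NetClientState *peer,
                                    const char *model,
                                    const char *name,
                                    bool is_datapath)
{
    NetClientState *nc;

    assert(info->size >= sizeof(NetClientState));

    nc = g_malloc0(info->size);
    /* Same setup as today, but every caller now picks datapath vs control. */
    qemu_net_client_setup(nc, info, peer, model, name,
                          qemu_net_client_destructor, is_datapath);

    return nc;
}

with all the existing call sites updated to pass true.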

Thanks

>
> Thanks
> Chen
>
> >      return nc;
> >  }
> > @@ -297,7 +315,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
> >
> >      for (i = 0; i < queues; i++) {
> >          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> > -                              NULL);
> > +                              NULL, true);
> >          nic->ncs[i].queue_index = i;
> >      }
> >
> > --
> > 2.25.1
> >
>



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
  2021-09-09 15:41   ` Zhang, Chen
@ 2021-09-10  1:49     ` Jason Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-10  1:49 UTC (permalink / raw)
  To: Zhang, Chen; +Cc: lulu, mst, qemu-devel, gdawar, eperezma, Zhu, Lingshan, elic

On Thu, Sep 9, 2021 at 11:41 PM Zhang, Chen <chen.zhang@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Qemu-devel <qemu-devel-
> > bounces+chen.zhang=intel.com@nongnu.org> On Behalf Of Jason Wang
> > Sent: Tuesday, September 7, 2021 5:03 PM
> > To: mst@redhat.com; jasowang@redhat.com; qemu-devel@nongnu.org
> > Cc: eperezma@redhat.com; elic@nvidia.com; gdawar@xilinx.com; Zhu,
> > Lingshan <lingshan.zhu@intel.com>; lulu@redhat.com
> > Subject: [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns
> > NetClientState *
> >
> > This patch switches to let net_vhost_vdpa_init() to return NetClientState *.
> > This is used for the callers to allocate multiqueue NetClientState for
> > multiqueue support.
> >
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > ---
> >  net/vhost-vdpa.c | 19 ++++++++++++-------
> >  1 file changed, 12 insertions(+), 7 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c index
> > 73d29a74ef..834dab28dd 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -155,8 +155,10 @@ static NetClientInfo net_vhost_vdpa_info = {
> >          .has_ufo = vhost_vdpa_has_ufo,
> >  };
> >
> > -static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
> > -                               const char *name, int vdpa_device_fd)
> > +static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > +                                           const char *device,
> > +                                           const char *name,
> > +                                           int vdpa_device_fd)
> >  {
> >      NetClientState *nc = NULL;
> >      VhostVDPAState *s;
> > @@ -170,8 +172,9 @@ static int net_vhost_vdpa_init(NetClientState *peer,
> > const char *device,
> >      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
> >      if (ret) {
> >          qemu_del_net_client(nc);
> > +        return NULL;
> >      }
> > -    return ret;
> > +    return nc;
> >  }
> >
> >  static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error
> > **errp) @@ -196,7 +199,8 @@ int net_init_vhost_vdpa(const Netdev
> > *netdev, const char *name,
> >                          NetClientState *peer, Error **errp)  {
> >      const NetdevVhostVDPAOptions *opts;
> > -    int vdpa_device_fd, ret;
> > +    int vdpa_device_fd;
> > +    NetClientState *nc;
> >
> >      assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> >      opts = &netdev->u.vhost_vdpa;
> > @@ -211,10 +215,11 @@ int net_init_vhost_vdpa(const Netdev *netdev,
> > const char *name,
> >          return -errno;
> >      }
> >
> > -    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > vdpa_device_fd);
> > -    if (ret) {
> > +    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > vdpa_device_fd);
> > +    if (!nc) {
> >          qemu_close(vdpa_device_fd);
> > +        return -1;
>
> It looks like this patch needs to be merged with patch 01/10.
> Both are related to the changes in net_vhost_vdpa_init().

Either is fine, but I prefer to keep the patches logically independent to
ease review.

Thanks

>
> Thanks
> Chen
>
>
> >      }
> >
> > -    return ret;
> > +    return 0;
> >  }
> > --
> > 2.25.1
> >
>



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH V3 00/10] vhost-vDPA multiqueue
  2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
                   ` (10 preceding siblings ...)
  2021-09-09 15:40 ` [PATCH V3 00/10] vhost-vDPA multiqueue Zhang, Chen
@ 2021-09-29  5:18 ` Jason Wang
  11 siblings, 0 replies; 18+ messages in thread
From: Jason Wang @ 2021-09-29  5:18 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: eperezma, Eli Cohen, Gautam Dawar, Zhu Lingshan, Cindy Lu

On Tue, Sep 7, 2021 at 5:03 PM Jason Wang <jasowang@redhat.com> wrote:
>
> Hi All:
>
> This patch implements the multiqueue support for vhost-vDPA. The most
> important requirement si the control virtqueue support. The virtio-net
> and vhost-net core are tweak to support control virtqueue as if what
> data queue pairs are done: a dedicated vhost_net device which is
> coupled with the NetClientState is intrdouced so most of the existing
> vhost codes could be reused with minor changes. This means the control
> virtqueue will bypass the Qemu. With the control virtqueue, vhost-vDPA
> are extend to support creating and destroying multiqueue queue pairs
> plus the control virtqueue.
>
> For the future, if we want to support live migration, we can either do
> shadow cvq on top or introduce new interfaces for reporting device
> states.
>
> Tests are done via the vp_vdpa driver in L1 guest.

Michael, does this look good to you? If yes, do you plan to merge this
or should I do that?

Thanks

>
> Changes since V2:
>
> - rebase to qemu master
> - use "queue_pairs" instead of "qps"
> - typo fixes
>
> Changes since V1:
>
> - start and stop vhost devices when all queues were setup
> - fix the case when driver doesn't support MQ but device support
> - correctly set the batching capability to avoid a map/unmap storm
> - various other tweaks
>
> Jason Wang (10):
>   vhost-vdpa: open device fd in net_init_vhost_vdpa()
>   vhost-vdpa: classify one time request
>   vhost-vdpa: prepare for the multiqueue support
>   vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
>   net: introduce control client
>   vhost-net: control virtqueue support
>   virtio-net: use "queue_pairs" instead of "queues" when possible
>   vhost: record the last virtqueue index for the virtio device
>   virtio-net: vhost control virtqueue support
>   vhost-vdpa: multiqueue support
>
>  hw/net/vhost_net.c             |  55 ++++++++---
>  hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
>  hw/virtio/vhost-vdpa.c         |  56 +++++++++--
>  include/hw/virtio/vhost-vdpa.h |   1 +
>  include/hw/virtio/vhost.h      |   2 +
>  include/hw/virtio/virtio-net.h |   5 +-
>  include/net/net.h              |   5 +
>  include/net/vhost_net.h        |   6 +-
>  net/net.c                      |  24 ++++-
>  net/vhost-vdpa.c               | 127 ++++++++++++++++++++++---
>  10 files changed, 328 insertions(+), 118 deletions(-)
>
> --
> 2.25.1
>



^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2021-09-29  5:20 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-07  9:03 [PATCH V3 00/10] vhost-vDPA multiqueue Jason Wang
2021-09-07  9:03 ` [PATCH V3 01/10] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
2021-09-07  9:03 ` [PATCH V3 02/10] vhost-vdpa: classify one time request Jason Wang
2021-09-07  9:03 ` [PATCH V3 03/10] vhost-vdpa: prepare for the multiqueue support Jason Wang
2021-09-07  9:03 ` [PATCH V3 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
2021-09-09 15:41   ` Zhang, Chen
2021-09-10  1:49     ` Jason Wang
2021-09-07  9:03 ` [PATCH V3 05/10] net: introduce control client Jason Wang
2021-09-09 15:45   ` Zhang, Chen
2021-09-10  1:47     ` Jason Wang
2021-09-07  9:03 ` [PATCH V3 06/10] vhost-net: control virtqueue support Jason Wang
2021-09-07  9:03 ` [PATCH V3 07/10] virtio-net: use "queue_pairs" instead of "queues" when possible Jason Wang
2021-09-07  9:03 ` [PATCH V3 08/10] vhost: record the last virtqueue index for the virtio device Jason Wang
2021-09-07  9:03 ` [PATCH V3 09/10] virtio-net: vhost control virtqueue support Jason Wang
2021-09-07  9:03 ` [PATCH V3 10/10] vhost-vdpa: multiqueue support Jason Wang
2021-09-09 15:40 ` [PATCH V3 00/10] vhost-vDPA multiqueue Zhang, Chen
2021-09-10  1:46   ` Jason Wang
2021-09-29  5:18 ` Jason Wang
