* [PATCH V2 00/18] vhost-vDPA multiqueue
@ 2021-07-06  8:26 Jason Wang
  2021-07-06  8:27 ` [PATCH V2 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
                   ` (18 more replies)
  0 siblings, 19 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:26 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

Hi All:

This series implements multiqueue support for vhost-vDPA. The most
important requirement is control virtqueue support. The virtio-net
and vhost-net cores are tweaked to handle the control virtqueue the
same way as the data queue pairs: a dedicated vhost_net device coupled
with the NetClientState is introduced, so most of the existing vhost
code can be reused with minor changes. With the control virtqueue in
place, vhost-vDPA is extended to support creating and destroying the
multiqueue queue pairs plus the control virtqueue.

Tests were done via the vp_vdpa driver in an L1 guest plus the vdpa
simulator on L0.
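
For reference, enabling multiqueue on top of vhost-vDPA would look
something like the following command line fragment (the vdpa device
node below is only an example and depends on the host setup):

  -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
  -device virtio-net-pci,netdev=vdpa0,mq=on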

Please review.

Changes since V1:

- validating all features that depend on the ctrl vq
- typo fixes and commit log tweaks
- fix build errors because max_qps is used before it is introduced

Thanks

Jason Wang (18):
  vhost_net: remove the meaningless assignment in vhost_net_start_one()
  vhost: use unsigned int for nvqs
  vhost_net: do not assume nvqs is always 2
  vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
  vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
  vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
  vhost-vdpa: tweak the error label in vhost_vdpa_add()
  vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
  vhost-vdpa: remove the unnecessary queue_index assignment
  vhost-vdpa: open device fd in net_init_vhost_vdpa()
  vhost-vdpa: classify one time request
  vhost-vdpa: prepare for the multiqueue support
  vhost-vdpa: let net_vhost_vdpa_init() return NetClientState *
  net: introduce control client
  vhost-net: control virtqueue support
  virtio-net: use "qps" instead of "queues" when possible
  virtio-net: vhost control virtqueue support
  vhost-vdpa: multiqueue support

 hw/net/vhost_net.c             |  48 +++++++---
 hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
 hw/virtio/vhost-vdpa.c         |  55 ++++++++++-
 include/hw/virtio/vhost-vdpa.h |   1 +
 include/hw/virtio/vhost.h      |   2 +-
 include/hw/virtio/virtio-net.h |   5 +-
 include/net/net.h              |   5 +
 include/net/vhost_net.h        |   7 +-
 net/net.c                      |  24 ++++-
 net/tap.c                      |   1 +
 net/vhost-user.c               |   1 +
 net/vhost-vdpa.c               | 156 ++++++++++++++++++++++++-------
 12 files changed, 332 insertions(+), 138 deletions(-)

-- 
2.25.1



* [PATCH V2 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one()
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 02/18] vhost: use unsigned int for nvqs Jason Wang
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

The nvqs and vqs have been initialized during vhost_net_init() and are
not expected to change during the life cycle of the vhost_net
structure. So this patch removes the meaningless assignment.

Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 44c1ed92dc..6bd4184f96 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -238,9 +238,6 @@ static int vhost_net_start_one(struct vhost_net *net,
     struct vhost_vring_file file = { };
     int r;
 
-    net->dev.nvqs = 2;
-    net->dev.vqs = net->vqs;
-
     r = vhost_dev_enable_notifiers(&net->dev, dev);
     if (r < 0) {
         goto fail_notifiers;
-- 
2.25.1



* [PATCH V2 02/18] vhost: use unsigned int for nvqs
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
  2021-07-06  8:27 ` [PATCH V2 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

Switch to using unsigned int for nvqs since it is not expected to be
negative.

Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/hw/virtio/vhost.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 21a9a52088..ddd7d3d594 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -71,7 +71,7 @@ struct vhost_dev {
     int n_tmp_sections;
     MemoryRegionSection *tmp_sections;
     struct vhost_virtqueue *vqs;
-    int nvqs;
+    unsigned int nvqs;
     /* the first virtqueue which would be used by this vhost dev */
     int vq_index;
     /* if non-zero, minimum required value for max_queues */
-- 
2.25.1



* [PATCH V2 03/18] vhost_net: do not assume nvqs is always 2
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
  2021-07-06  8:27 ` [PATCH V2 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
  2021-07-06  8:27 ` [PATCH V2 02/18] vhost: use unsigned int for nvqs Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang
  Cc: eperezma, elic, lingshan.zhu, lulu, Stefano Garzarella

This patch switches to initializing dev.nvqs from the VhostNetOptions
instead of assuming it is always 2. This is useful for implementing
control virtqueue support, which will use a dedicated vhost_net
structure with a single cvq.

Note that nvqs is still set to 2 for all users and this patch does not
change functionality.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c      | 2 +-
 include/net/vhost_net.h | 1 +
 net/tap.c               | 1 +
 net/vhost-user.c        | 1 +
 net/vhost-vdpa.c        | 1 +
 5 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 6bd4184f96..ef1370bd92 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -163,9 +163,9 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
         goto fail;
     }
     net->nc = options->net_backend;
+    net->dev.nvqs = options->nvqs;
 
     net->dev.max_queues = 1;
-    net->dev.nvqs = 2;
     net->dev.vqs = net->vqs;
 
     if (backend_kernel) {
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 172b0051d8..fba40cf695 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -14,6 +14,7 @@ typedef struct VhostNetOptions {
     VhostBackendType backend_type;
     NetClientState *net_backend;
     uint32_t busyloop_timeout;
+    unsigned int nvqs;
     void *opaque;
 } VhostNetOptions;
 
diff --git a/net/tap.c b/net/tap.c
index f5686bbf77..f716be3e3f 100644
--- a/net/tap.c
+++ b/net/tap.c
@@ -749,6 +749,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
             qemu_set_nonblock(vhostfd);
         }
         options.opaque = (void *)(uintptr_t)vhostfd;
+        options.nvqs = 2;
 
         s->vhost_net = vhost_net_init(&options);
         if (!s->vhost_net) {
diff --git a/net/vhost-user.c b/net/vhost-user.c
index ffbd94d944..b93918c5a4 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -85,6 +85,7 @@ static int vhost_user_start(int queues, NetClientState *ncs[],
         options.net_backend = ncs[i];
         options.opaque      = be;
         options.busyloop_timeout = 0;
+        options.nvqs = 2;
         net = vhost_net_init(&options);
         if (!net) {
             error_report("failed to init vhost_net for queue %d", i);
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 19187dce8c..18b45ad777 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -105,6 +105,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
     options.net_backend = ncs;
     options.opaque      = be;
     options.busyloop_timeout = 0;
+    options.nvqs = 2;
 
     net = vhost_net_init(&options);
     if (!net) {
-- 
2.25.1



* [PATCH V2 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (2 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

The VhostVDPAState is just allocated by qemu_new_net_client() via
g_malloc0() in net_vhost_vdpa_init(), so s->vhost_net is guaranteed to
be NULL. Let's remove this unnecessary check in vhost_vdpa_add().

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 18b45ad777..728e63ff54 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -112,10 +112,6 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
         error_report("failed to init vhost_net for queue");
         goto err;
     }
-    if (s->vhost_net) {
-        vhost_net_cleanup(s->vhost_net);
-        g_free(s->vhost_net);
-    }
     s->vhost_net = net;
     ret = vhost_vdpa_net_check_device_id(net);
     if (ret) {
-- 
2.25.1



* [PATCH V2 05/18] vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (3 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang
  Cc: eperezma, elic, lingshan.zhu, lulu, Stefano Garzarella

The previous vhost_net_cleanup() is sufficient for freeing; calling
vhost_vdpa_del() in this case leads to an extra round of freeing. Note
that this kind of "double free" is safe since vhost_dev_cleanup()
zeroes the whole structure.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 728e63ff54..f5689a7c32 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -82,16 +82,6 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
     return ret;
 }
 
-static void vhost_vdpa_del(NetClientState *ncs)
-{
-    VhostVDPAState *s;
-    assert(ncs->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
-    s = DO_UPCAST(VhostVDPAState, nc, ncs);
-    if (s->vhost_net) {
-        vhost_net_cleanup(s->vhost_net);
-    }
-}
-
 static int vhost_vdpa_add(NetClientState *ncs, void *be)
 {
     VhostNetOptions options;
@@ -122,7 +112,6 @@ err:
     if (net) {
         vhost_net_cleanup(net);
     }
-    vhost_vdpa_del(ncs);
     return -1;
 }
 
-- 
2.25.1



* [PATCH V2 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (4 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 07/18] vhost-vdpa: tweak the error label " Jason Wang
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang
  Cc: eperezma, elic, lingshan.zhu, lulu, Stefano Garzarella

Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index f5689a7c32..21f09c546f 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
 err:
     if (net) {
         vhost_net_cleanup(net);
+        g_free(net);
     }
     return -1;
 }
-- 
2.25.1



* [PATCH V2 07/18] vhost-vdpa: tweak the error label in vhost_vdpa_add()
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (5 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

Introduce a new error label to avoid the unnecessary check of the net
pointer.

Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 21f09c546f..0da7bc347a 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -100,19 +100,18 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
     net = vhost_net_init(&options);
     if (!net) {
         error_report("failed to init vhost_net for queue");
-        goto err;
+        goto err_init;
     }
     s->vhost_net = net;
     ret = vhost_vdpa_net_check_device_id(net);
     if (ret) {
-        goto err;
+        goto err_check;
     }
     return 0;
-err:
-    if (net) {
-        vhost_net_cleanup(net);
-        g_free(net);
-    }
+err_check:
+    vhost_net_cleanup(net);
+    g_free(net);
+err_init:
     return -1;
 }
 
-- 
2.25.1



* [PATCH V2 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (6 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 07/18] vhost-vdpa: tweak the error label " Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 09/18] vhost-vdpa: remove the unnecessary queue_index assignment Jason Wang
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang
  Cc: eperezma, elic, lingshan.zhu, lulu, Stefano Garzarella

vhost_vdpa_add() can fail for various reasons, so asserting that it
succeeds is wrong. Instead, we should free the NetClientState and
propagate the error to the caller.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 0da7bc347a..87b181a74e 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -174,7 +174,10 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
     }
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
-    assert(s->vhost_net);
+    if (ret) {
+        qemu_close(vdpa_device_fd);
+        qemu_del_net_client(nc);
+    }
     return ret;
 }
 
-- 
2.25.1



* [PATCH V2 09/18] vhost-vdpa: remove the unnecessary queue_index assignment
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (7 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang
  Cc: eperezma, elic, lingshan.zhu, lulu, Stefano Garzarella

The queue_index of the NetClientState will be assigned in set_netdev()
afterwards, so setting it in net_vhost_vdpa_init() is meaningless. This
patch removes the assignment.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 87b181a74e..572aed4ca2 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -166,7 +166,6 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
     assert(name);
     nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
-    nc->queue_index = 0;
     s = DO_UPCAST(VhostVDPAState, nc, nc);
     vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
     if (vdpa_device_fd == -1) {
-- 
2.25.1



* [PATCH V2 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa()
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (8 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 09/18] vhost-vdpa: remove the unnecessary queue_index assignment Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 11/18] vhost-vdpa: classify one time request Jason Wang
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang
  Cc: eperezma, elic, lingshan.zhu, lulu, Stefano Garzarella

This patch switches to opening the device fd in net_init_vhost_vdpa().
This is used to prepare for the multiqueue support.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 572aed4ca2..e63a54a938 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -157,24 +157,19 @@ static NetClientInfo net_vhost_vdpa_info = {
 };
 
 static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
-                               const char *name, const char *vhostdev)
+                               const char *name, int vdpa_device_fd)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
-    int vdpa_device_fd = -1;
     int ret = 0;
     assert(name);
     nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
-    vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
-    if (vdpa_device_fd == -1) {
-        return -errno;
-    }
+
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
     if (ret) {
-        qemu_close(vdpa_device_fd);
         qemu_del_net_client(nc);
     }
     return ret;
@@ -202,6 +197,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
+    int vdpa_device_fd, ret;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -210,5 +206,16 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                           (char *)name, errp)) {
         return -1;
     }
-    return net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, opts->vhostdev);
+
+    vdpa_device_fd = qemu_open_old(opts->vhostdev, O_RDWR);
+    if (vdpa_device_fd == -1) {
+        return -errno;
+    }
+
+    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+    if (ret) {
+        qemu_close(vdpa_device_fd);
+    }
+
+    return ret;
 }
-- 
2.25.1



* [PATCH V2 11/18] vhost-vdpa: classify one time request
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (9 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 12/18] vhost-vdpa: prepare for the multiqueue support Jason Wang
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

Vhost-vdpa uses a single device model for multiqueue (queue pairs). So
we need to classify the one-time requests (e.g. SET_OWNER) and make
sure those requests are only issued once per device.

This is used for multiqueue support.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c         | 51 ++++++++++++++++++++++++++++++++--
 include/hw/virtio/vhost-vdpa.h |  1 +
 2 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 61ba313331..397f47bc11 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -265,6 +265,13 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
     vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
 }
 
+static bool vhost_vdpa_one_time_request(struct vhost_dev *dev)
+{
+    struct vhost_vdpa *v = dev->opaque;
+
+    return v->index != 0;
+}
+
 static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
 {
     struct vhost_vdpa *v;
@@ -277,6 +284,10 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
     v->listener = vhost_vdpa_memory_listener;
     v->msg_type = VHOST_IOTLB_MSG_V2;
 
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                VIRTIO_CONFIG_S_DRIVER);
 
@@ -387,6 +398,10 @@ static int vhost_vdpa_memslots_limit(struct vhost_dev *dev)
 static int vhost_vdpa_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_mem_table(dev, mem->nregions, mem->padding);
     if (trace_event_get_state_backends(TRACE_VHOST_VDPA_SET_MEM_TABLE) &&
         trace_event_get_state_backends(TRACE_VHOST_VDPA_DUMP_REGIONS)) {
@@ -410,6 +425,11 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
                                    uint64_t features)
 {
     int ret;
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_features(dev, features);
     ret = vhost_vdpa_call(dev, VHOST_SET_FEATURES, &features);
     uint8_t status = 0;
@@ -429,6 +449,10 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
         0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
     int r;
 
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
         return 0;
     }
@@ -458,6 +482,10 @@ static int vhost_vdpa_reset_device(struct vhost_dev *dev)
     int ret;
     uint8_t status = 0;
 
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     ret = vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &status);
     trace_vhost_vdpa_reset_device(dev, status);
     return ret;
@@ -545,11 +573,21 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
 {
     struct vhost_vdpa *v = dev->opaque;
     trace_vhost_vdpa_dev_start(dev, started);
+
     if (started) {
-        uint8_t status = 0;
-        memory_listener_register(&v->listener, &address_space_memory);
         vhost_vdpa_host_notifiers_init(dev);
         vhost_vdpa_set_vring_ready(dev);
+    } else {
+        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
+    }
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
+    if (started) {
+        uint8_t status = 0;
+        memory_listener_register(&v->listener, &address_space_memory);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
         vhost_vdpa_call(dev, VHOST_VDPA_GET_STATUS, &status);
 
@@ -558,7 +596,6 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         vhost_vdpa_reset_device(dev);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                    VIRTIO_CONFIG_S_DRIVER);
-        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
         memory_listener_unregister(&v->listener);
 
         return 0;
@@ -568,6 +605,10 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
 static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
                                      struct vhost_log *log)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_log_base(dev, base, log->size, log->refcnt, log->fd,
                                   log->log);
     return vhost_vdpa_call(dev, VHOST_SET_LOG_BASE, &base);
@@ -633,6 +674,10 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
 
 static int vhost_vdpa_set_owner(struct vhost_dev *dev)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_owner(dev);
     return vhost_vdpa_call(dev, VHOST_SET_OWNER, NULL);
 }
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 9188226d8b..e98e327f12 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -21,6 +21,7 @@ typedef struct VhostVDPAHostNotifier {
 
 typedef struct vhost_vdpa {
     int device_fd;
+    int index;
     uint32_t msg_type;
     MemoryListener listener;
     struct vhost_dev *dev;
-- 
2.25.1



* [PATCH V2 12/18] vhost-vdpa: prepare for the multiqueue support
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (10 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 11/18] vhost-vdpa: classify one time request Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 13/18] vhost-vdpa: let net_vhost_vdpa_init() return NetClientState * Jason Wang
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

Unlike vhost-kernel, vhost-vdpa adopts a single device multiqueue
model, so we need to use the virtqueue index directly as the vhost
virtqueue index rather than an index relative to dev->vq_index. For
example, with two queue pairs the guest's TX queue 3 must be passed to
the device as index 3, not as the relative index 1. This is a must for
multiqueue to work with vhost-vdpa.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 397f47bc11..e7e6b23108 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -495,8 +495,8 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
 {
     assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
 
-    trace_vhost_vdpa_get_vq_index(dev, idx, idx - dev->vq_index);
-    return idx - dev->vq_index;
+    trace_vhost_vdpa_get_vq_index(dev, idx, idx);
+    return idx;
 }
 
 static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
-- 
2.25.1



* [PATCH V2 13/18] vhost-vdpa: let net_vhost_vdpa_init() return NetClientState *
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (11 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 12/18] vhost-vdpa: prepare for the multiqueue support Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 14/18] net: introduce control client Jason Wang
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

This patch switches net_vhost_vdpa_init() to returning a
NetClientState *. This allows the callers to allocate multiple
NetClientStates for multiqueue support.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e63a54a938..cc11b2ec40 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -156,8 +156,10 @@ static NetClientInfo net_vhost_vdpa_info = {
         .has_ufo = vhost_vdpa_has_ufo,
 };
 
-static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
-                               const char *name, int vdpa_device_fd)
+static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
+                                           const char *device,
+                                           const char *name,
+                                           int vdpa_device_fd)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
@@ -171,8 +173,9 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
     if (ret) {
         qemu_del_net_client(nc);
+        return NULL;
     }
-    return ret;
+    return nc;
 }
 
 static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
@@ -197,7 +200,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
-    int vdpa_device_fd, ret;
+    int vdpa_device_fd;
+    NetClientState *nc;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -212,10 +216,11 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -errno;
     }
 
-    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
-    if (ret) {
+    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+    if (!nc) {
         qemu_close(vdpa_device_fd);
+        return -1;
     }
 
-    return ret;
+    return 0;
 }
-- 
2.25.1



* [PATCH V2 14/18] net: introduce control client
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (12 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 13/18] vhost-vdpa: let net_vhost_vdpa_init() return NetClientState * Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 15/18] vhost-net: control virtqueue support Jason Wang
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

This patch introduces a boolean that tells whether a NetClientState is
a datapath client or a control client, i.e. one that only accepts
control commands via the network queue.

The first user will be the control virtqueue support for vhost.
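
As a quick illustration (the caller below is hypothetical and not part
of this patch), a backend that wants a dedicated client for its control
virtqueue would allocate it with the new helper and end up with
is_datapath == false:

    NetClientState *nc;

    /* Same arguments as qemu_new_net_client(), but the resulting
     * client is marked as a control (non-datapath) client. */
    nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
                                     "vhost-vdpa", name);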

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/net/net.h |  5 +++++
 net/net.c         | 24 +++++++++++++++++++++---
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/include/net/net.h b/include/net/net.h
index 5d1508081f..4f400b8a09 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -103,6 +103,7 @@ struct NetClientState {
     int vnet_hdr_len;
     bool is_netdev;
     bool do_not_pad; /* do not pad to the minimum ethernet frame length */
+    bool is_datapath;
     QTAILQ_HEAD(, NetFilterState) filters;
 };
 
@@ -134,6 +135,10 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
                                     NetClientState *peer,
                                     const char *model,
                                     const char *name);
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+                                        NetClientState *peer,
+                                        const char *model,
+                                        const char *name);
 NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
diff --git a/net/net.c b/net/net.c
index 76bbb7c31b..fcaf9c7715 100644
--- a/net/net.c
+++ b/net/net.c
@@ -237,7 +237,8 @@ static void qemu_net_client_setup(NetClientState *nc,
                                   NetClientState *peer,
                                   const char *model,
                                   const char *name,
-                                  NetClientDestructor *destructor)
+                                  NetClientDestructor *destructor,
+                                  bool is_datapath)
 {
     nc->info = info;
     nc->model = g_strdup(model);
@@ -256,6 +257,7 @@ static void qemu_net_client_setup(NetClientState *nc,
 
     nc->incoming_queue = qemu_new_net_queue(qemu_deliver_packet_iov, nc);
     nc->destructor = destructor;
+    nc->is_datapath = is_datapath;
     QTAILQ_INIT(&nc->filters);
 }
 
@@ -270,7 +272,23 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
 
     nc = g_malloc0(info->size);
     qemu_net_client_setup(nc, info, peer, model, name,
-                          qemu_net_client_destructor);
+                          qemu_net_client_destructor, true);
+
+    return nc;
+}
+
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+                                            NetClientState *peer,
+                                            const char *model,
+                                            const char *name)
+{
+    NetClientState *nc;
+
+    assert(info->size >= sizeof(NetClientState));
+
+    nc = g_malloc0(info->size);
+    qemu_net_client_setup(nc, info, peer, model, name,
+                          qemu_net_client_destructor, false);
 
     return nc;
 }
@@ -295,7 +313,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
 
     for (i = 0; i < queues; i++) {
         qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
-                              NULL);
+                              NULL, true);
         nic->ncs[i].queue_index = i;
     }
 
-- 
2.25.1



* [PATCH V2 15/18] vhost-net: control virtqueue support
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (13 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 14/18] net: introduce control client Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 16/18] virtio-net: use "qps" instead of "queues" when possible Jason Wang
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

We assumed there was no cvq in the past; this is no longer true when
control virtqueue support is needed for vhost-user backends. So this
patch implements the control virtqueue support for vhost-net. Like the
datapath queues, the control virtqueue is also coupled with a
NetClientState. vhost_net_start/stop() are tweaked to accept the
number of datapath queue pairs plus the number of control virtqueues
used to start and stop the vhost device.
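
As an illustration of the new interface (the numbers below are only an
example), starting a device with two data queue pairs plus a control
virtqueue means calling vhost_net_start(dev, ncs, 2, 1), which manages

    nvhosts         = data_qps + cvq     = 3 vhost_net instances
    total_notifiers = data_qps * 2 + cvq = 5 guest notifiers

with the control virtqueue's NetClientState located at index
n->max_queues of the ncs array.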

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c      | 43 ++++++++++++++++++++++++++++++-----------
 hw/net/virtio-net.c     |  4 ++--
 include/net/vhost_net.h |  6 ++++--
 3 files changed, 38 insertions(+), 15 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index ef1370bd92..4294fb9fc9 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -311,11 +311,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
 }
 
 int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
-                    int total_queues)
+                    int data_qps, int cvq)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
     VirtioBusState *vbus = VIRTIO_BUS(qbus);
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+    int total_notifiers = data_qps * 2 + cvq;
+    VirtIONet *n = VIRTIO_NET(dev);
+    int nvhosts = data_qps + cvq;
     struct vhost_net *net;
     int r, e, i;
     NetClientState *peer;
@@ -325,9 +328,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         return -ENOSYS;
     }
 
-    for (i = 0; i < total_queues; i++) {
+    for (i = 0; i < nvhosts; i++) {
+
+        if (i < data_qps) {
+            peer = qemu_get_peer(ncs, i);
+        } else { /* Control Virtqueue */
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
 
-        peer = qemu_get_peer(ncs, i);
         net = get_vhost_net(peer);
         vhost_net_set_vq_index(net, i * 2);
 
@@ -340,14 +348,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         }
      }
 
-    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
+    r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
     if (r < 0) {
         error_report("Error binding guest notifier: %d", -r);
         goto err;
     }
 
-    for (i = 0; i < total_queues; i++) {
-        peer = qemu_get_peer(ncs, i);
+    for (i = 0; i < nvhosts; i++) {
+        if (i < data_qps) {
+            peer = qemu_get_peer(ncs, i);
+        } else {
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
         r = vhost_net_start_one(get_vhost_net(peer), dev);
 
         if (r < 0) {
@@ -371,7 +383,7 @@ err_start:
         peer = qemu_get_peer(ncs , i);
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
-    e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+    e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
     if (e < 0) {
         fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
         fflush(stderr);
@@ -381,18 +393,27 @@ err:
 }
 
 void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
-                    int total_queues)
+                    int data_qps, int cvq)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
     VirtioBusState *vbus = VIRTIO_BUS(qbus);
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+    VirtIONet *n = VIRTIO_NET(dev);
+    NetClientState *peer;
+    int total_notifiers = data_qps * 2 + cvq;
+    int nvhosts = data_qps + cvq;
     int i, r;
 
-    for (i = 0; i < total_queues; i++) {
-        vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
+    for (i = 0; i < nvhosts; i++) {
+        if (i < data_qps) {
+            peer = qemu_get_peer(ncs, i);
+        } else {
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
+        vhost_net_stop_one(get_vhost_net(peer), dev);
     }
 
-    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+    r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
     if (r < 0) {
         fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
         fflush(stderr);
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index bd7958b9f0..614660274c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, queues);
+        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queues);
+        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
         n->vhost_started = 0;
     }
 }
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index fba40cf695..e656e38af9 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
 uint64_t vhost_net_get_max_queues(VHostNetState *net);
 struct vhost_net *vhost_net_init(VhostNetOptions *options);
 
-int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
-void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
+int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
+                    int data_qps, int cvq);
+void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
+                    int data_qps, int cvq);
 
 void vhost_net_cleanup(VHostNetState *net);
 
-- 
2.25.1



* [PATCH V2 16/18] virtio-net: use "qps" instead of "queues" when possible
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (14 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 15/18] vhost-net: control virtqueue support Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 17/18] virtio-net: vhost control virtqueue support Jason Wang
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

Most of the time, "queues" really means queue pairs. So this patch
switches to using "qps" to avoid confusion.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c             |   6 +-
 hw/net/virtio-net.c            | 150 ++++++++++++++++-----------------
 include/hw/virtio/virtio-net.h |   4 +-
 3 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 4294fb9fc9..fe2fd7e3d5 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -333,7 +333,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_qps) {
             peer = qemu_get_peer(ncs, i);
         } else { /* Control Virtqueue */
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_qps);
         }
 
         net = get_vhost_net(peer);
@@ -358,7 +358,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_qps) {
             peer = qemu_get_peer(ncs, i);
         } else {
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_qps);
         }
         r = vhost_net_start_one(get_vhost_net(peer), dev);
 
@@ -408,7 +408,7 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_qps) {
             peer = qemu_get_peer(ncs, i);
         } else {
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_qps);
         }
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 614660274c..36bd197087 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -54,7 +54,7 @@
 #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
 #define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
 
-/* for now, only allow larger queues; with virtio-1, guest can downsize */
+/* for now, only allow larger qps; with virtio-1, guest can downsize */
 #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
 #define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
 
@@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
     int ret = 0;
     memset(&netcfg, 0 , sizeof(struct virtio_net_config));
     virtio_stw_p(vdev, &netcfg.status, n->status);
-    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
+    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_qps);
     virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
     memcpy(netcfg.mac, n->mac, ETH_ALEN);
     virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
@@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
-    int queues = n->multiqueue ? n->max_queues : 1;
+    int qps = n->multiqueue ? n->max_qps : 1;
 
     if (!get_vhost_net(nc->peer)) {
         return;
@@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         /* Any packets outstanding? Purge them to avoid touching rings
          * when vhost is running.
          */
-        for (i = 0;  i < queues; i++) {
+        for (i = 0;  i < qps; i++) {
             NetClientState *qnc = qemu_get_subqueue(n->nic, i);
 
             /* Purge both directions: TX and RX. */
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
+        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
+        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
         n->vhost_started = 0;
     }
 }
@@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
 }
 
 static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
-                                       int queues, bool enable)
+                                       int qps, bool enable)
 {
     int i;
 
-    for (i = 0; i < queues; i++) {
+    for (i = 0; i < qps; i++) {
         if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
             enable) {
             while (--i >= 0) {
@@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
 static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
-    int queues = n->multiqueue ? n->max_queues : 1;
+    int qps = n->multiqueue ? n->max_qps : 1;
 
     if (virtio_net_started(n, status)) {
         /* Before using the device, we tell the network backend about the
@@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
          * virtio-net code.
          */
         n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
-                                                            queues, true);
+                                                            qps, true);
     } else if (virtio_net_started(n, vdev->status)) {
         /* After using the device, we need to reset the network backend to
          * the default (guest native endianness), otherwise the guest may
          * lose network connectivity if it is rebooted into a different
          * endianness.
          */
-        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
+        virtio_net_set_vnet_endian(vdev, n->nic->ncs, qps, false);
     }
 }
 
@@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
     virtio_net_vnet_endian_status(n, status);
     virtio_net_vhost_status(n, status);
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         NetClientState *ncs = qemu_get_subqueue(n->nic, i);
         bool queue_started;
         q = &n->vqs[i];
 
-        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
+        if ((!n->multiqueue && i != 0) || i >= n->curr_qps) {
             queue_status = 0;
         } else {
             queue_status = status;
@@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
     n->nouni = 0;
     n->nobcast = 0;
     /* multiqueue is disabled by default */
-    n->curr_queues = 1;
+    n->curr_qps = 1;
     timer_del(n->announce_timer.tm);
     n->announce_timer.round = 0;
     n->status &= ~VIRTIO_NET_S_ANNOUNCE;
@@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
     memset(n->vlans, 0, MAX_VLAN >> 3);
 
     /* Flush any async TX */
-    for (i = 0;  i < n->max_queues; i++) {
+    for (i = 0;  i < n->max_qps; i++) {
         NetClientState *nc = qemu_get_subqueue(n->nic, i);
 
         if (nc->peer) {
@@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
             sizeof(struct virtio_net_hdr);
     }
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         nc = qemu_get_subqueue(n->nic, i);
 
         if (peer_has_vnet_hdr(n) &&
@@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
         return 0;
     }
 
-    if (n->max_queues == 1) {
+    if (n->max_qps == 1) {
         return 0;
     }
 
@@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
     return tap_disable(nc->peer);
 }
 
-static void virtio_net_set_queues(VirtIONet *n)
+static void virtio_net_set_qps(VirtIONet *n)
 {
     int i;
     int r;
@@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
         return;
     }
 
-    for (i = 0; i < n->max_queues; i++) {
-        if (i < n->curr_queues) {
+    for (i = 0; i < n->max_qps; i++) {
+        if (i < n->curr_qps) {
             r = peer_attach(n, i);
             assert(!r);
         } else {
@@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
         virtio_net_apply_guest_offloads(n);
     }
 
-    for (i = 0;  i < n->max_queues; i++) {
+    for (i = 0;  i < n->max_qps; i++) {
         NetClientState *nc = qemu_get_subqueue(n->nic, i);
 
         if (!get_vhost_net(nc->peer)) {
@@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     struct virtio_net_rss_config cfg;
     size_t s, offset = 0, size_get;
-    uint16_t queues, i;
+    uint16_t qps, i;
     struct {
         uint16_t us;
         uint8_t b;
@@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     }
     n->rss_data.default_queue = do_rss ?
         virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
-    if (n->rss_data.default_queue >= n->max_queues) {
+    if (n->rss_data.default_queue >= n->max_qps) {
         err_msg = "Invalid default queue";
         err_value = n->rss_data.default_queue;
         goto error;
@@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     size_get = sizeof(temp);
     s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
     if (s != size_get) {
-        err_msg = "Can't get queues";
+        err_msg = "Can't get qps";
         err_value = (uint32_t)s;
         goto error;
     }
-    queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
-    if (queues == 0 || queues > n->max_queues) {
-        err_msg = "Invalid number of queues";
-        err_value = queues;
+    qps = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_qps;
+    if (qps == 0 || qps > n->max_qps) {
+        err_msg = "Invalid number of qps";
+        err_value = qps;
         goto error;
     }
     if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
@@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     }
     if (!temp.b && !n->rss_data.hash_types) {
         virtio_net_disable_rss(n);
-        return queues;
+        return qps;
     }
     offset += size_get;
     size_get = temp.b;
@@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     trace_virtio_net_rss_enable(n->rss_data.hash_types,
                                 n->rss_data.indirections_len,
                                 temp.b);
-    return queues;
+    return qps;
 error:
     trace_virtio_net_rss_error(err_msg, err_value);
     virtio_net_disable_rss(n);
@@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
                                 struct iovec *iov, unsigned int iov_cnt)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
-    uint16_t queues;
+    uint16_t qps;
 
     virtio_net_disable_rss(n);
     if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
-        queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
-        return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
+        qps = virtio_net_handle_rss(n, iov, iov_cnt, false);
+        return qps ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
     }
     if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
-        queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
+        qps = virtio_net_handle_rss(n, iov, iov_cnt, true);
     } else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
         struct virtio_net_ctrl_mq mq;
         size_t s;
@@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         if (s != sizeof(mq)) {
             return VIRTIO_NET_ERR;
         }
-        queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
+        qps = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
 
     } else {
         return VIRTIO_NET_ERR;
     }
 
-    if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
-        queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
-        queues > n->max_queues ||
+    if (qps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
+        qps > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
+        qps > n->max_qps ||
         !n->multiqueue) {
         return VIRTIO_NET_ERR;
     }
 
-    n->curr_queues = queues;
-    /* stop the backend before changing the number of queues to avoid handling a
+    n->curr_qps = qps;
+    /* stop the backend before changing the number of qps to avoid handling a
      * disabled queue */
     virtio_net_set_status(vdev, vdev->status);
-    virtio_net_set_queues(n);
+    virtio_net_set_qps(n);
 
     return VIRTIO_NET_OK;
 }
@@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
         return false;
     }
 
-    if (nc->queue_index >= n->curr_queues) {
+    if (nc->queue_index >= n->curr_qps) {
         return false;
     }
 
@@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
     virtio_del_queue(vdev, index * 2 + 1);
 }
 
-static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
+static void virtio_net_change_num_qps(VirtIONet *n, int new_max_qps)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     int old_num_queues = virtio_get_num_queues(vdev);
-    int new_num_queues = new_max_queues * 2 + 1;
+    int new_num_queues = new_max_qps * 2 + 1;
     int i;
 
     assert(old_num_queues >= 3);
@@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
 
 static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
 {
-    int max = multiqueue ? n->max_queues : 1;
+    int max = multiqueue ? n->max_qps : 1;
 
     n->multiqueue = multiqueue;
-    virtio_net_change_num_queues(n, max);
+    virtio_net_change_num_qps(n, max);
 
-    virtio_net_set_queues(n);
+    virtio_net_set_qps(n);
 }
 
 static int virtio_net_post_load_device(void *opaque, int version_id)
@@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
      */
     n->saved_guest_offloads = n->curr_guest_offloads;
 
-    virtio_net_set_queues(n);
+    virtio_net_set_qps(n);
 
     /* Find the first multicast entry in the saved MAC filter */
     for (i = 0; i < n->mac_table.in_use; i++) {
@@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
     /* nc.link_down can't be migrated, so infer link_down according
      * to link status bit in n->status */
     link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         qemu_get_subqueue(n->nic, i)->link_down = link_down;
     }
 
@@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
    },
 };
 
-static bool max_queues_gt_1(void *opaque, int version_id)
+static bool max_qps_gt_1(void *opaque, int version_id)
 {
-    return VIRTIO_NET(opaque)->max_queues > 1;
+    return VIRTIO_NET(opaque)->max_qps > 1;
 }
 
 static bool has_ctrl_guest_offloads(void *opaque, int version_id)
@@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
 struct VirtIONetMigTmp {
     VirtIONet      *parent;
     VirtIONetQueue *vqs_1;
-    uint16_t        curr_queues_1;
+    uint16_t        curr_qps_1;
     uint8_t         has_ufo;
     uint32_t        has_vnet_hdr;
 };
 
 /* The 2nd and subsequent tx_waiting flags are loaded later than
- * the 1st entry in the queues and only if there's more than one
+ * the 1st entry in the qps and only if there's more than one
  * entry.  We use the tmp mechanism to calculate a temporary
  * pointer and count and also validate the count.
  */
@@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
     struct VirtIONetMigTmp *tmp = opaque;
 
     tmp->vqs_1 = tmp->parent->vqs + 1;
-    tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
-    if (tmp->parent->curr_queues == 0) {
-        tmp->curr_queues_1 = 0;
+    tmp->curr_qps_1 = tmp->parent->curr_qps - 1;
+    if (tmp->parent->curr_qps == 0) {
+        tmp->curr_qps_1 = 0;
     }
 
     return 0;
@@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
     /* Reuse the pointer setup from save */
     virtio_net_tx_waiting_pre_save(opaque);
 
-    if (tmp->parent->curr_queues > tmp->parent->max_queues) {
-        error_report("virtio-net: curr_queues %x > max_queues %x",
-            tmp->parent->curr_queues, tmp->parent->max_queues);
+    if (tmp->parent->curr_qps > tmp->parent->max_qps) {
+        error_report("virtio-net: curr_qps %x > max_qps %x",
+            tmp->parent->curr_qps, tmp->parent->max_qps);
 
         return -EINVAL;
     }
@@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
     .pre_save  = virtio_net_tx_waiting_pre_save,
     .fields    = (VMStateField[]) {
         VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
-                                     curr_queues_1,
+                                     curr_qps_1,
                                      vmstate_virtio_net_queue_tx_waiting,
                                      struct VirtIONetQueue),
         VMSTATE_END_OF_LIST()
@@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
         VMSTATE_UINT8(nobcast, VirtIONet),
         VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
                          vmstate_virtio_net_has_ufo),
-        VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
+        VMSTATE_SINGLE_TEST(max_qps, VirtIONet, max_qps_gt_1, 0,
                             vmstate_info_uint16_equal, uint16_t),
-        VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
+        VMSTATE_UINT16_TEST(curr_qps, VirtIONet, max_qps_gt_1),
         VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
                          vmstate_virtio_net_tx_waiting),
         VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
@@ -3367,16 +3367,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    n->max_queues = MAX(n->nic_conf.peers.queues, 1);
-    if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
-        error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
+    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
+    if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
+        error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
                    "must be a positive integer less than %d.",
-                   n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
+                   n->max_qps, (VIRTIO_QUEUE_MAX - 1) / 2);
         virtio_cleanup(vdev);
         return;
     }
-    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
-    n->curr_queues = 1;
+    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_qps);
+    n->curr_qps = 1;
     n->tx_timeout = n->net_conf.txtimer;
 
     if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
@@ -3390,7 +3390,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
                                     n->net_conf.tx_queue_size);
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         virtio_net_add_queue(n, i);
     }
 
@@ -3414,13 +3414,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
                               object_get_typename(OBJECT(dev)), dev->id, n);
     }
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         n->nic->ncs[i].do_not_pad = true;
     }
 
     peer_test_vnet_hdr(n);
     if (peer_has_vnet_hdr(n)) {
-        for (i = 0; i < n->max_queues; i++) {
+        for (i = 0; i < n->max_qps; i++) {
             qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
         }
         n->host_hdr_len = sizeof(struct virtio_net_hdr);
@@ -3462,7 +3462,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VirtIONet *n = VIRTIO_NET(dev);
-    int i, max_queues;
+    int i, max_qps;
 
     if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
         virtio_net_unload_ebpf(n);
@@ -3484,12 +3484,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
         remove_migration_state_change_notifier(&n->migration_state);
     }
 
-    max_queues = n->multiqueue ? n->max_queues : 1;
-    for (i = 0; i < max_queues; i++) {
+    max_qps = n->multiqueue ? n->max_qps : 1;
+    for (i = 0; i < max_qps; i++) {
         virtio_net_del_queue(n, i);
     }
     /* delete also control vq */
-    virtio_del_queue(vdev, max_queues * 2);
+    virtio_del_queue(vdev, max_qps * 2);
     qemu_announce_timer_del(&n->announce_timer, false);
     g_free(n->vqs);
     qemu_del_nic(n->nic);
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 824a69c23f..a9b6dc252e 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -194,8 +194,8 @@ struct VirtIONet {
     NICConf nic_conf;
     DeviceState *qdev;
     int multiqueue;
-    uint16_t max_queues;
-    uint16_t curr_queues;
+    uint16_t max_qps;
+    uint16_t curr_qps;
     size_t config_size;
     char *netclient_name;
     char *netclient_type;
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH V2 17/18] virtio-net: vhost control virtqueue support
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (15 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 16/18] virito-net: use "qps" instead of "queues" when possible Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-06  8:27 ` [PATCH V2 18/18] vhost-vdpa: multiqueue support Jason Wang
  2021-07-12  5:44 ` [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

This patch implements control virtqueue support for vhost. This
requires virtio-net to tell the datapath queue pairs and the control
virtqueue apart via is_datapath, and to pass the number of each of
those two types of virtqueues to vhost_net_start()/vhost_net_stop().
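
For reference, here is a minimal sketch of the virtqueue layout this
relies on (the helper names are invented for illustration and do not
exist in the tree): each datapath queue pair owns two virtqueues, and
the control virtqueue, when present, is the last one, which is why the
two counts have to be passed separately:

    /* Sketch only: data queue pair i uses vq 2*i (rx) and 2*i + 1 (tx). */
    static void data_vq_indexes(int qp_index, int *rx, int *tx)
    {
        *rx = qp_index * 2;
        *tx = qp_index * 2 + 1;
    }

    /* Sketch only: the control virtqueue sits after all data virtqueues. */
    static int cvq_index(int max_qps)
    {
        return max_qps * 2;
    }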

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/virtio-net.c            | 21 ++++++++++++++++++---
 include/hw/virtio/virtio-net.h |  1 +
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 36bd197087..f003687579 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -244,6 +244,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
     int qps = n->multiqueue ? n->max_qps : 1;
+    int cvq = n->max_ncs - n->max_qps;
 
     if (!get_vhost_net(nc->peer)) {
         return;
@@ -285,14 +286,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
+        r = vhost_net_start(vdev, n->nic->ncs, qps, cvq);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
+        vhost_net_stop(vdev, n->nic->ncs, qps, cvq);
         n->vhost_started = 0;
     }
 }
@@ -3367,7 +3368,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
+    n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
+
+    /*
+     * Figure out the datapath queue pairs since the backend could
+     * provide control queue via peers as well.
+     */
+    if (n->nic_conf.peers.queues) {
+        for (i = 0; i < n->max_ncs; i++) {
+            if (n->nic_conf.peers.ncs[i]->is_datapath) {
+                ++n->max_qps;
+            }
+        }
+    }
+    n->max_qps = MAX(n->max_qps, 1);
+
     if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
         error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
                    "must be a positive integer less than %d.",
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index a9b6dc252e..ed4659c189 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -194,6 +194,7 @@ struct VirtIONet {
     NICConf nic_conf;
     DeviceState *qdev;
     int multiqueue;
+    uint16_t max_ncs;
     uint16_t max_qps;
     uint16_t curr_qps;
     size_t config_size;
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH V2 18/18] vhost-vdpa: multiqueue support
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (16 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 17/18] virtio-net: vhost control virtqueue support Jason Wang
@ 2021-07-06  8:27 ` Jason Wang
  2021-07-12  5:44 ` [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
  18 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-06  8:27 UTC (permalink / raw)
  To: qemu-devel, mst, jasowang; +Cc: eperezma, elic, lingshan.zhu, lulu

This patch implements multiqueue support for vhost-vdpa. This is done
simply by reading the number of queue pairs from the config space and
initializing the datapath net clients plus the control path net client.
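
To make the discovery step easier to follow, here is a condensed,
self-contained userspace sketch of what vhost_vdpa_get_max_qps() below
does (the helper name, the error handling and the 1ULL-wide feature
masks are choices made for this sketch, not taken from the patch):

    #include <endian.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>
    #include <linux/virtio_net.h>

    /* Sketch only: query qp count and cvq presence from a vhost-vDPA fd. */
    static int query_max_qps(int fd, int *has_cvq)
    {
        uint64_t features;
        uint16_t max_qps = 1;
        struct vhost_vdpa_config *cfg;

        if (ioctl(fd, VHOST_GET_FEATURES, &features)) {
            return -1;
        }
        *has_cvq = !!(features & (1ULL << VIRTIO_NET_F_CTRL_VQ));

        if (features & (1ULL << VIRTIO_NET_F_MQ)) {
            cfg = calloc(1, sizeof(*cfg) + sizeof(max_qps));
            if (!cfg) {
                return -1;
            }
            cfg->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
            cfg->len = sizeof(max_qps);
            if (ioctl(fd, VHOST_VDPA_GET_CONFIG, cfg)) {
                free(cfg);
                return -1;
            }
            /* the config space of a VERSION_1 device is little endian */
            memcpy(&max_qps, cfg->buf, sizeof(max_qps));
            max_qps = le16toh(max_qps);
            free(cfg);
        }
        return max_qps;
    }

With a parent that advertises VIRTIO_NET_F_MQ and VIRTIO_NET_F_CTRL_VQ,
the guest-visible setup is then driven by the usual options, e.g.
something like "-netdev vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0
-device virtio-net-pci,netdev=vdpa0,mq=on" (option names assumed from
the existing vhost-vdpa netdev; adjust as needed).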

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 104 +++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 95 insertions(+), 9 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index cc11b2ec40..01a667deb9 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -18,6 +18,7 @@
 #include "qemu/error-report.h"
 #include "qemu/option.h"
 #include "qapi/error.h"
+#include <linux/vhost.h>
 #include <sys/ioctl.h>
 #include <err.h>
 #include "standard-headers/linux/virtio_net.h"
@@ -52,6 +53,14 @@ const int vdpa_feature_bits[] = {
     VIRTIO_NET_F_HOST_UFO,
     VIRTIO_NET_F_MRG_RXBUF,
     VIRTIO_NET_F_MTU,
+    VIRTIO_NET_F_CTRL_RX,
+    VIRTIO_NET_F_CTRL_RX_EXTRA,
+    VIRTIO_NET_F_CTRL_VLAN,
+    VIRTIO_NET_F_GUEST_ANNOUNCE,
+    VIRTIO_NET_F_CTRL_MAC_ADDR,
+    VIRTIO_NET_F_RSS,
+    VIRTIO_NET_F_MQ,
+    VIRTIO_NET_F_CTRL_VQ,
     VIRTIO_F_IOMMU_PLATFORM,
     VIRTIO_F_RING_PACKED,
     VIRTIO_NET_F_RSS,
@@ -82,7 +91,8 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
     return ret;
 }
 
-static int vhost_vdpa_add(NetClientState *ncs, void *be)
+static int vhost_vdpa_add(NetClientState *ncs, void *be, int qp_index,
+                          int nvqs)
 {
     VhostNetOptions options;
     struct vhost_net *net = NULL;
@@ -95,7 +105,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
     options.net_backend = ncs;
     options.opaque      = be;
     options.busyloop_timeout = 0;
-    options.nvqs = 2;
+    options.nvqs = nvqs;
 
     net = vhost_net_init(&options);
     if (!net) {
@@ -159,18 +169,28 @@ static NetClientInfo net_vhost_vdpa_info = {
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            const char *device,
                                            const char *name,
-                                           int vdpa_device_fd)
+                                           int vdpa_device_fd,
+                                           int qp_index,
+                                           int nvqs,
+                                           bool is_datapath)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
     int ret = 0;
     assert(name);
-    nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
+    if (is_datapath) {
+        nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
+                                 name);
+    } else {
+        nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
+                                         device, name);
+    }
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
 
     s->vhost_vdpa.device_fd = vdpa_device_fd;
-    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
+    s->vhost_vdpa.index = qp_index;
+    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, qp_index, nvqs);
     if (ret) {
         qemu_del_net_client(nc);
         return NULL;
@@ -196,12 +216,52 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
     return 0;
 }
 
+static int vhost_vdpa_get_max_qps(int fd, int *has_cvq, Error **errp)
+{
+    unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
+    struct vhost_vdpa_config *config;
+    __virtio16 *max_qps;
+    uint64_t features;
+    int ret;
+
+    ret = ioctl(fd, VHOST_GET_FEATURES, &features);
+    if (ret) {
+        error_setg(errp, "Fail to query features from vhost-vDPA device");
+        return ret;
+    }
+
+    if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
+        *has_cvq = 1;
+    } else {
+        *has_cvq = 0;
+    }
+
+    if (features & (1 << VIRTIO_NET_F_MQ)) {
+        config = g_malloc0(config_size + sizeof(*max_qps));
+        config->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
+        config->len = sizeof(*max_qps);
+
+        ret = ioctl(fd, VHOST_VDPA_GET_CONFIG, config);
+        if (ret) {
+            error_setg(errp, "Fail to get config from vhost-vDPA device");
+            return -ret;
+        }
+
+        max_qps = (__virtio16 *)&config->buf;
+
+        return lduw_le_p(max_qps);
+    }
+
+    return 1;
+}
+
 int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
     int vdpa_device_fd;
-    NetClientState *nc;
+    NetClientState **ncs, *nc;
+    int qps, i, has_cvq = 0;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -216,11 +276,37 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -errno;
     }
 
-    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
-    if (!nc) {
+    qps = vhost_vdpa_get_max_qps(vdpa_device_fd, &has_cvq, errp);
+    if (qps < 0) {
         qemu_close(vdpa_device_fd);
-        return -1;
+        return qps;
+    }
+
+    ncs = g_malloc0(sizeof(*ncs) * qps);
+
+    for (i = 0; i < qps; i++) {
+        ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+                                     vdpa_device_fd, i, 2, true);
+        if (!ncs[i])
+            goto err;
+    }
+
+    if (has_cvq) {
+        nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+                                 vdpa_device_fd, i, 1, false);
+        if (!nc)
+            goto err;
     }
 
+    g_free(ncs);
     return 0;
+
+err:
+    if (i) {
+        qemu_del_net_client(ncs[0]);
+    }
+    qemu_close(vdpa_device_fd);
+    g_free(ncs);
+
+    return -1;
 }
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH V2 00/18] vhost-vDPA multiqueue
  2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
                   ` (17 preceding siblings ...)
  2021-07-06  8:27 ` [PATCH V2 18/18] vhost-vdpa: multiqueue support Jason Wang
@ 2021-07-12  5:44 ` Jason Wang
  2021-07-12 13:15   ` Michael S. Tsirkin
  2021-07-15  4:24   ` Jason Wang
  18 siblings, 2 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-12  5:44 UTC (permalink / raw)
  To: qemu-devel, mst; +Cc: eperezma, elic, lingshan.zhu, lulu


On 2021/7/6 4:26 PM, Jason Wang wrote:
> Hi All:
>
> This patch implements the multiqueue support for vhost-vDPA. The most
> important requirement the control virtqueue support. The virtio-net
> and vhost-net core are tweak to support control virtqueue as if what
> data queue pairs are done: a dedicated vhost_net device which is
> coupled with the NetClientState is intrdouced so most of the existing
> vhost codes could be reused with minor changes. With the control
> virtqueue, vhost-vDPA are extend to support creating and destroying
> multiqueue queue pairs plus the control virtqueue.
>
> Tests are done via the vp_vdpa driver in L1 guest plus vdpa simulator
> on L0.
>
> Please reivew.


If no objection, I will queue this for 6.1.

Thanks


>
> Changes since V1:
>
> - validating all features that depends on ctrl vq
> - typo fixes and commit log tweaks
> - fix build errors because max_qps is used before it is introduced
>
> Thanks
>
> Jason Wang (18):
>    vhost_net: remove the meaningless assignment in vhost_net_start_one()
>    vhost: use unsigned int for nvqs
>    vhost_net: do not assume nvqs is always 2
>    vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
>    vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
>    vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
>    vhost-vdpa: tweak the error label in vhost_vdpa_add()
>    vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
>    vhost-vdpa: remove the unncessary queue_index assignment
>    vhost-vdpa: open device fd in net_init_vhost_vdpa()
>    vhost-vdpa: classify one time request
>    vhost-vdpa: prepare for the multiqueue support
>    vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
>    net: introduce control client
>    vhost-net: control virtqueue support
>    virito-net: use "qps" instead of "queues" when possible
>    virtio-net: vhost control virtqueue support
>    vhost-vdpa: multiqueue support
>
>   hw/net/vhost_net.c             |  48 +++++++---
>   hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
>   hw/virtio/vhost-vdpa.c         |  55 ++++++++++-
>   include/hw/virtio/vhost-vdpa.h |   1 +
>   include/hw/virtio/vhost.h      |   2 +-
>   include/hw/virtio/virtio-net.h |   5 +-
>   include/net/net.h              |   5 +
>   include/net/vhost_net.h        |   7 +-
>   net/net.c                      |  24 ++++-
>   net/tap.c                      |   1 +
>   net/vhost-user.c               |   1 +
>   net/vhost-vdpa.c               | 156 ++++++++++++++++++++++++-------
>   12 files changed, 332 insertions(+), 138 deletions(-)
>



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH V2 00/18] vhost-vDPA multiqueue
  2021-07-12  5:44 ` [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
@ 2021-07-12 13:15   ` Michael S. Tsirkin
       [not found]     ` <CACGkMEs_sNOqdsDvpMR+Mx7TXY2wW8p_NVALvHLPgeAsiWNTGA@mail.gmail.com>
  2021-07-15  4:24   ` Jason Wang
  1 sibling, 1 reply; 23+ messages in thread
From: Michael S. Tsirkin @ 2021-07-12 13:15 UTC (permalink / raw)
  To: Jason Wang; +Cc: eperezma, elic, lulu, qemu-devel, lingshan.zhu

On Mon, Jul 12, 2021 at 01:44:45PM +0800, Jason Wang wrote:
> 
> On 2021/7/6 4:26 PM, Jason Wang wrote:
> > Hi All:
> > 
> > This patch implements the multiqueue support for vhost-vDPA. The most
> > important requirement the control virtqueue support. The virtio-net
> > and vhost-net core are tweak to support control virtqueue as if what
> > data queue pairs are done: a dedicated vhost_net device which is
> > coupled with the NetClientState is intrdouced so most of the existing
> > vhost codes could be reused with minor changes. With the control
> > virtqueue, vhost-vDPA are extend to support creating and destroying
> > multiqueue queue pairs plus the control virtqueue.
> > 
> > Tests are done via the vp_vdpa driver in L1 guest plus vdpa simulator
> > on L0.
> > 
> > Please reivew.
> 
> 
> If no objection, I will queue this for 6.1.
> 
> Thanks


Just to make sure I understand, this basically works by
passing the cvq through to the guest right?
Giving up on maintaining the state in qemu.

> 
> > 
> > Changes since V1:
> > 
> > - validating all features that depends on ctrl vq
> > - typo fixes and commit log tweaks
> > - fix build errors because max_qps is used before it is introduced
> > 
> > Thanks
> > 
> > Jason Wang (18):
> >    vhost_net: remove the meaningless assignment in vhost_net_start_one()
> >    vhost: use unsigned int for nvqs
> >    vhost_net: do not assume nvqs is always 2
> >    vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
> >    vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
> >    vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
> >    vhost-vdpa: tweak the error label in vhost_vdpa_add()
> >    vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
> >    vhost-vdpa: remove the unncessary queue_index assignment
> >    vhost-vdpa: open device fd in net_init_vhost_vdpa()
> >    vhost-vdpa: classify one time request
> >    vhost-vdpa: prepare for the multiqueue support
> >    vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
> >    net: introduce control client
> >    vhost-net: control virtqueue support
> >    virito-net: use "qps" instead of "queues" when possible
> >    virtio-net: vhost control virtqueue support
> >    vhost-vdpa: multiqueue support
> > 
> >   hw/net/vhost_net.c             |  48 +++++++---
> >   hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
> >   hw/virtio/vhost-vdpa.c         |  55 ++++++++++-
> >   include/hw/virtio/vhost-vdpa.h |   1 +
> >   include/hw/virtio/vhost.h      |   2 +-
> >   include/hw/virtio/virtio-net.h |   5 +-
> >   include/net/net.h              |   5 +
> >   include/net/vhost_net.h        |   7 +-
> >   net/net.c                      |  24 ++++-
> >   net/tap.c                      |   1 +
> >   net/vhost-user.c               |   1 +
> >   net/vhost-vdpa.c               | 156 ++++++++++++++++++++++++-------
> >   12 files changed, 332 insertions(+), 138 deletions(-)
> > 



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH V2 00/18] vhost-vDPA multiqueue
       [not found]       ` <20210713114825-mutt-send-email-mst@kernel.org>
@ 2021-07-14  2:00         ` Jason Wang
  0 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-14  2:00 UTC (permalink / raw)
  To: Michael S. Tsirkin, QEMU Developers, Zhu, Lingshan, Cindy Lu,
	Eugenio Perez Martin


On 2021/7/13 11:53 PM, Michael S. Tsirkin wrote:
> On Tue, Jul 13, 2021 at 10:34:50AM +0800, Jason Wang wrote:
>> On Mon, Jul 12, 2021 at 9:15 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>>> On Mon, Jul 12, 2021 at 01:44:45PM +0800, Jason Wang wrote:
>>>> On 2021/7/6 4:26 PM, Jason Wang wrote:
>>>>> Hi All:
>>>>>
>>>>> This patch implements the multiqueue support for vhost-vDPA. The most
>>>>> important requirement the control virtqueue support. The virtio-net
>>>>> and vhost-net core are tweak to support control virtqueue as if what
>>>>> data queue pairs are done: a dedicated vhost_net device which is
>>>>> coupled with the NetClientState is intrdouced so most of the existing
>>>>> vhost codes could be reused with minor changes. With the control
>>>>> virtqueue, vhost-vDPA are extend to support creating and destroying
>>>>> multiqueue queue pairs plus the control virtqueue.
>>>>>
>>>>> Tests are done via the vp_vdpa driver in L1 guest plus vdpa simulator
>>>>> on L0.
>>>>>
>>>>> Please reivew.
>>>>
>>>> If no objection, I will queue this for 6.1.
>>>>
>>>> Thanks
>>>
>>> Just to make sure I understand, this basically works by
>>> passing the cvq through to the guest right?
>>> Giving up on maintaining the state in qemu.
>> Yes, if I understand correctly. This is the conclusion since our last
>> discussion.
>>
>> We can handle migration by using shadow virtqueue on top (depends on
>> the Eugenio's work), and multiple IOTLB support on the vhost-vDPA.
>>
>> Thanks
> I still think it's wrong to force userspace to use shadow vq or multiple
> IOTLB. These should be implementation detail.


Sticking to a virtqueue interface doesn't mean we need to force the
vendor to implement a hardware control virtqueue. See below.


>
> Short term I'm inclined to say just switch to userspace emulation
> or to vhost for the duration of migration.
> Long term I think we should push commands to the kernel and have it
> pass them to the PF.


So the issues are the following; I think we've discussed them several
times, but it's time to settle them now:

1) There's no guarantee that the control virtqueue is implemented in the PF
2) Something like pushing commands will bring extra issues:
2.1) duplicating all the existing control virtqueue commands via another uAPI
2.2) no asynchronous support
3) it can't work for virtio_vdpa
4) it brings extra complications for nested virtualization

If we manage to overcome 2.1 and 2.2, it's just a re-invention of the
control virtqueue.


>
> So it worries me a bit that we are pushing this specific way into QEMU.
> If you are sure it won't push other vendors in this direction and
> we'll be able to back out later then ok, I won't nack it.


Let me clarify: control virtqueue + multiple IOTLB is just the uAPI, not
the implementation. The parent/vendor is free to implement those
semantics in whatever way suits them:

1) It gives a consistent (and re-usable) uAPI that works for all kinds
of control virtqueue or event virtqueue

2) It fits all kinds of hardware implementations:

2.1) Hardware has no control virtqueue and uses registers instead. The
parent just decodes the cvq commands and translates them into register
writes (a rough sketch of this case follows below).
2.2) Hardware has no control virtqueue and uses another device (e.g. the
PF) to implement the semantics. The parent just decodes the cvq commands
and sends them to the device that implements the semantics (the PF).
2.3) Hardware does have a control virtqueue with transport-specific ASID
support. The parent just assigns a different PASID to the cvq and lets
userspace use that cvq directly.
2.4) Hardware does have a control virtqueue with device-specific ASID
support. The parent just assigns a different device-specific ASID and
lets userspace use that cvq directly.

The above four should cover all the vendor cases that I know of; at
least 2.1 and 2.4 are already supported by some vendors, and some
vendors plan to support 2.3.
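
As a concrete illustration of 2.1, here is a rough kernel-side sketch of
how a parent with no hardware cvq could decode the standard command
layout; everything except the virtio_net_ctrl_* definitions is
hypothetical (my_dev, my_writew() and MY_REG_NUM_QP stand in for
vendor-specific pieces), and a little-endian (VERSION_1) device is
assumed:

    #include <linux/types.h>
    #include <asm/byteorder.h>
    #include <linux/virtio_net.h>

    struct my_dev;                                      /* hypothetical vendor device */
    void my_writew(struct my_dev *d, u32 reg, u16 val); /* hypothetical MMIO helper */
    #define MY_REG_NUM_QP 0x40                          /* hypothetical register */

    static u8 my_handle_cvq_cmd(struct my_dev *d,
                                const struct virtio_net_ctrl_hdr *hdr,
                                const void *data, size_t len)
    {
        const struct virtio_net_ctrl_mq *mq = data;

        if (hdr->class != VIRTIO_NET_CTRL_MQ ||
            hdr->cmd != VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET ||
            len < sizeof(*mq)) {
            return VIRTIO_NET_ERR;
        }

        /* translate the cvq command into a vendor register write */
        my_writew(d, MY_REG_NUM_QP, le16_to_cpu(mq->virtqueue_pairs));
        return VIRTIO_NET_OK;
    }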

Thanks


>
>>>>> Changes since V1:
>>>>>
>>>>> - validating all features that depends on ctrl vq
>>>>> - typo fixes and commit log tweaks
>>>>> - fix build errors because max_qps is used before it is introduced
>>>>>
>>>>> Thanks
>>>>>
>>>>> Jason Wang (18):
>>>>>     vhost_net: remove the meaningless assignment in vhost_net_start_one()
>>>>>     vhost: use unsigned int for nvqs
>>>>>     vhost_net: do not assume nvqs is always 2
>>>>>     vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
>>>>>     vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
>>>>>     vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
>>>>>     vhost-vdpa: tweak the error label in vhost_vdpa_add()
>>>>>     vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
>>>>>     vhost-vdpa: remove the unncessary queue_index assignment
>>>>>     vhost-vdpa: open device fd in net_init_vhost_vdpa()
>>>>>     vhost-vdpa: classify one time request
>>>>>     vhost-vdpa: prepare for the multiqueue support
>>>>>     vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
>>>>>     net: introduce control client
>>>>>     vhost-net: control virtqueue support
>>>>>     virito-net: use "qps" instead of "queues" when possible
>>>>>     virtio-net: vhost control virtqueue support
>>>>>     vhost-vdpa: multiqueue support
>>>>>
>>>>>    hw/net/vhost_net.c             |  48 +++++++---
>>>>>    hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
>>>>>    hw/virtio/vhost-vdpa.c         |  55 ++++++++++-
>>>>>    include/hw/virtio/vhost-vdpa.h |   1 +
>>>>>    include/hw/virtio/vhost.h      |   2 +-
>>>>>    include/hw/virtio/virtio-net.h |   5 +-
>>>>>    include/net/net.h              |   5 +
>>>>>    include/net/vhost_net.h        |   7 +-
>>>>>    net/net.c                      |  24 ++++-
>>>>>    net/tap.c                      |   1 +
>>>>>    net/vhost-user.c               |   1 +
>>>>>    net/vhost-vdpa.c               | 156 ++++++++++++++++++++++++-------
>>>>>    12 files changed, 332 insertions(+), 138 deletions(-)
>>>>>



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH V2 00/18] vhost-vDPA multiqueue
  2021-07-12  5:44 ` [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
  2021-07-12 13:15   ` Michael S. Tsirkin
@ 2021-07-15  4:24   ` Jason Wang
  1 sibling, 0 replies; 23+ messages in thread
From: Jason Wang @ 2021-07-15  4:24 UTC (permalink / raw)
  To: qemu-devel, mst; +Cc: eperezma, elic, lingshan.zhu, lulu


On 2021/7/12 1:44 PM, Jason Wang wrote:
>
>> On 2021/7/6 4:26 PM, Jason Wang wrote:
>> Hi All:
>>
>> This patch implements the multiqueue support for vhost-vDPA. The most
>> important requirement the control virtqueue support. The virtio-net
>> and vhost-net core are tweak to support control virtqueue as if what
>> data queue pairs are done: a dedicated vhost_net device which is
>> coupled with the NetClientState is intrdouced so most of the existing
>> vhost codes could be reused with minor changes. With the control
>> virtqueue, vhost-vDPA are extend to support creating and destroying
>> multiqueue queue pairs plus the control virtqueue.
>>
>> Tests are done via the vp_vdpa driver in L1 guest plus vdpa simulator
>> on L0.
>>
>> Please reivew.
>
>
> If no objection, I will queue this for 6.1.


Hi Michael:

So we missed the soft freeze. I'd like to know whether the series looks
fine from your side and whether you'd like to merge it (for 6.2,
probably?).

Thanks



^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread

Thread overview: 23+ messages
2021-07-06  8:26 [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
2021-07-06  8:27 ` [PATCH V2 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
2021-07-06  8:27 ` [PATCH V2 02/18] vhost: use unsigned int for nvqs Jason Wang
2021-07-06  8:27 ` [PATCH V2 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
2021-07-06  8:27 ` [PATCH V2 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
2021-07-06  8:27 ` [PATCH V2 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
2021-07-06  8:27 ` [PATCH V2 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
2021-07-06  8:27 ` [PATCH V2 07/18] vhost-vdpa: tweak the error label " Jason Wang
2021-07-06  8:27 ` [PATCH V2 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
2021-07-06  8:27 ` [PATCH V2 09/18] vhost-vdpa: remove the unncessary queue_index assignment Jason Wang
2021-07-06  8:27 ` [PATCH V2 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
2021-07-06  8:27 ` [PATCH V2 11/18] vhost-vdpa: classify one time request Jason Wang
2021-07-06  8:27 ` [PATCH V2 12/18] vhost-vdpa: prepare for the multiqueue support Jason Wang
2021-07-06  8:27 ` [PATCH V2 13/18] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
2021-07-06  8:27 ` [PATCH V2 14/18] net: introduce control client Jason Wang
2021-07-06  8:27 ` [PATCH V2 15/18] vhost-net: control virtqueue support Jason Wang
2021-07-06  8:27 ` [PATCH V2 16/18] virito-net: use "qps" instead of "queues" when possible Jason Wang
2021-07-06  8:27 ` [PATCH V2 17/18] virtio-net: vhost control virtqueue support Jason Wang
2021-07-06  8:27 ` [PATCH V2 18/18] vhost-vdpa: multiqueue support Jason Wang
2021-07-12  5:44 ` [PATCH V2 00/18] vhost-vDPA multiqueue Jason Wang
2021-07-12 13:15   ` Michael S. Tsirkin
     [not found]     ` <CACGkMEs_sNOqdsDvpMR+Mx7TXY2wW8p_NVALvHLPgeAsiWNTGA@mail.gmail.com>
     [not found]       ` <20210713114825-mutt-send-email-mst@kernel.org>
2021-07-14  2:00         ` Jason Wang
2021-07-15  4:24   ` Jason Wang
