* [PATCH V2 00/21] vhost-vDPA multiqueue
@ 2021-09-03  9:10 Jason Wang
  2021-09-03  9:10 ` [PATCH V2 01/21] vhost-vdpa: remove unused variable "acked_features" Jason Wang
                   ` (20 more replies)
  0 siblings, 21 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Hi All:

This series implements multiqueue support for vhost-vDPA. The most
important requirement is control virtqueue support. The virtio-net and
vhost-net cores are tweaked to handle the control virtqueue the same way
the data queue pairs are handled: a dedicated vhost_net device coupled
with its own NetClientState is introduced, so most of the existing vhost
code can be reused with minor changes. This means the control virtqueue
bypasses QEMU. With the control virtqueue in place, vhost-vDPA is
extended to support creating and destroying multiqueue queue pairs plus
the control virtqueue.
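
Roughly, with N data queue pairs the layout looks like this (a sketch,
not lifted verbatim from the code):

    NetClientState #0 .. #N-1  ->  one vhost_net each (2 vqs, datapath)
    NetClientState #N          ->  one vhost_net      (1 vq, cvq)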

In the future, if we want to support live migration, we can either
implement a shadow cvq on top or introduce new interfaces for reporting
device states.

Tests were done via the vp_vdpa driver in an L1 guest.
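
As an illustration only (the vhostdev path below is an example, not an
exact record of the test command line):

  qemu-system-x86_64 ... \
      -netdev vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
      -device virtio-net-pci,netdev=vdpa0,mq=on,ctrl_vq=on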

Changes since V1:

- start and stop vhost devices only after all queues have been set up
- fix the case where the driver doesn't support MQ but the device does
- correctly set the batching capability to avoid a map/unmap storm
- various other tweaks

Please review.

Thanks

Jason Wang (21):
  vhost-vdpa: remove unused variable "acked_features"
  vhost-vdpa: correctly return err in vhost_vdpa_set_backend_cap()
  vhost_net: remove the meaningless assignment in vhost_net_start_one()
  vhost: use unsigned int for nvqs
  vhost_net: do not assume nvqs is always 2
  vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
  vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
  vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
  vhost-vdpa: tweak the error label in vhost_vdpa_add()
  vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
  vhost-vdpa: remove the unnecessary queue_index assignment
  vhost-vdpa: open device fd in net_init_vhost_vdpa()
  vhost-vdpa: classify one time request
  vhost-vdpa: prepare for the multiqueue support
  vhost-vdpa: let net_vhost_vdpa_init() return NetClientState *
  net: introduce control client
  vhost-net: control virtqueue support
  virtio-net: use "qps" instead of "queues" when possible
  vhost: record the last virtqueue index for the virtio device
  virtio-net: vhost control virtqueue support
  vhost-vdpa: multiqueue support

 hw/net/vhost_net.c             |  60 ++++++++----
 hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
 hw/virtio/vhost-vdpa.c         |  58 ++++++++++--
 include/hw/virtio/vhost-vdpa.h |   1 +
 include/hw/virtio/vhost.h      |   4 +-
 include/hw/virtio/virtio-net.h |   5 +-
 include/net/net.h              |   5 +
 include/net/vhost_net.h        |   7 +-
 net/net.c                      |  24 ++++-
 net/tap.c                      |   1 +
 net/vhost-user.c               |   1 +
 net/vhost-vdpa.c               | 157 ++++++++++++++++++++++++-------
 12 files changed, 342 insertions(+), 146 deletions(-)

-- 
2.25.1




* [PATCH V2 01/21] vhost-vdpa: remove unused variable "acked_features"
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 02/21] vhost-vdpa: correctly return err in vhost_vdpa_set_backend_cap() Jason Wang
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

"acked_features" is unused, let's remove that.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 19187dce8c..72829884d7 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -29,7 +29,6 @@ typedef struct VhostVDPAState {
     NetClientState nc;
     struct vhost_vdpa vhost_vdpa;
     VHostNetState *vhost_net;
-    uint64_t acked_features;
     bool started;
 } VhostVDPAState;
 
-- 
2.25.1




* [PATCH V2 02/21] vhost-vdpa: correctly return err in vhost_vdpa_set_backend_cap()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
  2021-09-03  9:10 ` [PATCH V2 01/21] vhost-vdpa: remove unused variable "acked_features" Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 03/21] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

We should return an error code instead of zero; otherwise there's no
way for the caller to detect the failure.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 4fa414feea..579f515e65 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -432,13 +432,13 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
     int r;
 
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
-        return 0;
+        return -EFAULT;
     }
 
     features &= f;
     r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
     if (r) {
-        return 0;
+        return -EFAULT;
     }
 
     dev->backend_cap = features;
-- 
2.25.1




* [PATCH V2 03/21] vhost_net: remove the meaningless assignment in vhost_net_start_one()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
  2021-09-03  9:10 ` [PATCH V2 01/21] vhost-vdpa: remove unused variable "acked_features" Jason Wang
  2021-09-03  9:10 ` [PATCH V2 02/21] vhost-vdpa: correctly return err in vhost_vdpa_set_backend_cap() Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 04/21] vhost: use unsigned int for nvqs Jason Wang
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

The nvqs and vqs fields have already been initialized during
vhost_net_init() and are not expected to change during the life cycle
of the vhost_net structure, so this patch removes the meaningless
assignment.

Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 10a7780a13..6ed0c39836 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -242,9 +242,6 @@ static int vhost_net_start_one(struct vhost_net *net,
     struct vhost_vring_file file = { };
     int r;
 
-    net->dev.nvqs = 2;
-    net->dev.vqs = net->vqs;
-
     r = vhost_dev_enable_notifiers(&net->dev, dev);
     if (r < 0) {
         goto fail_notifiers;
-- 
2.25.1




* [PATCH V2 04/21] vhost: use unsigned int for nvqs
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (2 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 03/21] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 05/21] vhost_net: do not assume nvqs is always 2 Jason Wang
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Switch to unsigned int for nvqs since it's not expected to be
negative.

Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/hw/virtio/vhost.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 045d0fd9f2..1222b21b94 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -71,7 +71,7 @@ struct vhost_dev {
     int n_tmp_sections;
     MemoryRegionSection *tmp_sections;
     struct vhost_virtqueue *vqs;
-    int nvqs;
+    unsigned int nvqs;
     /* the first virtqueue which would be used by this vhost dev */
     int vq_index;
     /* if non-zero, minimum required value for max_queues */
-- 
2.25.1




* [PATCH V2 05/21] vhost_net: do not assume nvqs is always 2
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (3 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 04/21] vhost: use unsigned int for nvqs Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 06/21] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: lulu, gdawar, eperezma, elic, lingshan.zhu, Stefano Garzarella

This patch switches to initializing dev.nvqs from VhostNetOptions
instead of assuming it is 2. This is useful for implementing control
virtqueue support, which will use a single vhost_net structure with a
single cvq.

Note that nvqs is still set to 2 for all users and this patch does not
change functionality.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c      | 2 +-
 include/net/vhost_net.h | 1 +
 net/tap.c               | 1 +
 net/vhost-user.c        | 1 +
 net/vhost-vdpa.c        | 1 +
 5 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 6ed0c39836..386ec2eaa2 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -165,9 +165,9 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
         goto fail;
     }
     net->nc = options->net_backend;
+    net->dev.nvqs = options->nvqs;
 
     net->dev.max_queues = 1;
-    net->dev.nvqs = 2;
     net->dev.vqs = net->vqs;
 
     if (backend_kernel) {
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 172b0051d8..fba40cf695 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -14,6 +14,7 @@ typedef struct VhostNetOptions {
     VhostBackendType backend_type;
     NetClientState *net_backend;
     uint32_t busyloop_timeout;
+    unsigned int nvqs;
     void *opaque;
 } VhostNetOptions;
 
diff --git a/net/tap.c b/net/tap.c
index f5686bbf77..f716be3e3f 100644
--- a/net/tap.c
+++ b/net/tap.c
@@ -749,6 +749,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
             qemu_set_nonblock(vhostfd);
         }
         options.opaque = (void *)(uintptr_t)vhostfd;
+        options.nvqs = 2;
 
         s->vhost_net = vhost_net_init(&options);
         if (!s->vhost_net) {
diff --git a/net/vhost-user.c b/net/vhost-user.c
index 6adfcd623a..4a939124d2 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -85,6 +85,7 @@ static int vhost_user_start(int queues, NetClientState *ncs[],
         options.net_backend = ncs[i];
         options.opaque      = be;
         options.busyloop_timeout = 0;
+        options.nvqs = 2;
         net = vhost_net_init(&options);
         if (!net) {
             error_report("failed to init vhost_net for queue %d", i);
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 72829884d7..395117debd 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -104,6 +104,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
     options.net_backend = ncs;
     options.opaque      = be;
     options.busyloop_timeout = 0;
+    options.nvqs = 2;
 
     net = vhost_net_init(&options);
     if (!net) {
-- 
2.25.1




* [PATCH V2 06/21] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (4 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 05/21] vhost_net: do not assume nvqs is always 2 Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 07/21] vhost-vdpa: don't cleanup twice " Jason Wang
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

The VhostVDPAState is allocated by qemu_new_net_client() via
g_malloc0() in net_vhost_vdpa_init(), so s->vhost_net is guaranteed to
be NULL. Remove this unnecessary check in vhost_vdpa_add().

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 395117debd..5c09cacd5a 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -111,10 +111,6 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
         error_report("failed to init vhost_net for queue");
         goto err;
     }
-    if (s->vhost_net) {
-        vhost_net_cleanup(s->vhost_net);
-        g_free(s->vhost_net);
-    }
     s->vhost_net = net;
     ret = vhost_vdpa_net_check_device_id(net);
     if (ret) {
-- 
2.25.1




* [PATCH V2 07/21] vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (5 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 06/21] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 08/21] vhost-vdpa: fix leaking of vhost_net " Jason Wang
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: lulu, gdawar, eperezma, elic, lingshan.zhu, Stefano Garzarella

The previous vhost_net_cleanup() is sufficient for freeing; calling
vhost_vdpa_del() in this case leads to an extra round of freeing. Note
that this kind of "double free" is safe since vhost_dev_cleanup()
zeroes the whole structure.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 5c09cacd5a..3213e69d63 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -81,16 +81,6 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
     return ret;
 }
 
-static void vhost_vdpa_del(NetClientState *ncs)
-{
-    VhostVDPAState *s;
-    assert(ncs->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
-    s = DO_UPCAST(VhostVDPAState, nc, ncs);
-    if (s->vhost_net) {
-        vhost_net_cleanup(s->vhost_net);
-    }
-}
-
 static int vhost_vdpa_add(NetClientState *ncs, void *be)
 {
     VhostNetOptions options;
@@ -121,7 +111,6 @@ err:
     if (net) {
         vhost_net_cleanup(net);
     }
-    vhost_vdpa_del(ncs);
     return -1;
 }
 
-- 
2.25.1




* [PATCH V2 08/21] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (6 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 07/21] vhost-vdpa: don't cleanup twice " Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 09/21] vhost-vdpa: tweak the error label " Jason Wang
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: lulu, gdawar, eperezma, elic, lingshan.zhu, Stefano Garzarella

Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3213e69d63..b43df00a85 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -110,6 +110,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
 err:
     if (net) {
         vhost_net_cleanup(net);
+        g_free(net);
     }
     return -1;
 }
-- 
2.25.1




* [PATCH V2 09/21] vhost-vdpa: tweak the error label in vhost_vdpa_add()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (7 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 08/21] vhost-vdpa: fix leaking of vhost_net " Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 10/21] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Introduce a new error label to avoid the unnecessary check of the net
pointer.

Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index b43df00a85..99327d17b4 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -99,19 +99,18 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
     net = vhost_net_init(&options);
     if (!net) {
         error_report("failed to init vhost_net for queue");
-        goto err;
+        goto err_init;
     }
     s->vhost_net = net;
     ret = vhost_vdpa_net_check_device_id(net);
     if (ret) {
-        goto err;
+        goto err_check;
     }
     return 0;
-err:
-    if (net) {
-        vhost_net_cleanup(net);
-        g_free(net);
-    }
+err_check:
+    vhost_net_cleanup(net);
+    g_free(net);
+err_init:
     return -1;
 }
 
-- 
2.25.1




* [PATCH V2 10/21] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (8 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 09/21] vhost-vdpa: tweak the error label " Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 11/21] vhost-vdpa: remove the unnecessary queue_index assignment Jason Wang
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: lulu, gdawar, eperezma, elic, lingshan.zhu, Stefano Garzarella

vhost_vdpa_add() can fail for various reasons, so asserting its
success is wrong. Instead, we should free the NetClientState and
propagate the error to the caller.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 99327d17b4..d02cad9855 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -173,7 +173,10 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
     }
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
-    assert(s->vhost_net);
+    if (ret) {
+        qemu_close(vdpa_device_fd);
+        qemu_del_net_client(nc);
+    }
     return ret;
 }
 
-- 
2.25.1




* [PATCH V2 11/21] vhost-vdpa: remove the unnecessary queue_index assignment
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (9 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 10/21] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 12/21] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: lulu, gdawar, eperezma, elic, lingshan.zhu, Stefano Garzarella

The queue_index of the NetClientState will be assigned in set_netdev()
afterwards, so setting it in net_vhost_vdpa_init() is meaningless. This
patch removes the assignment.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index d02cad9855..912686457c 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -165,7 +165,6 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
     assert(name);
     nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
-    nc->queue_index = 0;
     s = DO_UPCAST(VhostVDPAState, nc, nc);
     vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
     if (vdpa_device_fd == -1) {
-- 
2.25.1




* [PATCH V2 12/21] vhost-vdpa: open device fd in net_init_vhost_vdpa()
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (10 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 11/21] vhost-vdpa: remove the unnecessary queue_index assignment Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-04 20:41   ` Michael S. Tsirkin
  2021-09-03  9:10 ` [PATCH V2 13/21] vhost-vdpa: classify one time request Jason Wang
                   ` (8 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel
  Cc: lulu, gdawar, eperezma, elic, lingshan.zhu, Stefano Garzarella

This patch switches to opening the device fd in net_init_vhost_vdpa().
This prepares for the multiqueue support, where a single device fd will
be shared by all the queue pairs of a vdpa device.

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 912686457c..73d29a74ef 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -156,24 +156,19 @@ static NetClientInfo net_vhost_vdpa_info = {
 };
 
 static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
-                               const char *name, const char *vhostdev)
+                               const char *name, int vdpa_device_fd)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
-    int vdpa_device_fd = -1;
     int ret = 0;
     assert(name);
     nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
-    vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
-    if (vdpa_device_fd == -1) {
-        return -errno;
-    }
+
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
     if (ret) {
-        qemu_close(vdpa_device_fd);
         qemu_del_net_client(nc);
     }
     return ret;
@@ -201,6 +196,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
+    int vdpa_device_fd, ret;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -209,5 +205,16 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                           (char *)name, errp)) {
         return -1;
     }
-    return net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, opts->vhostdev);
+
+    vdpa_device_fd = qemu_open_old(opts->vhostdev, O_RDWR);
+    if (vdpa_device_fd == -1) {
+        return -errno;
+    }
+
+    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+    if (ret) {
+        qemu_close(vdpa_device_fd);
+    }
+
+    return ret;
 }
-- 
2.25.1




* [PATCH V2 13/21] vhost-vdpa: classify one time request
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (11 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 12/21] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 14/21] vhost-vdpa: prepare for the multiqueue support Jason Wang
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Vhost-vdpa uses a single device for all the multiqueue queue pairs. So
we need to classify the one-time requests (e.g. SET_OWNER) and make
sure those requests are only issued once per device.

This is used for multiqueue support.
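
Concretely, requests such as SET_OWNER, SET_MEM_TABLE, SET_FEATURES and
SET_LOG_BASE become per-device rather than per-queue-pair: they are
only issued from the vhost_dev that backs queue pair 0 and are skipped
everywhere else, following the guard below (a sketch of the pattern
used throughout the patch):

    if (vhost_vdpa_one_time_request(dev)) {
        return 0;
    }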

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c         | 52 ++++++++++++++++++++++++++++++----
 include/hw/virtio/vhost-vdpa.h |  1 +
 2 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 579f515e65..42c66de791 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -267,6 +267,13 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
     vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
 }
 
+static bool vhost_vdpa_one_time_request(struct vhost_dev *dev)
+{
+    struct vhost_vdpa *v = dev->opaque;
+
+    return v->index != 0;
+}
+
 static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
 {
     struct vhost_vdpa *v;
@@ -279,6 +286,10 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
     v->listener = vhost_vdpa_memory_listener;
     v->msg_type = VHOST_IOTLB_MSG_V2;
 
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                VIRTIO_CONFIG_S_DRIVER);
 
@@ -389,6 +400,10 @@ static int vhost_vdpa_memslots_limit(struct vhost_dev *dev)
 static int vhost_vdpa_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_mem_table(dev, mem->nregions, mem->padding);
     if (trace_event_get_state_backends(TRACE_VHOST_VDPA_SET_MEM_TABLE) &&
         trace_event_get_state_backends(TRACE_VHOST_VDPA_DUMP_REGIONS)) {
@@ -412,6 +427,11 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
                                    uint64_t features)
 {
     int ret;
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_features(dev, features);
     ret = vhost_vdpa_call(dev, VHOST_SET_FEATURES, &features);
     uint8_t status = 0;
@@ -436,9 +456,12 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
     }
 
     features &= f;
-    r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
-    if (r) {
-        return -EFAULT;
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
+        if (r) {
+            return -EFAULT;
+        }
     }
 
     dev->backend_cap = features;
@@ -547,11 +570,21 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
 {
     struct vhost_vdpa *v = dev->opaque;
     trace_vhost_vdpa_dev_start(dev, started);
+
     if (started) {
-        uint8_t status = 0;
-        memory_listener_register(&v->listener, &address_space_memory);
         vhost_vdpa_host_notifiers_init(dev);
         vhost_vdpa_set_vring_ready(dev);
+    } else {
+        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
+    }
+
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
+    if (started) {
+        uint8_t status = 0;
+        memory_listener_register(&v->listener, &address_space_memory);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
         vhost_vdpa_call(dev, VHOST_VDPA_GET_STATUS, &status);
 
@@ -560,7 +593,6 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         vhost_vdpa_reset_device(dev);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                    VIRTIO_CONFIG_S_DRIVER);
-        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
         memory_listener_unregister(&v->listener);
 
         return 0;
@@ -570,6 +602,10 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
 static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
                                      struct vhost_log *log)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_log_base(dev, base, log->size, log->refcnt, log->fd,
                                   log->log);
     return vhost_vdpa_call(dev, VHOST_SET_LOG_BASE, &base);
@@ -635,6 +671,10 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
 
 static int vhost_vdpa_set_owner(struct vhost_dev *dev)
 {
+    if (vhost_vdpa_one_time_request(dev)) {
+        return 0;
+    }
+
     trace_vhost_vdpa_set_owner(dev);
     return vhost_vdpa_call(dev, VHOST_SET_OWNER, NULL);
 }
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 9188226d8b..e98e327f12 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -21,6 +21,7 @@ typedef struct VhostVDPAHostNotifier {
 
 typedef struct vhost_vdpa {
     int device_fd;
+    int index;
     uint32_t msg_type;
     MemoryListener listener;
     struct vhost_dev *dev;
-- 
2.25.1




* [PATCH V2 14/21] vhost-vdpa: prepare for the multiqueue support
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (12 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 13/21] vhost-vdpa: classify one time request Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 15/21] vhost-vdpa: let net_vhost_vdpa_init() return NetClientState * Jason Wang
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Unlike vhost-kernel, vhost-vdpa adopts a single device multiqueue
model. So we simply need to use the virtio virtqueue index as the vhost
virtqueue index. This is a must for multiqueue to work with vhost-vdpa.
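
For example, with 2 data queue pairs plus a cvq (5 virtqueues in
total), the guest-visible virtqueue index and the vhost-vDPA virtqueue
index are now identical:

    virtio vq index:      0  1  2  3  4
    vhost-vdpa vq index:  0  1  2  3  4

while a vhost-kernel backend keeps using the relative index (0 or 1)
inside each per-queue-pair vhost device.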

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 42c66de791..94eb9d4069 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -492,8 +492,8 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
 {
     assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
 
-    trace_vhost_vdpa_get_vq_index(dev, idx, idx - dev->vq_index);
-    return idx - dev->vq_index;
+    trace_vhost_vdpa_get_vq_index(dev, idx, idx);
+    return idx;
 }
 
 static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
-- 
2.25.1




* [PATCH V2 15/21] vhost-vdpa: let net_vhost_vdpa_init() return NetClientState *
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (13 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 14/21] vhost-vdpa: prepare for the multiqueue support Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 16/21] net: introduce control client Jason Wang
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch switches net_vhost_vdpa_init() to returning a
NetClientState *. This allows the callers to allocate multiple
NetClientStates for multiqueue support.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 73d29a74ef..834dab28dd 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -155,8 +155,10 @@ static NetClientInfo net_vhost_vdpa_info = {
         .has_ufo = vhost_vdpa_has_ufo,
 };
 
-static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
-                               const char *name, int vdpa_device_fd)
+static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
+                                           const char *device,
+                                           const char *name,
+                                           int vdpa_device_fd)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
@@ -170,8 +172,9 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
     if (ret) {
         qemu_del_net_client(nc);
+        return NULL;
     }
-    return ret;
+    return nc;
 }
 
 static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
@@ -196,7 +199,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
-    int vdpa_device_fd, ret;
+    int vdpa_device_fd;
+    NetClientState *nc;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -211,10 +215,11 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -errno;
     }
 
-    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
-    if (ret) {
+    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+    if (!nc) {
         qemu_close(vdpa_device_fd);
+        return -1;
     }
 
-    return ret;
+    return 0;
 }
-- 
2.25.1




* [PATCH V2 16/21] net: introduce control client
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (14 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 15/21] vhost-vdpa: let net_vhost_vdpa_init() return NetClientState * Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 17/21] vhost-net: control virtqueue support Jason Wang
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch introduces a boolean that marks whether a NetClientState is
a datapath client or a control client, i.e. one that accepts control
commands via its network queue.

The first user would be the control virtqueue support for vhost.
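
For instance, a backend that wants a NetClientState dedicated to the
cvq could do something along these lines (an illustrative sketch only;
the real user is wired up later in the series):

    /* nc->is_datapath will be false for such a client */
    nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
                                     device, name);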

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/net/net.h |  5 +++++
 net/net.c         | 24 +++++++++++++++++++++---
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/include/net/net.h b/include/net/net.h
index 5d1508081f..4f400b8a09 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -103,6 +103,7 @@ struct NetClientState {
     int vnet_hdr_len;
     bool is_netdev;
     bool do_not_pad; /* do not pad to the minimum ethernet frame length */
+    bool is_datapath;
     QTAILQ_HEAD(, NetFilterState) filters;
 };
 
@@ -134,6 +135,10 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
                                     NetClientState *peer,
                                     const char *model,
                                     const char *name);
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+                                        NetClientState *peer,
+                                        const char *model,
+                                        const char *name);
 NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
diff --git a/net/net.c b/net/net.c
index 52c99196c6..f0d14dbfc1 100644
--- a/net/net.c
+++ b/net/net.c
@@ -239,7 +239,8 @@ static void qemu_net_client_setup(NetClientState *nc,
                                   NetClientState *peer,
                                   const char *model,
                                   const char *name,
-                                  NetClientDestructor *destructor)
+                                  NetClientDestructor *destructor,
+                                  bool is_datapath)
 {
     nc->info = info;
     nc->model = g_strdup(model);
@@ -258,6 +259,7 @@ static void qemu_net_client_setup(NetClientState *nc,
 
     nc->incoming_queue = qemu_new_net_queue(qemu_deliver_packet_iov, nc);
     nc->destructor = destructor;
+    nc->is_datapath = is_datapath;
     QTAILQ_INIT(&nc->filters);
 }
 
@@ -272,7 +274,23 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
 
     nc = g_malloc0(info->size);
     qemu_net_client_setup(nc, info, peer, model, name,
-                          qemu_net_client_destructor);
+                          qemu_net_client_destructor, true);
+
+    return nc;
+}
+
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+                                            NetClientState *peer,
+                                            const char *model,
+                                            const char *name)
+{
+    NetClientState *nc;
+
+    assert(info->size >= sizeof(NetClientState));
+
+    nc = g_malloc0(info->size);
+    qemu_net_client_setup(nc, info, peer, model, name,
+                          qemu_net_client_destructor, false);
 
     return nc;
 }
@@ -297,7 +315,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
 
     for (i = 0; i < queues; i++) {
         qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
-                              NULL);
+                              NULL, true);
         nic->ncs[i].queue_index = i;
     }
 
-- 
2.25.1




* [PATCH V2 17/21] vhost-net: control virtqueue support
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (15 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 16/21] net: introduce control client Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-04 20:40   ` Michael S. Tsirkin
  2021-09-03  9:10 ` [PATCH V2 18/21] virtio-net: use "qps" instead of "queues" when possible Jason Wang
                   ` (3 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

In the past we assumed there was no cvq, but this is not true when we
need control virtqueue support for vhost-user backends. So this patch
implements control virtqueue support for vhost-net. As with the
datapath, the control virtqueue is also required to be coupled with a
NetClientState. vhost_net_start/stop() are tweaked to accept the number
of datapath queue pairs plus the number of control virtqueues, so that
we can start and stop the vhost devices accordingly.
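
For example, with data_qps = 4 and cvq = 1 we get:

    nvhosts         = data_qps + cvq     = 5  (vhost devices to start/stop)
    total_notifiers = data_qps * 2 + cvq = 9  (guest notifiers)

i.e. the cvq adds one extra vhost device and one extra guest notifier
on top of the 2 * data_qps data virtqueues.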

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c      | 43 ++++++++++++++++++++++++++++++-----------
 hw/net/virtio-net.c     |  4 ++--
 include/net/vhost_net.h |  6 ++++--
 3 files changed, 38 insertions(+), 15 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 386ec2eaa2..7e0b60b4d9 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -315,11 +315,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
 }
 
 int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
-                    int total_queues)
+                    int data_qps, int cvq)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
     VirtioBusState *vbus = VIRTIO_BUS(qbus);
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+    int total_notifiers = data_qps * 2 + cvq;
+    VirtIONet *n = VIRTIO_NET(dev);
+    int nvhosts = data_qps + cvq;
     struct vhost_net *net;
     int r, e, i;
     NetClientState *peer;
@@ -329,9 +332,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         return -ENOSYS;
     }
 
-    for (i = 0; i < total_queues; i++) {
+    for (i = 0; i < nvhosts; i++) {
+
+        if (i < data_qps) {
+            peer = qemu_get_peer(ncs, i);
+        } else { /* Control Virtqueue */
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
 
-        peer = qemu_get_peer(ncs, i);
         net = get_vhost_net(peer);
         vhost_net_set_vq_index(net, i * 2);
 
@@ -344,14 +352,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         }
      }
 
-    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
+    r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
     if (r < 0) {
         error_report("Error binding guest notifier: %d", -r);
         goto err;
     }
 
-    for (i = 0; i < total_queues; i++) {
-        peer = qemu_get_peer(ncs, i);
+    for (i = 0; i < nvhosts; i++) {
+        if (i < data_qps) {
+            peer = qemu_get_peer(ncs, i);
+        } else {
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
         r = vhost_net_start_one(get_vhost_net(peer), dev);
 
         if (r < 0) {
@@ -375,7 +387,7 @@ err_start:
         peer = qemu_get_peer(ncs , i);
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
-    e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+    e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
     if (e < 0) {
         fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
         fflush(stderr);
@@ -385,18 +397,27 @@ err:
 }
 
 void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
-                    int total_queues)
+                    int data_qps, int cvq)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
     VirtioBusState *vbus = VIRTIO_BUS(qbus);
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+    VirtIONet *n = VIRTIO_NET(dev);
+    NetClientState *peer;
+    int total_notifiers = data_qps * 2 + cvq;
+    int nvhosts = data_qps + cvq;
     int i, r;
 
-    for (i = 0; i < total_queues; i++) {
-        vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
+    for (i = 0; i < nvhosts; i++) {
+        if (i < data_qps) {
+            peer = qemu_get_peer(ncs, i);
+        } else {
+            peer = qemu_get_peer(ncs, n->max_queues);
+        }
+        vhost_net_stop_one(get_vhost_net(peer), dev);
     }
 
-    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+    r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
     if (r < 0) {
         fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
         fflush(stderr);
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 16d20cdee5..8fccbaa44c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, queues);
+        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queues);
+        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
         n->vhost_started = 0;
     }
 }
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index fba40cf695..e656e38af9 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
 uint64_t vhost_net_get_max_queues(VHostNetState *net);
 struct vhost_net *vhost_net_init(VhostNetOptions *options);
 
-int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
-void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
+int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
+                    int data_qps, int cvq);
+void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
+                    int data_qps, int cvq);
 
 void vhost_net_cleanup(VHostNetState *net);
 
-- 
2.25.1




* [PATCH V2 18/21] virtio-net: use "qps" instead of "queues" when possible
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (16 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 17/21] vhost-net: control virtqueue support Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-04 20:42   ` Michael S. Tsirkin
  2021-09-03  9:10 ` [PATCH V2 19/21] vhost: record the last virtqueue index for the virtio device Jason Wang
                   ` (2 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

Most of the time, "queues" really means queue pairs. So this patch
switch to use "qps" to avoid confusion.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c             |   6 +-
 hw/net/virtio-net.c            | 150 ++++++++++++++++-----------------
 include/hw/virtio/virtio-net.h |   4 +-
 3 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 7e0b60b4d9..b40fdfa625 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -337,7 +337,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_qps) {
             peer = qemu_get_peer(ncs, i);
         } else { /* Control Virtqueue */
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_qps);
         }
 
         net = get_vhost_net(peer);
@@ -362,7 +362,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_qps) {
             peer = qemu_get_peer(ncs, i);
         } else {
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_qps);
         }
         r = vhost_net_start_one(get_vhost_net(peer), dev);
 
@@ -412,7 +412,7 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
         if (i < data_qps) {
             peer = qemu_get_peer(ncs, i);
         } else {
-            peer = qemu_get_peer(ncs, n->max_queues);
+            peer = qemu_get_peer(ncs, n->max_qps);
         }
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 8fccbaa44c..0a5d9862ec 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -54,7 +54,7 @@
 #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
 #define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
 
-/* for now, only allow larger queues; with virtio-1, guest can downsize */
+/* for now, only allow larger qps; with virtio-1, guest can downsize */
 #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
 #define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
 
@@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
     int ret = 0;
     memset(&netcfg, 0 , sizeof(struct virtio_net_config));
     virtio_stw_p(vdev, &netcfg.status, n->status);
-    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
+    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_qps);
     virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
     memcpy(netcfg.mac, n->mac, ETH_ALEN);
     virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
@@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
-    int queues = n->multiqueue ? n->max_queues : 1;
+    int qps = n->multiqueue ? n->max_qps : 1;
 
     if (!get_vhost_net(nc->peer)) {
         return;
@@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         /* Any packets outstanding? Purge them to avoid touching rings
          * when vhost is running.
          */
-        for (i = 0;  i < queues; i++) {
+        for (i = 0;  i < qps; i++) {
             NetClientState *qnc = qemu_get_subqueue(n->nic, i);
 
             /* Purge both directions: TX and RX. */
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
+        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
+        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
         n->vhost_started = 0;
     }
 }
@@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
 }
 
 static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
-                                       int queues, bool enable)
+                                       int qps, bool enable)
 {
     int i;
 
-    for (i = 0; i < queues; i++) {
+    for (i = 0; i < qps; i++) {
         if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
             enable) {
             while (--i >= 0) {
@@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
 static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
-    int queues = n->multiqueue ? n->max_queues : 1;
+    int qps = n->multiqueue ? n->max_qps : 1;
 
     if (virtio_net_started(n, status)) {
         /* Before using the device, we tell the network backend about the
@@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
          * virtio-net code.
          */
         n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
-                                                            queues, true);
+                                                            qps, true);
     } else if (virtio_net_started(n, vdev->status)) {
         /* After using the device, we need to reset the network backend to
          * the default (guest native endianness), otherwise the guest may
          * lose network connectivity if it is rebooted into a different
          * endianness.
          */
-        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
+        virtio_net_set_vnet_endian(vdev, n->nic->ncs, qps, false);
     }
 }
 
@@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
     virtio_net_vnet_endian_status(n, status);
     virtio_net_vhost_status(n, status);
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         NetClientState *ncs = qemu_get_subqueue(n->nic, i);
         bool queue_started;
         q = &n->vqs[i];
 
-        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
+        if ((!n->multiqueue && i != 0) || i >= n->curr_qps) {
             queue_status = 0;
         } else {
             queue_status = status;
@@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
     n->nouni = 0;
     n->nobcast = 0;
     /* multiqueue is disabled by default */
-    n->curr_queues = 1;
+    n->curr_qps = 1;
     timer_del(n->announce_timer.tm);
     n->announce_timer.round = 0;
     n->status &= ~VIRTIO_NET_S_ANNOUNCE;
@@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
     memset(n->vlans, 0, MAX_VLAN >> 3);
 
     /* Flush any async TX */
-    for (i = 0;  i < n->max_queues; i++) {
+    for (i = 0;  i < n->max_qps; i++) {
         NetClientState *nc = qemu_get_subqueue(n->nic, i);
 
         if (nc->peer) {
@@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
             sizeof(struct virtio_net_hdr);
     }
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         nc = qemu_get_subqueue(n->nic, i);
 
         if (peer_has_vnet_hdr(n) &&
@@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
         return 0;
     }
 
-    if (n->max_queues == 1) {
+    if (n->max_qps == 1) {
         return 0;
     }
 
@@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
     return tap_disable(nc->peer);
 }
 
-static void virtio_net_set_queues(VirtIONet *n)
+static void virtio_net_set_qps(VirtIONet *n)
 {
     int i;
     int r;
@@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
         return;
     }
 
-    for (i = 0; i < n->max_queues; i++) {
-        if (i < n->curr_queues) {
+    for (i = 0; i < n->max_qps; i++) {
+        if (i < n->curr_qps) {
             r = peer_attach(n, i);
             assert(!r);
         } else {
@@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
         virtio_net_apply_guest_offloads(n);
     }
 
-    for (i = 0;  i < n->max_queues; i++) {
+    for (i = 0;  i < n->max_qps; i++) {
         NetClientState *nc = qemu_get_subqueue(n->nic, i);
 
         if (!get_vhost_net(nc->peer)) {
@@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     struct virtio_net_rss_config cfg;
     size_t s, offset = 0, size_get;
-    uint16_t queues, i;
+    uint16_t qps, i;
     struct {
         uint16_t us;
         uint8_t b;
@@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     }
     n->rss_data.default_queue = do_rss ?
         virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
-    if (n->rss_data.default_queue >= n->max_queues) {
+    if (n->rss_data.default_queue >= n->max_qps) {
         err_msg = "Invalid default queue";
         err_value = n->rss_data.default_queue;
         goto error;
@@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     size_get = sizeof(temp);
     s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
     if (s != size_get) {
-        err_msg = "Can't get queues";
+        err_msg = "Can't get qps";
         err_value = (uint32_t)s;
         goto error;
     }
-    queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
-    if (queues == 0 || queues > n->max_queues) {
-        err_msg = "Invalid number of queues";
-        err_value = queues;
+    qps = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_qps;
+    if (qps == 0 || qps > n->max_qps) {
+        err_msg = "Invalid number of qps";
+        err_value = qps;
         goto error;
     }
     if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
@@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     }
     if (!temp.b && !n->rss_data.hash_types) {
         virtio_net_disable_rss(n);
-        return queues;
+        return qps;
     }
     offset += size_get;
     size_get = temp.b;
@@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
     trace_virtio_net_rss_enable(n->rss_data.hash_types,
                                 n->rss_data.indirections_len,
                                 temp.b);
-    return queues;
+    return qps;
 error:
     trace_virtio_net_rss_error(err_msg, err_value);
     virtio_net_disable_rss(n);
@@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
                                 struct iovec *iov, unsigned int iov_cnt)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
-    uint16_t queues;
+    uint16_t qps;
 
     virtio_net_disable_rss(n);
     if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
-        queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
-        return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
+        qps = virtio_net_handle_rss(n, iov, iov_cnt, false);
+        return qps ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
     }
     if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
-        queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
+        qps = virtio_net_handle_rss(n, iov, iov_cnt, true);
     } else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
         struct virtio_net_ctrl_mq mq;
         size_t s;
@@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         if (s != sizeof(mq)) {
             return VIRTIO_NET_ERR;
         }
-        queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
+        qps = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
 
     } else {
         return VIRTIO_NET_ERR;
     }
 
-    if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
-        queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
-        queues > n->max_queues ||
+    if (qps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
+        qps > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
+        qps > n->max_qps ||
         !n->multiqueue) {
         return VIRTIO_NET_ERR;
     }
 
-    n->curr_queues = queues;
-    /* stop the backend before changing the number of queues to avoid handling a
+    n->curr_qps = qps;
+    /* stop the backend before changing the number of qps to avoid handling a
      * disabled queue */
     virtio_net_set_status(vdev, vdev->status);
-    virtio_net_set_queues(n);
+    virtio_net_set_qps(n);
 
     return VIRTIO_NET_OK;
 }
@@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
         return false;
     }
 
-    if (nc->queue_index >= n->curr_queues) {
+    if (nc->queue_index >= n->curr_qps) {
         return false;
     }
 
@@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
     virtio_del_queue(vdev, index * 2 + 1);
 }
 
-static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
+static void virtio_net_change_num_qps(VirtIONet *n, int new_max_qps)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     int old_num_queues = virtio_get_num_queues(vdev);
-    int new_num_queues = new_max_queues * 2 + 1;
+    int new_num_queues = new_max_qps * 2 + 1;
     int i;
 
     assert(old_num_queues >= 3);
@@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
 
 static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
 {
-    int max = multiqueue ? n->max_queues : 1;
+    int max = multiqueue ? n->max_qps : 1;
 
     n->multiqueue = multiqueue;
-    virtio_net_change_num_queues(n, max);
+    virtio_net_change_num_qps(n, max);
 
-    virtio_net_set_queues(n);
+    virtio_net_set_qps(n);
 }
 
 static int virtio_net_post_load_device(void *opaque, int version_id)
@@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
      */
     n->saved_guest_offloads = n->curr_guest_offloads;
 
-    virtio_net_set_queues(n);
+    virtio_net_set_qps(n);
 
     /* Find the first multicast entry in the saved MAC filter */
     for (i = 0; i < n->mac_table.in_use; i++) {
@@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
     /* nc.link_down can't be migrated, so infer link_down according
      * to link status bit in n->status */
     link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         qemu_get_subqueue(n->nic, i)->link_down = link_down;
     }
 
@@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
    },
 };
 
-static bool max_queues_gt_1(void *opaque, int version_id)
+static bool max_qps_gt_1(void *opaque, int version_id)
 {
-    return VIRTIO_NET(opaque)->max_queues > 1;
+    return VIRTIO_NET(opaque)->max_qps > 1;
 }
 
 static bool has_ctrl_guest_offloads(void *opaque, int version_id)
@@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
 struct VirtIONetMigTmp {
     VirtIONet      *parent;
     VirtIONetQueue *vqs_1;
-    uint16_t        curr_queues_1;
+    uint16_t        curr_qps_1;
     uint8_t         has_ufo;
     uint32_t        has_vnet_hdr;
 };
 
 /* The 2nd and subsequent tx_waiting flags are loaded later than
- * the 1st entry in the queues and only if there's more than one
+ * the 1st entry in the qps and only if there's more than one
  * entry.  We use the tmp mechanism to calculate a temporary
  * pointer and count and also validate the count.
  */
@@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
     struct VirtIONetMigTmp *tmp = opaque;
 
     tmp->vqs_1 = tmp->parent->vqs + 1;
-    tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
-    if (tmp->parent->curr_queues == 0) {
-        tmp->curr_queues_1 = 0;
+    tmp->curr_qps_1 = tmp->parent->curr_qps - 1;
+    if (tmp->parent->curr_qps == 0) {
+        tmp->curr_qps_1 = 0;
     }
 
     return 0;
@@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
     /* Reuse the pointer setup from save */
     virtio_net_tx_waiting_pre_save(opaque);
 
-    if (tmp->parent->curr_queues > tmp->parent->max_queues) {
-        error_report("virtio-net: curr_queues %x > max_queues %x",
-            tmp->parent->curr_queues, tmp->parent->max_queues);
+    if (tmp->parent->curr_qps > tmp->parent->max_qps) {
+        error_report("virtio-net: curr_qps %x > max_qps %x",
+            tmp->parent->curr_qps, tmp->parent->max_qps);
 
         return -EINVAL;
     }
@@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
     .pre_save  = virtio_net_tx_waiting_pre_save,
     .fields    = (VMStateField[]) {
         VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
-                                     curr_queues_1,
+                                     curr_qps_1,
                                      vmstate_virtio_net_queue_tx_waiting,
                                      struct VirtIONetQueue),
         VMSTATE_END_OF_LIST()
@@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
         VMSTATE_UINT8(nobcast, VirtIONet),
         VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
                          vmstate_virtio_net_has_ufo),
-        VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
+        VMSTATE_SINGLE_TEST(max_qps, VirtIONet, max_qps_gt_1, 0,
                             vmstate_info_uint16_equal, uint16_t),
-        VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
+        VMSTATE_UINT16_TEST(curr_qps, VirtIONet, max_qps_gt_1),
         VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
                          vmstate_virtio_net_tx_waiting),
         VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
@@ -3368,16 +3368,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    n->max_queues = MAX(n->nic_conf.peers.queues, 1);
-    if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
-        error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
+    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
+    if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
+        error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
                    "must be a positive integer less than %d.",
-                   n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
+                   n->max_qps, (VIRTIO_QUEUE_MAX - 1) / 2);
         virtio_cleanup(vdev);
         return;
     }
-    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
-    n->curr_queues = 1;
+    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_qps);
+    n->curr_qps = 1;
     n->tx_timeout = n->net_conf.txtimer;
 
     if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
@@ -3391,7 +3391,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
                                     n->net_conf.tx_queue_size);
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         virtio_net_add_queue(n, i);
     }
 
@@ -3415,13 +3415,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
                               object_get_typename(OBJECT(dev)), dev->id, n);
     }
 
-    for (i = 0; i < n->max_queues; i++) {
+    for (i = 0; i < n->max_qps; i++) {
         n->nic->ncs[i].do_not_pad = true;
     }
 
     peer_test_vnet_hdr(n);
     if (peer_has_vnet_hdr(n)) {
-        for (i = 0; i < n->max_queues; i++) {
+        for (i = 0; i < n->max_qps; i++) {
             qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
         }
         n->host_hdr_len = sizeof(struct virtio_net_hdr);
@@ -3463,7 +3463,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VirtIONet *n = VIRTIO_NET(dev);
-    int i, max_queues;
+    int i, max_qps;
 
     if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
         virtio_net_unload_ebpf(n);
@@ -3485,12 +3485,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
         remove_migration_state_change_notifier(&n->migration_state);
     }
 
-    max_queues = n->multiqueue ? n->max_queues : 1;
-    for (i = 0; i < max_queues; i++) {
+    max_qps = n->multiqueue ? n->max_qps : 1;
+    for (i = 0; i < max_qps; i++) {
         virtio_net_del_queue(n, i);
     }
     /* delete also control vq */
-    virtio_del_queue(vdev, max_queues * 2);
+    virtio_del_queue(vdev, max_qps * 2);
     qemu_announce_timer_del(&n->announce_timer, false);
     g_free(n->vqs);
     qemu_del_nic(n->nic);
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 824a69c23f..a9b6dc252e 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -194,8 +194,8 @@ struct VirtIONet {
     NICConf nic_conf;
     DeviceState *qdev;
     int multiqueue;
-    uint16_t max_queues;
-    uint16_t curr_queues;
+    uint16_t max_qps;
+    uint16_t curr_qps;
     size_t config_size;
     char *netclient_name;
     char *netclient_type;
-- 
2.25.1




* [PATCH V2 19/21] vhost: record the last virtqueue index for the virtio device
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (17 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 18/21] virito-net: use "qps" instead of "queues" when possible Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 20/21] virtio-net: vhost control virtqueue support Jason Wang
  2021-09-03  9:10 ` [PATCH V2 21/21] vhost-vdpa: multiqueue support Jason Wang
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch introduces a new field in the vhost_dev structure to record
the last virtqueue index of the virtio device. This is useful for vhost
backends with a 1:N model (one virtio device backed by several
vhost_dev structures), which should start or stop the device only after
all of its vhost_dev structures have been started or stopped.
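
For a virtio-net device with data_qps data queue pairs this boils down
to the layout sketched below (illustration only; the helper name is not
part of the patch):

static int virtio_net_last_vq_index(int data_qps, bool has_cvq)
{
    /* Virtqueues are laid out as rx0/tx0, rx1/tx1, ...; the cvq, when
     * present, comes right after the data queue pairs, so the last
     * index is either the cvq (2 * data_qps) or the last tx queue
     * (2 * data_qps - 1). */
    return has_cvq ? data_qps * 2 : data_qps * 2 - 1;
}

A 1:N backend can then compare its own vq_index/nvqs range against
last_index to tell whether it is handling the device's final vhost_dev;
the vhost-vDPA patch later in this series relies on that.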

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c        | 12 +++++++++---
 include/hw/virtio/vhost.h |  2 ++
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index b40fdfa625..a552be7380 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -231,9 +231,11 @@ fail:
     return NULL;
 }
 
-static void vhost_net_set_vq_index(struct vhost_net *net, int vq_index)
+static void vhost_net_set_vq_index(struct vhost_net *net, int vq_index,
+                                   int last_index)
 {
     net->dev.vq_index = vq_index;
+    net->dev.last_index = last_index;
 }
 
 static int vhost_net_start_one(struct vhost_net *net,
@@ -324,9 +326,13 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
     VirtIONet *n = VIRTIO_NET(dev);
     int nvhosts = data_qps + cvq;
     struct vhost_net *net;
-    int r, e, i;
+    int r, e, i, last_index = data_qps * 2;
     NetClientState *peer;
 
+    if (!cvq) {
+        last_index -= 1;
+    }
+
     if (!k->set_guest_notifiers) {
         error_report("binding does not support guest notifiers");
         return -ENOSYS;
@@ -341,7 +347,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
         }
 
         net = get_vhost_net(peer);
-        vhost_net_set_vq_index(net, i * 2);
+        vhost_net_set_vq_index(net, i * 2, last_index);
 
         /* Suppress the masking guest notifiers on vhost user
          * because vhost user doesn't interrupt masking/unmasking
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 1222b21b94..6684bde33d 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -74,6 +74,8 @@ struct vhost_dev {
     unsigned int nvqs;
     /* the first virtqueue which would be used by this vhost dev */
     int vq_index;
+    /* the last vq index for the virtio device (not vhost) */
+    int last_index;
     /* if non-zero, minimum required value for max_queues */
     int num_queues;
     uint64_t features;
-- 
2.25.1




* [PATCH V2 20/21] virtio-net: vhost control virtqueue support
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (18 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 19/21] vhost: record the last virtqueue index for the virtio device Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  2021-09-03  9:10 ` [PATCH V2 21/21] vhost-vdpa: multiqueue support Jason Wang
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch implements control virtqueue support for vhost. This
requires virtio-net to distinguish the datapath queue pairs from the
control virtqueue via is_datapath and to pass the number of each type
of virtqueue to vhost_net_start()/vhost_net_stop().
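
Roughly, the bookkeeping amounts to the sketch below (condensed from
the realize/status hunks in this patch; the helper name is only for
illustration):

static void virtio_net_count_vhost_queues(VirtIONet *n, int *qps, int *cvq)
{
    int i, data_qps = 0;

    /* Only the is_datapath peers form data queue pairs; a backend that
     * also exposes the control virtqueue provides one extra peer. */
    n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
    for (i = 0; i < n->nic_conf.peers.queues; i++) {
        if (n->nic_conf.peers.ncs[i]->is_datapath) {
            data_qps++;
        }
    }
    n->max_qps = MAX(data_qps, 1);

    *qps = n->multiqueue ? n->max_qps : 1;
    *cvq = n->max_ncs - n->max_qps;    /* 0 or 1 today */
}

vhost_net_start()/vhost_net_stop() are then called with (qps, cvq) so
the control virtqueue's vhost_net is started and stopped together with
the datapath ones.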

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/virtio-net.c            | 21 ++++++++++++++++++---
 include/hw/virtio/virtio-net.h |  1 +
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 0a5d9862ec..2523157177 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -244,6 +244,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
     int qps = n->multiqueue ? n->max_qps : 1;
+    int cvq = n->max_ncs - n->max_qps;
 
     if (!get_vhost_net(nc->peer)) {
         return;
@@ -285,14 +286,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
         }
 
         n->vhost_started = 1;
-        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
+        r = vhost_net_start(vdev, n->nic->ncs, qps, cvq);
         if (r < 0) {
             error_report("unable to start vhost net: %d: "
                          "falling back on userspace virtio", -r);
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
+        vhost_net_stop(vdev, n->nic->ncs, qps, cvq);
         n->vhost_started = 0;
     }
 }
@@ -3368,7 +3369,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
+    n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
+
+    /*
+     * Figure out the datapath queue pairs since the backend could
+     * provide control queue via peers as well.
+     */
+    if (n->nic_conf.peers.queues) {
+        for (i = 0; i < n->max_ncs; i++) {
+            if (n->nic_conf.peers.ncs[i]->is_datapath) {
+                ++n->max_qps;
+            }
+        }
+    }
+    n->max_qps = MAX(n->max_qps, 1);
+
     if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
         error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
                    "must be a positive integer less than %d.",
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index a9b6dc252e..ed4659c189 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -194,6 +194,7 @@ struct VirtIONet {
     NICConf nic_conf;
     DeviceState *qdev;
     int multiqueue;
+    uint16_t max_ncs;
     uint16_t max_qps;
     uint16_t curr_qps;
     size_t config_size;
-- 
2.25.1




* [PATCH V2 21/21] vhost-vdpa: multiqueue support
  2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
                   ` (19 preceding siblings ...)
  2021-09-03  9:10 ` [PATCH V2 20/21] virtio-net: vhost control virtqueue support Jason Wang
@ 2021-09-03  9:10 ` Jason Wang
  20 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-03  9:10 UTC (permalink / raw)
  To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, gdawar, lingshan.zhu, lulu

This patch implements multiqueue support for vhost-vdpa. This is done
simply by reading the number of queue pairs from the config space and
initializing the datapath and control path net clients.
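
The probing step amounts to the sketch below (simplified from
vhost_vdpa_get_max_qps() in this patch; error handling and the
VIRTIO_NET_F_CTRL_VQ check are omitted, and the function name is only
for illustration):

static int probe_max_queue_pairs(int fd)
{
    unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
    struct vhost_vdpa_config *config;
    uint64_t features = 0;
    int qps = 1;    /* no VIRTIO_NET_F_MQ means a single queue pair */

    if (ioctl(fd, VHOST_GET_FEATURES, &features)) {
        return -1;
    }

    if (features & (1ULL << VIRTIO_NET_F_MQ)) {
        config = g_malloc0(config_size + sizeof(__virtio16));
        config->off = offsetof(struct virtio_net_config,
                               max_virtqueue_pairs);
        config->len = sizeof(__virtio16);
        if (!ioctl(fd, VHOST_VDPA_GET_CONFIG, config)) {
            qps = lduw_le_p(config->buf);
        }
        g_free(config);
    }

    return qps;
}

One NetClientState is then created per queue pair, plus a control
client when VIRTIO_NET_F_CTRL_VQ is offered.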

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c |   2 +-
 net/vhost-vdpa.c       | 104 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 96 insertions(+), 10 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 94eb9d4069..b5df7594ff 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -578,7 +578,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
     }
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (dev->vq_index + dev->nvqs != dev->last_index) {
         return 0;
     }
 
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 834dab28dd..63cb83d6f4 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -18,6 +18,7 @@
 #include "qemu/error-report.h"
 #include "qemu/option.h"
 #include "qapi/error.h"
+#include <linux/vhost.h>
 #include <sys/ioctl.h>
 #include <err.h>
 #include "standard-headers/linux/virtio_net.h"
@@ -51,6 +52,14 @@ const int vdpa_feature_bits[] = {
     VIRTIO_NET_F_HOST_UFO,
     VIRTIO_NET_F_MRG_RXBUF,
     VIRTIO_NET_F_MTU,
+    VIRTIO_NET_F_CTRL_RX,
+    VIRTIO_NET_F_CTRL_RX_EXTRA,
+    VIRTIO_NET_F_CTRL_VLAN,
+    VIRTIO_NET_F_GUEST_ANNOUNCE,
+    VIRTIO_NET_F_CTRL_MAC_ADDR,
+    VIRTIO_NET_F_RSS,
+    VIRTIO_NET_F_MQ,
+    VIRTIO_NET_F_CTRL_VQ,
     VIRTIO_F_IOMMU_PLATFORM,
     VIRTIO_F_RING_PACKED,
     VIRTIO_NET_F_RSS,
@@ -81,7 +90,8 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
     return ret;
 }
 
-static int vhost_vdpa_add(NetClientState *ncs, void *be)
+static int vhost_vdpa_add(NetClientState *ncs, void *be, int qp_index,
+                          int nvqs)
 {
     VhostNetOptions options;
     struct vhost_net *net = NULL;
@@ -94,7 +104,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
     options.net_backend = ncs;
     options.opaque      = be;
     options.busyloop_timeout = 0;
-    options.nvqs = 2;
+    options.nvqs = nvqs;
 
     net = vhost_net_init(&options);
     if (!net) {
@@ -158,18 +168,28 @@ static NetClientInfo net_vhost_vdpa_info = {
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            const char *device,
                                            const char *name,
-                                           int vdpa_device_fd)
+                                           int vdpa_device_fd,
+                                           int qp_index,
+                                           int nvqs,
+                                           bool is_datapath)
 {
     NetClientState *nc = NULL;
     VhostVDPAState *s;
     int ret = 0;
     assert(name);
-    nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
+    if (is_datapath) {
+        nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
+                                 name);
+    } else {
+        nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
+                                         device, name);
+    }
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
 
     s->vhost_vdpa.device_fd = vdpa_device_fd;
-    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
+    s->vhost_vdpa.index = qp_index;
+    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, qp_index, nvqs);
     if (ret) {
         qemu_del_net_client(nc);
         return NULL;
@@ -195,12 +215,52 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
     return 0;
 }
 
+static int vhost_vdpa_get_max_qps(int fd, int *has_cvq, Error **errp)
+{
+    unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
+    struct vhost_vdpa_config *config;
+    __virtio16 *max_qps;
+    uint64_t features;
+    int ret;
+
+    ret = ioctl(fd, VHOST_GET_FEATURES, &features);
+    if (ret) {
+        error_setg(errp, "Fail to query features from vhost-vDPA device");
+        return ret;
+    }
+
+    if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
+        *has_cvq = 1;
+    } else {
+        *has_cvq = 0;
+    }
+
+    if (features & (1 << VIRTIO_NET_F_MQ)) {
+        config = g_malloc0(config_size + sizeof(*max_qps));
+        config->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
+        config->len = sizeof(*max_qps);
+
+        ret = ioctl(fd, VHOST_VDPA_GET_CONFIG, config);
+        if (ret) {
+            error_setg(errp, "Fail to get config from vhost-vDPA device");
+            return -ret;
+        }
+
+        max_qps = (__virtio16 *)&config->buf;
+
+        return lduw_le_p(max_qps);
+    }
+
+    return 1;
+}
+
 int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
                         NetClientState *peer, Error **errp)
 {
     const NetdevVhostVDPAOptions *opts;
     int vdpa_device_fd;
-    NetClientState *nc;
+    NetClientState **ncs, *nc;
+    int qps, i, has_cvq = 0;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -215,11 +275,37 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -errno;
     }
 
-    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
-    if (!nc) {
+    qps = vhost_vdpa_get_max_qps(vdpa_device_fd, &has_cvq, errp);
+    if (qps < 0) {
         qemu_close(vdpa_device_fd);
-        return -1;
+        return qps;
+    }
+
+    ncs = g_malloc0(sizeof(*ncs) * qps);
+
+    for (i = 0; i < qps; i++) {
+        ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+                                     vdpa_device_fd, i, 2, true);
+        if (!ncs[i])
+            goto err;
+    }
+
+    if (has_cvq) {
+        nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+                                 vdpa_device_fd, i, 1, false);
+        if (!nc)
+            goto err;
     }
 
+    g_free(ncs);
     return 0;
+
+err:
+    if (i) {
+        qemu_del_net_client(ncs[0]);
+    }
+    qemu_close(vdpa_device_fd);
+    g_free(ncs);
+
+    return -1;
 }
-- 
2.25.1




* Re: [PATCH V2 17/21] vhost-net: control virtqueue support
  2021-09-03  9:10 ` [PATCH V2 17/21] vhost-net: control virtqueue support Jason Wang
@ 2021-09-04 20:40   ` Michael S. Tsirkin
  2021-09-06  3:43     ` Jason Wang
  0 siblings, 1 reply; 29+ messages in thread
From: Michael S. Tsirkin @ 2021-09-04 20:40 UTC (permalink / raw)
  To: Jason Wang; +Cc: lulu, qemu-devel, gdawar, eperezma, elic, lingshan.zhu

On Fri, Sep 03, 2021 at 05:10:27PM +0800, Jason Wang wrote:
> We assumed there was no cvq in the past, but this is not true when we
> need control virtqueue support for vhost-user backends. So this patch
> implements control virtqueue support for vhost-net. As with the
> datapath, the control virtqueue is also required to be coupled with
> the NetClientState. vhost_net_start/stop() are tweaked to accept the
> number of datapath queue pairs plus the number of control virtqueues
> so we can start and stop the vhost device.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>


Fails build:

FAILED: libcommon.fa.p/hw_net_vhost_net-stub.c.o 
cc -Ilibcommon.fa.p -I. -Iqapi -Itrace -Iui -Iui/shader -I/usr/include/spice-1 -I/usr/include/spice-server -I/usr/include/cacard -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/nss3 -I/usr/include/nspr4 -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/pixman-1 -I/usr/include/p11-kit-1 -I/usr/include/SDL2 -I/usr/include/libpng16 -I/usr/include/virgl -I/usr/include/libusb-1.0 -I/usr/include/slirp -I/usr/include/gtk-3.0 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/freetype2 -I/usr/include/fribidi -I/usr/include/libxml2 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/gio-unix-2.0 -I/usr/include/atk-1.0 -I/usr/include/at-spi2-atk/2.0 -I/usr/include/dbus-1.0 -I/usr/lib64/dbus-1.0/include -I/usr/include/at-spi-2.0 -I/usr/include/vte-2.91 -I/usr/include/capstone -fdiagnostics-color=auto -pipe -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g -isystem /scm/qemu/linux-headers -isystem linux-headers -iquote . -iquote /scm/qemu -iquote /scm/qemu/include -iquote /scm/qemu/disas/libvixl -iquote /scm/qemu/tcg/i386 -pthread -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wimplicit-fallthrough=2 -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -fstack-protector-strong -fPIC -D_DEFAULT_SOURCE -D_XOPEN_SOURCE=600 -DNCURSES_WIDECHAR -DSTRUCT_IOVEC_DEFINED -D_REENTRANT -Wno-undef -MD -MQ libcommon.fa.p/hw_net_vhost_net-stub.c.o -MF libcommon.fa.p/hw_net_vhost_net-stub.c.o.d -o libcommon.fa.p/hw_net_vhost_net-stub.c.o -c ../hw/net/vhost_net-stub.c
../hw/net/vhost_net-stub.c:34:5: error: conflicting types for ‘vhost_net_start’
   34 | int vhost_net_start(VirtIODevice *dev,
      |     ^~~~~~~~~~~~~~~
In file included from ../hw/net/vhost_net-stub.c:19:
/scm/qemu/include/net/vhost_net.h:24:5: note: previous declaration of ‘vhost_net_start’ was here
   24 | int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
      |     ^~~~~~~~~~~~~~~
../hw/net/vhost_net-stub.c:40:6: error: conflicting types for ‘vhost_net_stop’
   40 | void vhost_net_stop(VirtIODevice *dev,
      |      ^~~~~~~~~~~~~~
In file included from ../hw/net/vhost_net-stub.c:19:
/scm/qemu/include/net/vhost_net.h:26:6: note: previous declaration of ‘vhost_net_stop’ was here
   26 | void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
      |      ^~~~~~~~~~~~~~
ninja: build stopped: subcommand failed.
make[1]: *** [Makefile:156: run-ninja] Error 1
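
Presumably hw/net/vhost_net-stub.c just needs the matching prototype
update; a minimal sketch, assuming the stub bodies stay as trivial as
they are today:

int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
                    int data_qps, int cvq)
{
    return -ENOSYS;    /* return value assumed unchanged from the stub */
}

void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
                    int data_qps, int cvq)
{
}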



> ---
>  hw/net/vhost_net.c      | 43 ++++++++++++++++++++++++++++++-----------
>  hw/net/virtio-net.c     |  4 ++--
>  include/net/vhost_net.h |  6 ++++--
>  3 files changed, 38 insertions(+), 15 deletions(-)
> 
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index 386ec2eaa2..7e0b60b4d9 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -315,11 +315,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
>  }
>  
>  int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> -                    int total_queues)
> +                    int data_qps, int cvq)
>  {
>      BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
>      VirtioBusState *vbus = VIRTIO_BUS(qbus);
>      VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> +    int total_notifiers = data_qps * 2 + cvq;
> +    VirtIONet *n = VIRTIO_NET(dev);
> +    int nvhosts = data_qps + cvq;
>      struct vhost_net *net;
>      int r, e, i;
>      NetClientState *peer;
> @@ -329,9 +332,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>          return -ENOSYS;
>      }
>  
> -    for (i = 0; i < total_queues; i++) {
> +    for (i = 0; i < nvhosts; i++) {
> +
> +        if (i < data_qps) {
> +            peer = qemu_get_peer(ncs, i);
> +        } else { /* Control Virtqueue */
> +            peer = qemu_get_peer(ncs, n->max_queues);
> +        }
>  
> -        peer = qemu_get_peer(ncs, i);
>          net = get_vhost_net(peer);
>          vhost_net_set_vq_index(net, i * 2);
>  
> @@ -344,14 +352,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>          }
>       }
>  
> -    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
> +    r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
>      if (r < 0) {
>          error_report("Error binding guest notifier: %d", -r);
>          goto err;
>      }
>  
> -    for (i = 0; i < total_queues; i++) {
> -        peer = qemu_get_peer(ncs, i);
> +    for (i = 0; i < nvhosts; i++) {
> +        if (i < data_qps) {
> +            peer = qemu_get_peer(ncs, i);
> +        } else {
> +            peer = qemu_get_peer(ncs, n->max_queues);
> +        }
>          r = vhost_net_start_one(get_vhost_net(peer), dev);
>  
>          if (r < 0) {
> @@ -375,7 +387,7 @@ err_start:
>          peer = qemu_get_peer(ncs , i);
>          vhost_net_stop_one(get_vhost_net(peer), dev);
>      }
> -    e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> +    e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
>      if (e < 0) {
>          fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
>          fflush(stderr);
> @@ -385,18 +397,27 @@ err:
>  }
>  
>  void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> -                    int total_queues)
> +                    int data_qps, int cvq)
>  {
>      BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
>      VirtioBusState *vbus = VIRTIO_BUS(qbus);
>      VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> +    VirtIONet *n = VIRTIO_NET(dev);
> +    NetClientState *peer;
> +    int total_notifiers = data_qps * 2 + cvq;
> +    int nvhosts = data_qps + cvq;
>      int i, r;
>  
> -    for (i = 0; i < total_queues; i++) {
> -        vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
> +    for (i = 0; i < nvhosts; i++) {
> +        if (i < data_qps) {
> +            peer = qemu_get_peer(ncs, i);
> +        } else {
> +            peer = qemu_get_peer(ncs, n->max_queues);
> +        }
> +        vhost_net_stop_one(get_vhost_net(peer), dev);
>      }
>  
> -    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> +    r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
>      if (r < 0) {
>          fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
>          fflush(stderr);
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 16d20cdee5..8fccbaa44c 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>          }
>  
>          n->vhost_started = 1;
> -        r = vhost_net_start(vdev, n->nic->ncs, queues);
> +        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
>          if (r < 0) {
>              error_report("unable to start vhost net: %d: "
>                           "falling back on userspace virtio", -r);
>              n->vhost_started = 0;
>          }
>      } else {
> -        vhost_net_stop(vdev, n->nic->ncs, queues);
> +        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
>          n->vhost_started = 0;
>      }
>  }
> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> index fba40cf695..e656e38af9 100644
> --- a/include/net/vhost_net.h
> +++ b/include/net/vhost_net.h
> @@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
>  uint64_t vhost_net_get_max_queues(VHostNetState *net);
>  struct vhost_net *vhost_net_init(VhostNetOptions *options);
>  
> -int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> -void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> +int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> +                    int data_qps, int cvq);
> +void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> +                    int data_qps, int cvq);
>  
>  void vhost_net_cleanup(VHostNetState *net);
>  
> -- 
> 2.25.1




* Re: [PATCH V2 12/21] vhost-vdpa: open device fd in net_init_vhost_vdpa()
  2021-09-03  9:10 ` [PATCH V2 12/21] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
@ 2021-09-04 20:41   ` Michael S. Tsirkin
  0 siblings, 0 replies; 29+ messages in thread
From: Michael S. Tsirkin @ 2021-09-04 20:41 UTC (permalink / raw)
  To: Jason Wang
  Cc: lulu, qemu-devel, gdawar, eperezma, elic, lingshan.zhu,
	Stefano Garzarella

On Fri, Sep 03, 2021 at 05:10:22PM +0800, Jason Wang wrote:
> This path switches to open device fd in net_init_vhost_vpda(). This is

patch?

> used to prepare for the multiqueue support.
> 
> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  net/vhost-vdpa.c | 23 +++++++++++++++--------
>  1 file changed, 15 insertions(+), 8 deletions(-)
> 
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 912686457c..73d29a74ef 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -156,24 +156,19 @@ static NetClientInfo net_vhost_vdpa_info = {
>  };
>  
>  static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
> -                               const char *name, const char *vhostdev)
> +                               const char *name, int vdpa_device_fd)
>  {
>      NetClientState *nc = NULL;
>      VhostVDPAState *s;
> -    int vdpa_device_fd = -1;
>      int ret = 0;
>      assert(name);
>      nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
>      snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
>      s = DO_UPCAST(VhostVDPAState, nc, nc);
> -    vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
> -    if (vdpa_device_fd == -1) {
> -        return -errno;
> -    }
> +
>      s->vhost_vdpa.device_fd = vdpa_device_fd;
>      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
>      if (ret) {
> -        qemu_close(vdpa_device_fd);
>          qemu_del_net_client(nc);
>      }
>      return ret;
> @@ -201,6 +196,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>                          NetClientState *peer, Error **errp)
>  {
>      const NetdevVhostVDPAOptions *opts;
> +    int vdpa_device_fd, ret;
>  
>      assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>      opts = &netdev->u.vhost_vdpa;
> @@ -209,5 +205,16 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>                            (char *)name, errp)) {
>          return -1;
>      }
> -    return net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, opts->vhostdev);
> +
> +    vdpa_device_fd = qemu_open_old(opts->vhostdev, O_RDWR);
> +    if (vdpa_device_fd == -1) {
> +        return -errno;
> +    }
> +
> +    ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
> +    if (ret) {
> +        qemu_close(vdpa_device_fd);
> +    }
> +
> +    return ret;
>  }
> -- 
> 2.25.1




* Re: [PATCH V2 18/21] virito-net: use "qps" instead of "queues" when possible
  2021-09-03  9:10 ` [PATCH V2 18/21] virito-net: use "qps" instead of "queues" when possible Jason Wang
@ 2021-09-04 20:42   ` Michael S. Tsirkin
  2021-09-06  3:42     ` Jason Wang
  0 siblings, 1 reply; 29+ messages in thread
From: Michael S. Tsirkin @ 2021-09-04 20:42 UTC (permalink / raw)
  To: Jason Wang; +Cc: lulu, qemu-devel, gdawar, eperezma, elic, lingshan.zhu

On Fri, Sep 03, 2021 at 05:10:28PM +0800, Jason Wang wrote:
> Most of the time, "queues" really means queue pairs. So this patch
> switches to using "qps" to avoid confusion.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

This is far from standard terminology, except for people like me,
whose minds are permanently warped by close contact with InfiniBand
hardware. Please eschew the abbreviation; just say queue_pairs.

> ---
>  hw/net/vhost_net.c             |   6 +-
>  hw/net/virtio-net.c            | 150 ++++++++++++++++-----------------
>  include/hw/virtio/virtio-net.h |   4 +-
>  3 files changed, 80 insertions(+), 80 deletions(-)
> 
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index 7e0b60b4d9..b40fdfa625 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -337,7 +337,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>          if (i < data_qps) {
>              peer = qemu_get_peer(ncs, i);
>          } else { /* Control Virtqueue */
> -            peer = qemu_get_peer(ncs, n->max_queues);
> +            peer = qemu_get_peer(ncs, n->max_qps);
>          }
>  
>          net = get_vhost_net(peer);
> @@ -362,7 +362,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>          if (i < data_qps) {
>              peer = qemu_get_peer(ncs, i);
>          } else {
> -            peer = qemu_get_peer(ncs, n->max_queues);
> +            peer = qemu_get_peer(ncs, n->max_qps);
>          }
>          r = vhost_net_start_one(get_vhost_net(peer), dev);
>  
> @@ -412,7 +412,7 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
>          if (i < data_qps) {
>              peer = qemu_get_peer(ncs, i);
>          } else {
> -            peer = qemu_get_peer(ncs, n->max_queues);
> +            peer = qemu_get_peer(ncs, n->max_qps);
>          }
>          vhost_net_stop_one(get_vhost_net(peer), dev);
>      }
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 8fccbaa44c..0a5d9862ec 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -54,7 +54,7 @@
>  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
>  #define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
>  
> -/* for now, only allow larger queues; with virtio-1, guest can downsize */
> +/* for now, only allow larger qps; with virtio-1, guest can downsize */
>  #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
>  #define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
>  
> @@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
>      int ret = 0;
>      memset(&netcfg, 0 , sizeof(struct virtio_net_config));
>      virtio_stw_p(vdev, &netcfg.status, n->status);
> -    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
> +    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_qps);
>      virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
>      memcpy(netcfg.mac, n->mac, ETH_ALEN);
>      virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
> @@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      NetClientState *nc = qemu_get_queue(n->nic);
> -    int queues = n->multiqueue ? n->max_queues : 1;
> +    int qps = n->multiqueue ? n->max_qps : 1;
>  
>      if (!get_vhost_net(nc->peer)) {
>          return;
> @@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>          /* Any packets outstanding? Purge them to avoid touching rings
>           * when vhost is running.
>           */
> -        for (i = 0;  i < queues; i++) {
> +        for (i = 0;  i < qps; i++) {
>              NetClientState *qnc = qemu_get_subqueue(n->nic, i);
>  
>              /* Purge both directions: TX and RX. */
> @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>          }
>  
>          n->vhost_started = 1;
> -        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
> +        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
>          if (r < 0) {
>              error_report("unable to start vhost net: %d: "
>                           "falling back on userspace virtio", -r);
>              n->vhost_started = 0;
>          }
>      } else {
> -        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
> +        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
>          n->vhost_started = 0;
>      }
>  }
> @@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
>  }
>  
>  static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
> -                                       int queues, bool enable)
> +                                       int qps, bool enable)
>  {
>      int i;
>  
> -    for (i = 0; i < queues; i++) {
> +    for (i = 0; i < qps; i++) {
>          if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
>              enable) {
>              while (--i >= 0) {
> @@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
>  static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> -    int queues = n->multiqueue ? n->max_queues : 1;
> +    int qps = n->multiqueue ? n->max_qps : 1;
>  
>      if (virtio_net_started(n, status)) {
>          /* Before using the device, we tell the network backend about the
> @@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
>           * virtio-net code.
>           */
>          n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
> -                                                            queues, true);
> +                                                            qps, true);
>      } else if (virtio_net_started(n, vdev->status)) {
>          /* After using the device, we need to reset the network backend to
>           * the default (guest native endianness), otherwise the guest may
>           * lose network connectivity if it is rebooted into a different
>           * endianness.
>           */
> -        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
> +        virtio_net_set_vnet_endian(vdev, n->nic->ncs, qps, false);
>      }
>  }
>  
> @@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
>      virtio_net_vnet_endian_status(n, status);
>      virtio_net_vhost_status(n, status);
>  
> -    for (i = 0; i < n->max_queues; i++) {
> +    for (i = 0; i < n->max_qps; i++) {
>          NetClientState *ncs = qemu_get_subqueue(n->nic, i);
>          bool queue_started;
>          q = &n->vqs[i];
>  
> -        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
> +        if ((!n->multiqueue && i != 0) || i >= n->curr_qps) {
>              queue_status = 0;
>          } else {
>              queue_status = status;
> @@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
>      n->nouni = 0;
>      n->nobcast = 0;
>      /* multiqueue is disabled by default */
> -    n->curr_queues = 1;
> +    n->curr_qps = 1;
>      timer_del(n->announce_timer.tm);
>      n->announce_timer.round = 0;
>      n->status &= ~VIRTIO_NET_S_ANNOUNCE;
> @@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
>      memset(n->vlans, 0, MAX_VLAN >> 3);
>  
>      /* Flush any async TX */
> -    for (i = 0;  i < n->max_queues; i++) {
> +    for (i = 0;  i < n->max_qps; i++) {
>          NetClientState *nc = qemu_get_subqueue(n->nic, i);
>  
>          if (nc->peer) {
> @@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
>              sizeof(struct virtio_net_hdr);
>      }
>  
> -    for (i = 0; i < n->max_queues; i++) {
> +    for (i = 0; i < n->max_qps; i++) {
>          nc = qemu_get_subqueue(n->nic, i);
>  
>          if (peer_has_vnet_hdr(n) &&
> @@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
>          return 0;
>      }
>  
> -    if (n->max_queues == 1) {
> +    if (n->max_qps == 1) {
>          return 0;
>      }
>  
> @@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
>      return tap_disable(nc->peer);
>  }
>  
> -static void virtio_net_set_queues(VirtIONet *n)
> +static void virtio_net_set_qps(VirtIONet *n)
>  {
>      int i;
>      int r;
> @@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
>          return;
>      }
>  
> -    for (i = 0; i < n->max_queues; i++) {
> -        if (i < n->curr_queues) {
> +    for (i = 0; i < n->max_qps; i++) {
> +        if (i < n->curr_qps) {
>              r = peer_attach(n, i);
>              assert(!r);
>          } else {
> @@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
>          virtio_net_apply_guest_offloads(n);
>      }
>  
> -    for (i = 0;  i < n->max_queues; i++) {
> +    for (i = 0;  i < n->max_qps; i++) {
>          NetClientState *nc = qemu_get_subqueue(n->nic, i);
>  
>          if (!get_vhost_net(nc->peer)) {
> @@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      struct virtio_net_rss_config cfg;
>      size_t s, offset = 0, size_get;
> -    uint16_t queues, i;
> +    uint16_t qps, i;
>      struct {
>          uint16_t us;
>          uint8_t b;
> @@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
>      }
>      n->rss_data.default_queue = do_rss ?
>          virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
> -    if (n->rss_data.default_queue >= n->max_queues) {
> +    if (n->rss_data.default_queue >= n->max_qps) {
>          err_msg = "Invalid default queue";
>          err_value = n->rss_data.default_queue;
>          goto error;
> @@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
>      size_get = sizeof(temp);
>      s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
>      if (s != size_get) {
> -        err_msg = "Can't get queues";
> +        err_msg = "Can't get qps";
>          err_value = (uint32_t)s;
>          goto error;
>      }
> -    queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
> -    if (queues == 0 || queues > n->max_queues) {
> -        err_msg = "Invalid number of queues";
> -        err_value = queues;
> +    qps = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_qps;
> +    if (qps == 0 || qps > n->max_qps) {
> +        err_msg = "Invalid number of qps";
> +        err_value = qps;
>          goto error;
>      }
>      if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
> @@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
>      }
>      if (!temp.b && !n->rss_data.hash_types) {
>          virtio_net_disable_rss(n);
> -        return queues;
> +        return qps;
>      }
>      offset += size_get;
>      size_get = temp.b;
> @@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
>      trace_virtio_net_rss_enable(n->rss_data.hash_types,
>                                  n->rss_data.indirections_len,
>                                  temp.b);
> -    return queues;
> +    return qps;
>  error:
>      trace_virtio_net_rss_error(err_msg, err_value);
>      virtio_net_disable_rss(n);
> @@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>                                  struct iovec *iov, unsigned int iov_cnt)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> -    uint16_t queues;
> +    uint16_t qps;
>  
>      virtio_net_disable_rss(n);
>      if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
> -        queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
> -        return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
> +        qps = virtio_net_handle_rss(n, iov, iov_cnt, false);
> +        return qps ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
>      }
>      if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> -        queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
> +        qps = virtio_net_handle_rss(n, iov, iov_cnt, true);
>      } else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
>          struct virtio_net_ctrl_mq mq;
>          size_t s;
> @@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>          if (s != sizeof(mq)) {
>              return VIRTIO_NET_ERR;
>          }
> -        queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
> +        qps = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
>  
>      } else {
>          return VIRTIO_NET_ERR;
>      }
>  
> -    if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> -        queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> -        queues > n->max_queues ||
> +    if (qps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> +        qps > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> +        qps > n->max_qps ||
>          !n->multiqueue) {
>          return VIRTIO_NET_ERR;
>      }
>  
> -    n->curr_queues = queues;
> -    /* stop the backend before changing the number of queues to avoid handling a
> +    n->curr_qps = qps;
> +    /* stop the backend before changing the number of qps to avoid handling a
>       * disabled queue */
>      virtio_net_set_status(vdev, vdev->status);
> -    virtio_net_set_queues(n);
> +    virtio_net_set_qps(n);
>  
>      return VIRTIO_NET_OK;
>  }
> @@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
>          return false;
>      }
>  
> -    if (nc->queue_index >= n->curr_queues) {
> +    if (nc->queue_index >= n->curr_qps) {
>          return false;
>      }
>  
> @@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
>      virtio_del_queue(vdev, index * 2 + 1);
>  }
>  
> -static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
> +static void virtio_net_change_num_qps(VirtIONet *n, int new_max_qps)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      int old_num_queues = virtio_get_num_queues(vdev);
> -    int new_num_queues = new_max_queues * 2 + 1;
> +    int new_num_queues = new_max_qps * 2 + 1;
>      int i;
>  
>      assert(old_num_queues >= 3);
> @@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
>  
>  static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
>  {
> -    int max = multiqueue ? n->max_queues : 1;
> +    int max = multiqueue ? n->max_qps : 1;
>  
>      n->multiqueue = multiqueue;
> -    virtio_net_change_num_queues(n, max);
> +    virtio_net_change_num_qps(n, max);
>  
> -    virtio_net_set_queues(n);
> +    virtio_net_set_qps(n);
>  }
>  
>  static int virtio_net_post_load_device(void *opaque, int version_id)
> @@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>       */
>      n->saved_guest_offloads = n->curr_guest_offloads;
>  
> -    virtio_net_set_queues(n);
> +    virtio_net_set_qps(n);
>  
>      /* Find the first multicast entry in the saved MAC filter */
>      for (i = 0; i < n->mac_table.in_use; i++) {
> @@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
>      /* nc.link_down can't be migrated, so infer link_down according
>       * to link status bit in n->status */
>      link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
> -    for (i = 0; i < n->max_queues; i++) {
> +    for (i = 0; i < n->max_qps; i++) {
>          qemu_get_subqueue(n->nic, i)->link_down = link_down;
>      }
>  
> @@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
>     },
>  };
>  
> -static bool max_queues_gt_1(void *opaque, int version_id)
> +static bool max_qps_gt_1(void *opaque, int version_id)
>  {
> -    return VIRTIO_NET(opaque)->max_queues > 1;
> +    return VIRTIO_NET(opaque)->max_qps > 1;
>  }
>  
>  static bool has_ctrl_guest_offloads(void *opaque, int version_id)
> @@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
>  struct VirtIONetMigTmp {
>      VirtIONet      *parent;
>      VirtIONetQueue *vqs_1;
> -    uint16_t        curr_queues_1;
> +    uint16_t        curr_qps_1;
>      uint8_t         has_ufo;
>      uint32_t        has_vnet_hdr;
>  };
>  
>  /* The 2nd and subsequent tx_waiting flags are loaded later than
> - * the 1st entry in the queues and only if there's more than one
> + * the 1st entry in the qps and only if there's more than one
>   * entry.  We use the tmp mechanism to calculate a temporary
>   * pointer and count and also validate the count.
>   */
> @@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
>      struct VirtIONetMigTmp *tmp = opaque;
>  
>      tmp->vqs_1 = tmp->parent->vqs + 1;
> -    tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
> -    if (tmp->parent->curr_queues == 0) {
> -        tmp->curr_queues_1 = 0;
> +    tmp->curr_qps_1 = tmp->parent->curr_qps - 1;
> +    if (tmp->parent->curr_qps == 0) {
> +        tmp->curr_qps_1 = 0;
>      }
>  
>      return 0;
> @@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
>      /* Reuse the pointer setup from save */
>      virtio_net_tx_waiting_pre_save(opaque);
>  
> -    if (tmp->parent->curr_queues > tmp->parent->max_queues) {
> -        error_report("virtio-net: curr_queues %x > max_queues %x",
> -            tmp->parent->curr_queues, tmp->parent->max_queues);
> +    if (tmp->parent->curr_qps > tmp->parent->max_qps) {
> +        error_report("virtio-net: curr_qps %x > max_qps %x",
> +            tmp->parent->curr_qps, tmp->parent->max_qps);
>  
>          return -EINVAL;
>      }
> @@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
>      .pre_save  = virtio_net_tx_waiting_pre_save,
>      .fields    = (VMStateField[]) {
>          VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
> -                                     curr_queues_1,
> +                                     curr_qps_1,
>                                       vmstate_virtio_net_queue_tx_waiting,
>                                       struct VirtIONetQueue),
>          VMSTATE_END_OF_LIST()
> @@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
>          VMSTATE_UINT8(nobcast, VirtIONet),
>          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
>                           vmstate_virtio_net_has_ufo),
> -        VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
> +        VMSTATE_SINGLE_TEST(max_qps, VirtIONet, max_qps_gt_1, 0,
>                              vmstate_info_uint16_equal, uint16_t),
> -        VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
> +        VMSTATE_UINT16_TEST(curr_qps, VirtIONet, max_qps_gt_1),
>          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
>                           vmstate_virtio_net_tx_waiting),
>          VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
> @@ -3368,16 +3368,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>          return;
>      }
>  
> -    n->max_queues = MAX(n->nic_conf.peers.queues, 1);
> -    if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
> -        error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
> +    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
> +    if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
> +        error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
>                     "must be a positive integer less than %d.",
> -                   n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
> +                   n->max_qps, (VIRTIO_QUEUE_MAX - 1) / 2);
>          virtio_cleanup(vdev);
>          return;
>      }
> -    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
> -    n->curr_queues = 1;
> +    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_qps);
> +    n->curr_qps = 1;
>      n->tx_timeout = n->net_conf.txtimer;
>  
>      if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
> @@ -3391,7 +3391,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>      n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
>                                      n->net_conf.tx_queue_size);
>  
> -    for (i = 0; i < n->max_queues; i++) {
> +    for (i = 0; i < n->max_qps; i++) {
>          virtio_net_add_queue(n, i);
>      }
>  
> @@ -3415,13 +3415,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>                                object_get_typename(OBJECT(dev)), dev->id, n);
>      }
>  
> -    for (i = 0; i < n->max_queues; i++) {
> +    for (i = 0; i < n->max_qps; i++) {
>          n->nic->ncs[i].do_not_pad = true;
>      }
>  
>      peer_test_vnet_hdr(n);
>      if (peer_has_vnet_hdr(n)) {
> -        for (i = 0; i < n->max_queues; i++) {
> +        for (i = 0; i < n->max_qps; i++) {
>              qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
>          }
>          n->host_hdr_len = sizeof(struct virtio_net_hdr);
> @@ -3463,7 +3463,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>      VirtIONet *n = VIRTIO_NET(dev);
> -    int i, max_queues;
> +    int i, max_qps;
>  
>      if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
>          virtio_net_unload_ebpf(n);
> @@ -3485,12 +3485,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
>          remove_migration_state_change_notifier(&n->migration_state);
>      }
>  
> -    max_queues = n->multiqueue ? n->max_queues : 1;
> -    for (i = 0; i < max_queues; i++) {
> +    max_qps = n->multiqueue ? n->max_qps : 1;
> +    for (i = 0; i < max_qps; i++) {
>          virtio_net_del_queue(n, i);
>      }
>      /* delete also control vq */
> -    virtio_del_queue(vdev, max_queues * 2);
> +    virtio_del_queue(vdev, max_qps * 2);
>      qemu_announce_timer_del(&n->announce_timer, false);
>      g_free(n->vqs);
>      qemu_del_nic(n->nic);
> diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
> index 824a69c23f..a9b6dc252e 100644
> --- a/include/hw/virtio/virtio-net.h
> +++ b/include/hw/virtio/virtio-net.h
> @@ -194,8 +194,8 @@ struct VirtIONet {
>      NICConf nic_conf;
>      DeviceState *qdev;
>      int multiqueue;
> -    uint16_t max_queues;
> -    uint16_t curr_queues;
> +    uint16_t max_qps;
> +    uint16_t curr_qps;
>      size_t config_size;
>      char *netclient_name;
>      char *netclient_type;
> -- 
> 2.25.1



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH V2 18/21] virito-net: use "qps" instead of "queues" when possible
  2021-09-04 20:42   ` Michael S. Tsirkin
@ 2021-09-06  3:42     ` Jason Wang
  2021-09-06  5:49       ` Michael S. Tsirkin
  0 siblings, 1 reply; 29+ messages in thread
From: Jason Wang @ 2021-09-06  3:42 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Cindy Lu, qemu-devel, Gautam Dawar, eperezma, Eli Cohen, Zhu Lingshan

On Sun, Sep 5, 2021 at 4:42 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Fri, Sep 03, 2021 at 05:10:28PM +0800, Jason Wang wrote:
> > Most of the time, "queues" really means queue pairs. So this patch
> > switches to using "qps" to avoid confusion.
> >
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
>
> This is far from standard terminology, except for people like me
> whose minds are permanently warped by close contact with infiniband
> hardware. Please eschew abbreviation, just say queue_pairs.

Ok, I will do that in the next version.
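
To be concrete, the end result would look roughly like the below (just
this patch with s/qps/queue_pairs/ applied everywhere, an untested
sketch):

/* include/hw/virtio/virtio-net.h */
struct VirtIONet {
    ...
    int multiqueue;
    uint16_t max_queue_pairs;    /* was max_queues, max_qps in this version */
    uint16_t curr_queue_pairs;   /* was curr_queues, curr_qps in this version */
    ...
};

/* hw/net/virtio-net.c: helpers follow the same spelling */
static bool max_queue_pairs_gt_1(void *opaque, int version_id)
{
    return VIRTIO_NET(opaque)->max_queue_pairs > 1;
}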

Thanks

>
> > ---
> >  hw/net/vhost_net.c             |   6 +-
> >  hw/net/virtio-net.c            | 150 ++++++++++++++++-----------------
> >  include/hw/virtio/virtio-net.h |   4 +-
> >  3 files changed, 80 insertions(+), 80 deletions(-)
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index 7e0b60b4d9..b40fdfa625 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -337,7 +337,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> >          if (i < data_qps) {
> >              peer = qemu_get_peer(ncs, i);
> >          } else { /* Control Virtqueue */
> > -            peer = qemu_get_peer(ncs, n->max_queues);
> > +            peer = qemu_get_peer(ncs, n->max_qps);
> >          }
> >
> >          net = get_vhost_net(peer);
> > @@ -362,7 +362,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> >          if (i < data_qps) {
> >              peer = qemu_get_peer(ncs, i);
> >          } else {
> > -            peer = qemu_get_peer(ncs, n->max_queues);
> > +            peer = qemu_get_peer(ncs, n->max_qps);
> >          }
> >          r = vhost_net_start_one(get_vhost_net(peer), dev);
> >
> > @@ -412,7 +412,7 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> >          if (i < data_qps) {
> >              peer = qemu_get_peer(ncs, i);
> >          } else {
> > -            peer = qemu_get_peer(ncs, n->max_queues);
> > +            peer = qemu_get_peer(ncs, n->max_qps);
> >          }
> >          vhost_net_stop_one(get_vhost_net(peer), dev);
> >      }
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 8fccbaa44c..0a5d9862ec 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -54,7 +54,7 @@
> >  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> >  #define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
> >
> > -/* for now, only allow larger queues; with virtio-1, guest can downsize */
> > +/* for now, only allow larger qps; with virtio-1, guest can downsize */
> >  #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
> >  #define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
> >
> > @@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
> >      int ret = 0;
> >      memset(&netcfg, 0 , sizeof(struct virtio_net_config));
> >      virtio_stw_p(vdev, &netcfg.status, n->status);
> > -    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
> > +    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_qps);
> >      virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
> >      memcpy(netcfg.mac, n->mac, ETH_ALEN);
> >      virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
> > @@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >      NetClientState *nc = qemu_get_queue(n->nic);
> > -    int queues = n->multiqueue ? n->max_queues : 1;
> > +    int qps = n->multiqueue ? n->max_qps : 1;
> >
> >      if (!get_vhost_net(nc->peer)) {
> >          return;
> > @@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> >          /* Any packets outstanding? Purge them to avoid touching rings
> >           * when vhost is running.
> >           */
> > -        for (i = 0;  i < queues; i++) {
> > +        for (i = 0;  i < qps; i++) {
> >              NetClientState *qnc = qemu_get_subqueue(n->nic, i);
> >
> >              /* Purge both directions: TX and RX. */
> > @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> >          }
> >
> >          n->vhost_started = 1;
> > -        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
> > +        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
> >          if (r < 0) {
> >              error_report("unable to start vhost net: %d: "
> >                           "falling back on userspace virtio", -r);
> >              n->vhost_started = 0;
> >          }
> >      } else {
> > -        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
> > +        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
> >          n->vhost_started = 0;
> >      }
> >  }
> > @@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
> >  }
> >
> >  static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
> > -                                       int queues, bool enable)
> > +                                       int qps, bool enable)
> >  {
> >      int i;
> >
> > -    for (i = 0; i < queues; i++) {
> > +    for (i = 0; i < qps; i++) {
> >          if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
> >              enable) {
> >              while (--i >= 0) {
> > @@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
> >  static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > -    int queues = n->multiqueue ? n->max_queues : 1;
> > +    int qps = n->multiqueue ? n->max_qps : 1;
> >
> >      if (virtio_net_started(n, status)) {
> >          /* Before using the device, we tell the network backend about the
> > @@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> >           * virtio-net code.
> >           */
> >          n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
> > -                                                            queues, true);
> > +                                                            qps, true);
> >      } else if (virtio_net_started(n, vdev->status)) {
> >          /* After using the device, we need to reset the network backend to
> >           * the default (guest native endianness), otherwise the guest may
> >           * lose network connectivity if it is rebooted into a different
> >           * endianness.
> >           */
> > -        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
> > +        virtio_net_set_vnet_endian(vdev, n->nic->ncs, qps, false);
> >      }
> >  }
> >
> > @@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> >      virtio_net_vnet_endian_status(n, status);
> >      virtio_net_vhost_status(n, status);
> >
> > -    for (i = 0; i < n->max_queues; i++) {
> > +    for (i = 0; i < n->max_qps; i++) {
> >          NetClientState *ncs = qemu_get_subqueue(n->nic, i);
> >          bool queue_started;
> >          q = &n->vqs[i];
> >
> > -        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
> > +        if ((!n->multiqueue && i != 0) || i >= n->curr_qps) {
> >              queue_status = 0;
> >          } else {
> >              queue_status = status;
> > @@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
> >      n->nouni = 0;
> >      n->nobcast = 0;
> >      /* multiqueue is disabled by default */
> > -    n->curr_queues = 1;
> > +    n->curr_qps = 1;
> >      timer_del(n->announce_timer.tm);
> >      n->announce_timer.round = 0;
> >      n->status &= ~VIRTIO_NET_S_ANNOUNCE;
> > @@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
> >      memset(n->vlans, 0, MAX_VLAN >> 3);
> >
> >      /* Flush any async TX */
> > -    for (i = 0;  i < n->max_queues; i++) {
> > +    for (i = 0;  i < n->max_qps; i++) {
> >          NetClientState *nc = qemu_get_subqueue(n->nic, i);
> >
> >          if (nc->peer) {
> > @@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
> >              sizeof(struct virtio_net_hdr);
> >      }
> >
> > -    for (i = 0; i < n->max_queues; i++) {
> > +    for (i = 0; i < n->max_qps; i++) {
> >          nc = qemu_get_subqueue(n->nic, i);
> >
> >          if (peer_has_vnet_hdr(n) &&
> > @@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
> >          return 0;
> >      }
> >
> > -    if (n->max_queues == 1) {
> > +    if (n->max_qps == 1) {
> >          return 0;
> >      }
> >
> > @@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
> >      return tap_disable(nc->peer);
> >  }
> >
> > -static void virtio_net_set_queues(VirtIONet *n)
> > +static void virtio_net_set_qps(VirtIONet *n)
> >  {
> >      int i;
> >      int r;
> > @@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
> >          return;
> >      }
> >
> > -    for (i = 0; i < n->max_queues; i++) {
> > -        if (i < n->curr_queues) {
> > +    for (i = 0; i < n->max_qps; i++) {
> > +        if (i < n->curr_qps) {
> >              r = peer_attach(n, i);
> >              assert(!r);
> >          } else {
> > @@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
> >          virtio_net_apply_guest_offloads(n);
> >      }
> >
> > -    for (i = 0;  i < n->max_queues; i++) {
> > +    for (i = 0;  i < n->max_qps; i++) {
> >          NetClientState *nc = qemu_get_subqueue(n->nic, i);
> >
> >          if (!get_vhost_net(nc->peer)) {
> > @@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >      struct virtio_net_rss_config cfg;
> >      size_t s, offset = 0, size_get;
> > -    uint16_t queues, i;
> > +    uint16_t qps, i;
> >      struct {
> >          uint16_t us;
> >          uint8_t b;
> > @@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> >      }
> >      n->rss_data.default_queue = do_rss ?
> >          virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
> > -    if (n->rss_data.default_queue >= n->max_queues) {
> > +    if (n->rss_data.default_queue >= n->max_qps) {
> >          err_msg = "Invalid default queue";
> >          err_value = n->rss_data.default_queue;
> >          goto error;
> > @@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> >      size_get = sizeof(temp);
> >      s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
> >      if (s != size_get) {
> > -        err_msg = "Can't get queues";
> > +        err_msg = "Can't get qps";
> >          err_value = (uint32_t)s;
> >          goto error;
> >      }
> > -    queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
> > -    if (queues == 0 || queues > n->max_queues) {
> > -        err_msg = "Invalid number of queues";
> > -        err_value = queues;
> > +    qps = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_qps;
> > +    if (qps == 0 || qps > n->max_qps) {
> > +        err_msg = "Invalid number of qps";
> > +        err_value = qps;
> >          goto error;
> >      }
> >      if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
> > @@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> >      }
> >      if (!temp.b && !n->rss_data.hash_types) {
> >          virtio_net_disable_rss(n);
> > -        return queues;
> > +        return qps;
> >      }
> >      offset += size_get;
> >      size_get = temp.b;
> > @@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> >      trace_virtio_net_rss_enable(n->rss_data.hash_types,
> >                                  n->rss_data.indirections_len,
> >                                  temp.b);
> > -    return queues;
> > +    return qps;
> >  error:
> >      trace_virtio_net_rss_error(err_msg, err_value);
> >      virtio_net_disable_rss(n);
> > @@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> >                                  struct iovec *iov, unsigned int iov_cnt)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > -    uint16_t queues;
> > +    uint16_t qps;
> >
> >      virtio_net_disable_rss(n);
> >      if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
> > -        queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
> > -        return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
> > +        qps = virtio_net_handle_rss(n, iov, iov_cnt, false);
> > +        return qps ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
> >      }
> >      if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> > -        queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
> > +        qps = virtio_net_handle_rss(n, iov, iov_cnt, true);
> >      } else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> >          struct virtio_net_ctrl_mq mq;
> >          size_t s;
> > @@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> >          if (s != sizeof(mq)) {
> >              return VIRTIO_NET_ERR;
> >          }
> > -        queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
> > +        qps = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
> >
> >      } else {
> >          return VIRTIO_NET_ERR;
> >      }
> >
> > -    if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> > -        queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> > -        queues > n->max_queues ||
> > +    if (qps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> > +        qps > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> > +        qps > n->max_qps ||
> >          !n->multiqueue) {
> >          return VIRTIO_NET_ERR;
> >      }
> >
> > -    n->curr_queues = queues;
> > -    /* stop the backend before changing the number of queues to avoid handling a
> > +    n->curr_qps = qps;
> > +    /* stop the backend before changing the number of qps to avoid handling a
> >       * disabled queue */
> >      virtio_net_set_status(vdev, vdev->status);
> > -    virtio_net_set_queues(n);
> > +    virtio_net_set_qps(n);
> >
> >      return VIRTIO_NET_OK;
> >  }
> > @@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
> >          return false;
> >      }
> >
> > -    if (nc->queue_index >= n->curr_queues) {
> > +    if (nc->queue_index >= n->curr_qps) {
> >          return false;
> >      }
> >
> > @@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
> >      virtio_del_queue(vdev, index * 2 + 1);
> >  }
> >
> > -static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
> > +static void virtio_net_change_num_qps(VirtIONet *n, int new_max_qps)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >      int old_num_queues = virtio_get_num_queues(vdev);
> > -    int new_num_queues = new_max_queues * 2 + 1;
> > +    int new_num_queues = new_max_qps * 2 + 1;
> >      int i;
> >
> >      assert(old_num_queues >= 3);
> > @@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
> >
> >  static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
> >  {
> > -    int max = multiqueue ? n->max_queues : 1;
> > +    int max = multiqueue ? n->max_qps : 1;
> >
> >      n->multiqueue = multiqueue;
> > -    virtio_net_change_num_queues(n, max);
> > +    virtio_net_change_num_qps(n, max);
> >
> > -    virtio_net_set_queues(n);
> > +    virtio_net_set_qps(n);
> >  }
> >
> >  static int virtio_net_post_load_device(void *opaque, int version_id)
> > @@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> >       */
> >      n->saved_guest_offloads = n->curr_guest_offloads;
> >
> > -    virtio_net_set_queues(n);
> > +    virtio_net_set_qps(n);
> >
> >      /* Find the first multicast entry in the saved MAC filter */
> >      for (i = 0; i < n->mac_table.in_use; i++) {
> > @@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> >      /* nc.link_down can't be migrated, so infer link_down according
> >       * to link status bit in n->status */
> >      link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
> > -    for (i = 0; i < n->max_queues; i++) {
> > +    for (i = 0; i < n->max_qps; i++) {
> >          qemu_get_subqueue(n->nic, i)->link_down = link_down;
> >      }
> >
> > @@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> >     },
> >  };
> >
> > -static bool max_queues_gt_1(void *opaque, int version_id)
> > +static bool max_qps_gt_1(void *opaque, int version_id)
> >  {
> > -    return VIRTIO_NET(opaque)->max_queues > 1;
> > +    return VIRTIO_NET(opaque)->max_qps > 1;
> >  }
> >
> >  static bool has_ctrl_guest_offloads(void *opaque, int version_id)
> > @@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
> >  struct VirtIONetMigTmp {
> >      VirtIONet      *parent;
> >      VirtIONetQueue *vqs_1;
> > -    uint16_t        curr_queues_1;
> > +    uint16_t        curr_qps_1;
> >      uint8_t         has_ufo;
> >      uint32_t        has_vnet_hdr;
> >  };
> >
> >  /* The 2nd and subsequent tx_waiting flags are loaded later than
> > - * the 1st entry in the queues and only if there's more than one
> > + * the 1st entry in the qps and only if there's more than one
> >   * entry.  We use the tmp mechanism to calculate a temporary
> >   * pointer and count and also validate the count.
> >   */
> > @@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
> >      struct VirtIONetMigTmp *tmp = opaque;
> >
> >      tmp->vqs_1 = tmp->parent->vqs + 1;
> > -    tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
> > -    if (tmp->parent->curr_queues == 0) {
> > -        tmp->curr_queues_1 = 0;
> > +    tmp->curr_qps_1 = tmp->parent->curr_qps - 1;
> > +    if (tmp->parent->curr_qps == 0) {
> > +        tmp->curr_qps_1 = 0;
> >      }
> >
> >      return 0;
> > @@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
> >      /* Reuse the pointer setup from save */
> >      virtio_net_tx_waiting_pre_save(opaque);
> >
> > -    if (tmp->parent->curr_queues > tmp->parent->max_queues) {
> > -        error_report("virtio-net: curr_queues %x > max_queues %x",
> > -            tmp->parent->curr_queues, tmp->parent->max_queues);
> > +    if (tmp->parent->curr_qps > tmp->parent->max_qps) {
> > +        error_report("virtio-net: curr_qps %x > max_qps %x",
> > +            tmp->parent->curr_qps, tmp->parent->max_qps);
> >
> >          return -EINVAL;
> >      }
> > @@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
> >      .pre_save  = virtio_net_tx_waiting_pre_save,
> >      .fields    = (VMStateField[]) {
> >          VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
> > -                                     curr_queues_1,
> > +                                     curr_qps_1,
> >                                       vmstate_virtio_net_queue_tx_waiting,
> >                                       struct VirtIONetQueue),
> >          VMSTATE_END_OF_LIST()
> > @@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
> >          VMSTATE_UINT8(nobcast, VirtIONet),
> >          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
> >                           vmstate_virtio_net_has_ufo),
> > -        VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
> > +        VMSTATE_SINGLE_TEST(max_qps, VirtIONet, max_qps_gt_1, 0,
> >                              vmstate_info_uint16_equal, uint16_t),
> > -        VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
> > +        VMSTATE_UINT16_TEST(curr_qps, VirtIONet, max_qps_gt_1),
> >          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
> >                           vmstate_virtio_net_tx_waiting),
> >          VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
> > @@ -3368,16 +3368,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> >          return;
> >      }
> >
> > -    n->max_queues = MAX(n->nic_conf.peers.queues, 1);
> > -    if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
> > -        error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
> > +    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
> > +    if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
> > +        error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
> >                     "must be a positive integer less than %d.",
> > -                   n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
> > +                   n->max_qps, (VIRTIO_QUEUE_MAX - 1) / 2);
> >          virtio_cleanup(vdev);
> >          return;
> >      }
> > -    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
> > -    n->curr_queues = 1;
> > +    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_qps);
> > +    n->curr_qps = 1;
> >      n->tx_timeout = n->net_conf.txtimer;
> >
> >      if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
> > @@ -3391,7 +3391,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> >      n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
> >                                      n->net_conf.tx_queue_size);
> >
> > -    for (i = 0; i < n->max_queues; i++) {
> > +    for (i = 0; i < n->max_qps; i++) {
> >          virtio_net_add_queue(n, i);
> >      }
> >
> > @@ -3415,13 +3415,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> >                                object_get_typename(OBJECT(dev)), dev->id, n);
> >      }
> >
> > -    for (i = 0; i < n->max_queues; i++) {
> > +    for (i = 0; i < n->max_qps; i++) {
> >          n->nic->ncs[i].do_not_pad = true;
> >      }
> >
> >      peer_test_vnet_hdr(n);
> >      if (peer_has_vnet_hdr(n)) {
> > -        for (i = 0; i < n->max_queues; i++) {
> > +        for (i = 0; i < n->max_qps; i++) {
> >              qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
> >          }
> >          n->host_hdr_len = sizeof(struct virtio_net_hdr);
> > @@ -3463,7 +3463,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> >      VirtIONet *n = VIRTIO_NET(dev);
> > -    int i, max_queues;
> > +    int i, max_qps;
> >
> >      if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
> >          virtio_net_unload_ebpf(n);
> > @@ -3485,12 +3485,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
> >          remove_migration_state_change_notifier(&n->migration_state);
> >      }
> >
> > -    max_queues = n->multiqueue ? n->max_queues : 1;
> > -    for (i = 0; i < max_queues; i++) {
> > +    max_qps = n->multiqueue ? n->max_qps : 1;
> > +    for (i = 0; i < max_qps; i++) {
> >          virtio_net_del_queue(n, i);
> >      }
> >      /* delete also control vq */
> > -    virtio_del_queue(vdev, max_queues * 2);
> > +    virtio_del_queue(vdev, max_qps * 2);
> >      qemu_announce_timer_del(&n->announce_timer, false);
> >      g_free(n->vqs);
> >      qemu_del_nic(n->nic);
> > diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
> > index 824a69c23f..a9b6dc252e 100644
> > --- a/include/hw/virtio/virtio-net.h
> > +++ b/include/hw/virtio/virtio-net.h
> > @@ -194,8 +194,8 @@ struct VirtIONet {
> >      NICConf nic_conf;
> >      DeviceState *qdev;
> >      int multiqueue;
> > -    uint16_t max_queues;
> > -    uint16_t curr_queues;
> > +    uint16_t max_qps;
> > +    uint16_t curr_qps;
> >      size_t config_size;
> >      char *netclient_name;
> >      char *netclient_type;
> > --
> > 2.25.1
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH V2 17/21] vhost-net: control virtqueue support
  2021-09-04 20:40   ` Michael S. Tsirkin
@ 2021-09-06  3:43     ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-06  3:43 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Cindy Lu, qemu-devel, Gautam Dawar, eperezma, Eli Cohen, Zhu Lingshan

On Sun, Sep 5, 2021 at 4:40 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Fri, Sep 03, 2021 at 05:10:27PM +0800, Jason Wang wrote:
> > In the past we assumed there was no cvq, but this is not true when we
> > need control virtqueue support for vhost-user backends. So this patch
> > implements control virtqueue support for vhost-net. As with the data
> > path, the control virtqueue is also required to be coupled with the
> > NetClientState. vhost_net_start/stop() are tweaked to accept the
> > number of data path queue pairs plus the number of control virtqueues
> > when starting and stopping the vhost device.
> >
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
>
>
> Fails build:
>
> FAILED: libcommon.fa.p/hw_net_vhost_net-stub.c.o
> cc -Ilibcommon.fa.p -I. -Iqapi -Itrace -Iui -Iui/shader -I/usr/include/spice-1 -I/usr/include/spice-server -I/usr/include/cacard -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/nss3 -I/usr/include/nspr4 -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/pixman-1 -I/usr/include/p11-kit-1 -I/usr/include/SDL2 -I/usr/include/libpng16 -I/usr/include/virgl -I/usr/include/libusb-1.0 -I/usr/include/slirp -I/usr/include/gtk-3.0 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/freetype2 -I/usr/include/fribidi -I/usr/include/libxml2 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/gio-unix-2.0 -I/usr/include/atk-1.0 -I/usr/include/at-spi2-atk/2.0 -I/usr/include/dbus-1.0 -I/usr/lib64/dbus-1.0/include -I/usr/include/at-spi-2.0 -I/usr/include/vte-2.91 -I/usr/include/capstone -fdiagnostics-color=auto -pipe -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g -isystem /scm/qemu/linux-headers -isystem linux-headers -iquote . -iquote /scm/qemu -iquote /scm/qemu/include -iquote /scm/qemu/disas/libvixl -iquote /scm/qemu/tcg/i386 -pthread -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wimplicit-fallthrough=2 -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -fstack-protector-strong -fPIC -D_DEFAULT_SOURCE -D_XOPEN_SOURCE=600 -DNCURSES_WIDECHAR -DSTRUCT_IOVEC_DEFINED -D_REENTRANT -Wno-undef -MD -MQ libcommon.fa.p/hw_net_vhost_net-stub.c.o -MF libcommon.fa.p/hw_net_vhost_net-stub.c.o.d -o libcommon.fa.p/hw_net_vhost_net-stub.c.o -c ../hw/net/vhost_net-stub.c
> ../hw/net/vhost_net-stub.c:34:5: error: conflicting types for ‘vhost_net_start’
>    34 | int vhost_net_start(VirtIODevice *dev,
>       |     ^~~~~~~~~~~~~~~
> In file included from ../hw/net/vhost_net-stub.c:19:
> /scm/qemu/include/net/vhost_net.h:24:5: note: previous declaration of ‘vhost_net_start’ was here
>    24 | int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>       |     ^~~~~~~~~~~~~~~
> ../hw/net/vhost_net-stub.c:40:6: error: conflicting types for ‘vhost_net_stop’
>    40 | void vhost_net_stop(VirtIODevice *dev,
>       |      ^~~~~~~~~~~~~~
> In file included from ../hw/net/vhost_net-stub.c:19:
> /scm/qemu/include/net/vhost_net.h:26:6: note: previous declaration of ‘vhost_net_stop’ was here
>    26 | void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
>       |      ^~~~~~~~~~~~~~
> ninja: build stopped: subcommand failed.
> make[1]: *** [Makefile:156: run-ninja] Error 1

Will fix this.
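
Right, hw/net/vhost_net-stub.c was not converted to the new prototypes
in include/net/vhost_net.h. The fix should be as simple as giving the
stubs the same argument list, roughly (untested sketch, stub bodies
assumed to stay as they are today):

int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
                    int data_qps, int cvq)
{
    return -ENOSYS;    /* assuming the existing stub returns an error here */
}

void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
                    int data_qps, int cvq)
{
}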

Thanks

>
>
>
> > ---
> >  hw/net/vhost_net.c      | 43 ++++++++++++++++++++++++++++++-----------
> >  hw/net/virtio-net.c     |  4 ++--
> >  include/net/vhost_net.h |  6 ++++--
> >  3 files changed, 38 insertions(+), 15 deletions(-)
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index 386ec2eaa2..7e0b60b4d9 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -315,11 +315,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
> >  }
> >
> >  int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> > -                    int total_queues)
> > +                    int data_qps, int cvq)
> >  {
> >      BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> >      VirtioBusState *vbus = VIRTIO_BUS(qbus);
> >      VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> > +    int total_notifiers = data_qps * 2 + cvq;
> > +    VirtIONet *n = VIRTIO_NET(dev);
> > +    int nvhosts = data_qps + cvq;
> >      struct vhost_net *net;
> >      int r, e, i;
> >      NetClientState *peer;
> > @@ -329,9 +332,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> >          return -ENOSYS;
> >      }
> >
> > -    for (i = 0; i < total_queues; i++) {
> > +    for (i = 0; i < nvhosts; i++) {
> > +
> > +        if (i < data_qps) {
> > +            peer = qemu_get_peer(ncs, i);
> > +        } else { /* Control Virtqueue */
> > +            peer = qemu_get_peer(ncs, n->max_queues);
> > +        }
> >
> > -        peer = qemu_get_peer(ncs, i);
> >          net = get_vhost_net(peer);
> >          vhost_net_set_vq_index(net, i * 2);
> >
> > @@ -344,14 +352,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> >          }
> >       }
> >
> > -    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
> > +    r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
> >      if (r < 0) {
> >          error_report("Error binding guest notifier: %d", -r);
> >          goto err;
> >      }
> >
> > -    for (i = 0; i < total_queues; i++) {
> > -        peer = qemu_get_peer(ncs, i);
> > +    for (i = 0; i < nvhosts; i++) {
> > +        if (i < data_qps) {
> > +            peer = qemu_get_peer(ncs, i);
> > +        } else {
> > +            peer = qemu_get_peer(ncs, n->max_queues);
> > +        }
> >          r = vhost_net_start_one(get_vhost_net(peer), dev);
> >
> >          if (r < 0) {
> > @@ -375,7 +387,7 @@ err_start:
> >          peer = qemu_get_peer(ncs , i);
> >          vhost_net_stop_one(get_vhost_net(peer), dev);
> >      }
> > -    e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> > +    e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
> >      if (e < 0) {
> >          fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
> >          fflush(stderr);
> > @@ -385,18 +397,27 @@ err:
> >  }
> >
> >  void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> > -                    int total_queues)
> > +                    int data_qps, int cvq)
> >  {
> >      BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> >      VirtioBusState *vbus = VIRTIO_BUS(qbus);
> >      VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> > +    VirtIONet *n = VIRTIO_NET(dev);
> > +    NetClientState *peer;
> > +    int total_notifiers = data_qps * 2 + cvq;
> > +    int nvhosts = data_qps + cvq;
> >      int i, r;
> >
> > -    for (i = 0; i < total_queues; i++) {
> > -        vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
> > +    for (i = 0; i < nvhosts; i++) {
> > +        if (i < data_qps) {
> > +            peer = qemu_get_peer(ncs, i);
> > +        } else {
> > +            peer = qemu_get_peer(ncs, n->max_queues);
> > +        }
> > +        vhost_net_stop_one(get_vhost_net(peer), dev);
> >      }
> >
> > -    r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> > +    r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
> >      if (r < 0) {
> >          fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
> >          fflush(stderr);
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 16d20cdee5..8fccbaa44c 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> >          }
> >
> >          n->vhost_started = 1;
> > -        r = vhost_net_start(vdev, n->nic->ncs, queues);
> > +        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
> >          if (r < 0) {
> >              error_report("unable to start vhost net: %d: "
> >                           "falling back on userspace virtio", -r);
> >              n->vhost_started = 0;
> >          }
> >      } else {
> > -        vhost_net_stop(vdev, n->nic->ncs, queues);
> > +        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
> >          n->vhost_started = 0;
> >      }
> >  }
> > diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> > index fba40cf695..e656e38af9 100644
> > --- a/include/net/vhost_net.h
> > +++ b/include/net/vhost_net.h
> > @@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
> >  uint64_t vhost_net_get_max_queues(VHostNetState *net);
> >  struct vhost_net *vhost_net_init(VhostNetOptions *options);
> >
> > -int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> > -void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> > +int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> > +                    int data_qps, int cvq);
> > +void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> > +                    int data_qps, int cvq);
> >
> >  void vhost_net_cleanup(VHostNetState *net);
> >
> > --
> > 2.25.1
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH V2 18/21] virito-net: use "qps" instead of "queues" when possible
  2021-09-06  3:42     ` Jason Wang
@ 2021-09-06  5:49       ` Michael S. Tsirkin
  2021-09-06  6:54         ` Jason Wang
  0 siblings, 1 reply; 29+ messages in thread
From: Michael S. Tsirkin @ 2021-09-06  5:49 UTC (permalink / raw)
  To: Jason Wang
  Cc: Cindy Lu, qemu-devel, Gautam Dawar, eperezma, Eli Cohen, Zhu Lingshan

On Mon, Sep 06, 2021 at 11:42:41AM +0800, Jason Wang wrote:
> On Sun, Sep 5, 2021 at 4:42 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Fri, Sep 03, 2021 at 05:10:28PM +0800, Jason Wang wrote:
> > > Most of the time, "queues" really means queue pairs. So this patch
> > > switches to using "qps" to avoid confusion.
> > >
> > > Signed-off-by: Jason Wang <jasowang@redhat.com>
> >
> > This is far from standard terminology, except for people like me
> > whose minds are permanently warped by close contact with infiniband
> > hardware. Please eschew abbreviation, just say queue_pairs.
> 
> Ok, I will do that in the next version.
> 
> Thanks


Also, s/virito/virtio/
It happens to me too, often enough that I have an abbreviation set up
in my vimrc.

> >
> > > ---
> > >  hw/net/vhost_net.c             |   6 +-
> > >  hw/net/virtio-net.c            | 150 ++++++++++++++++-----------------
> > >  include/hw/virtio/virtio-net.h |   4 +-
> > >  3 files changed, 80 insertions(+), 80 deletions(-)
> > >
> > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > index 7e0b60b4d9..b40fdfa625 100644
> > > --- a/hw/net/vhost_net.c
> > > +++ b/hw/net/vhost_net.c
> > > @@ -337,7 +337,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> > >          if (i < data_qps) {
> > >              peer = qemu_get_peer(ncs, i);
> > >          } else { /* Control Virtqueue */
> > > -            peer = qemu_get_peer(ncs, n->max_queues);
> > > +            peer = qemu_get_peer(ncs, n->max_qps);
> > >          }
> > >
> > >          net = get_vhost_net(peer);
> > > @@ -362,7 +362,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> > >          if (i < data_qps) {
> > >              peer = qemu_get_peer(ncs, i);
> > >          } else {
> > > -            peer = qemu_get_peer(ncs, n->max_queues);
> > > +            peer = qemu_get_peer(ncs, n->max_qps);
> > >          }
> > >          r = vhost_net_start_one(get_vhost_net(peer), dev);
> > >
> > > @@ -412,7 +412,7 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> > >          if (i < data_qps) {
> > >              peer = qemu_get_peer(ncs, i);
> > >          } else {
> > > -            peer = qemu_get_peer(ncs, n->max_queues);
> > > +            peer = qemu_get_peer(ncs, n->max_qps);
> > >          }
> > >          vhost_net_stop_one(get_vhost_net(peer), dev);
> > >      }
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 8fccbaa44c..0a5d9862ec 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -54,7 +54,7 @@
> > >  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> > >  #define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
> > >
> > > -/* for now, only allow larger queues; with virtio-1, guest can downsize */
> > > +/* for now, only allow larger qps; with virtio-1, guest can downsize */
> > >  #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
> > >  #define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
> > >
> > > @@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
> > >      int ret = 0;
> > >      memset(&netcfg, 0 , sizeof(struct virtio_net_config));
> > >      virtio_stw_p(vdev, &netcfg.status, n->status);
> > > -    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
> > > +    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_qps);
> > >      virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
> > >      memcpy(netcfg.mac, n->mac, ETH_ALEN);
> > >      virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
> > > @@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > >  {
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >      NetClientState *nc = qemu_get_queue(n->nic);
> > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > +    int qps = n->multiqueue ? n->max_qps : 1;
> > >
> > >      if (!get_vhost_net(nc->peer)) {
> > >          return;
> > > @@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > >          /* Any packets outstanding? Purge them to avoid touching rings
> > >           * when vhost is running.
> > >           */
> > > -        for (i = 0;  i < queues; i++) {
> > > +        for (i = 0;  i < qps; i++) {
> > >              NetClientState *qnc = qemu_get_subqueue(n->nic, i);
> > >
> > >              /* Purge both directions: TX and RX. */
> > > @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > >          }
> > >
> > >          n->vhost_started = 1;
> > > -        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
> > > +        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
> > >          if (r < 0) {
> > >              error_report("unable to start vhost net: %d: "
> > >                           "falling back on userspace virtio", -r);
> > >              n->vhost_started = 0;
> > >          }
> > >      } else {
> > > -        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
> > > +        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
> > >          n->vhost_started = 0;
> > >      }
> > >  }
> > > @@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
> > >  }
> > >
> > >  static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
> > > -                                       int queues, bool enable)
> > > +                                       int qps, bool enable)
> > >  {
> > >      int i;
> > >
> > > -    for (i = 0; i < queues; i++) {
> > > +    for (i = 0; i < qps; i++) {
> > >          if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
> > >              enable) {
> > >              while (--i >= 0) {
> > > @@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
> > >  static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > >  {
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > +    int qps = n->multiqueue ? n->max_qps : 1;
> > >
> > >      if (virtio_net_started(n, status)) {
> > >          /* Before using the device, we tell the network backend about the
> > > @@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > >           * virtio-net code.
> > >           */
> > >          n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
> > > -                                                            queues, true);
> > > +                                                            qps, true);
> > >      } else if (virtio_net_started(n, vdev->status)) {
> > >          /* After using the device, we need to reset the network backend to
> > >           * the default (guest native endianness), otherwise the guest may
> > >           * lose network connectivity if it is rebooted into a different
> > >           * endianness.
> > >           */
> > > -        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
> > > +        virtio_net_set_vnet_endian(vdev, n->nic->ncs, qps, false);
> > >      }
> > >  }
> > >
> > > @@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > >      virtio_net_vnet_endian_status(n, status);
> > >      virtio_net_vhost_status(n, status);
> > >
> > > -    for (i = 0; i < n->max_queues; i++) {
> > > +    for (i = 0; i < n->max_qps; i++) {
> > >          NetClientState *ncs = qemu_get_subqueue(n->nic, i);
> > >          bool queue_started;
> > >          q = &n->vqs[i];
> > >
> > > -        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
> > > +        if ((!n->multiqueue && i != 0) || i >= n->curr_qps) {
> > >              queue_status = 0;
> > >          } else {
> > >              queue_status = status;
> > > @@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
> > >      n->nouni = 0;
> > >      n->nobcast = 0;
> > >      /* multiqueue is disabled by default */
> > > -    n->curr_queues = 1;
> > > +    n->curr_qps = 1;
> > >      timer_del(n->announce_timer.tm);
> > >      n->announce_timer.round = 0;
> > >      n->status &= ~VIRTIO_NET_S_ANNOUNCE;
> > > @@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
> > >      memset(n->vlans, 0, MAX_VLAN >> 3);
> > >
> > >      /* Flush any async TX */
> > > -    for (i = 0;  i < n->max_queues; i++) {
> > > +    for (i = 0;  i < n->max_qps; i++) {
> > >          NetClientState *nc = qemu_get_subqueue(n->nic, i);
> > >
> > >          if (nc->peer) {
> > > @@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
> > >              sizeof(struct virtio_net_hdr);
> > >      }
> > >
> > > -    for (i = 0; i < n->max_queues; i++) {
> > > +    for (i = 0; i < n->max_qps; i++) {
> > >          nc = qemu_get_subqueue(n->nic, i);
> > >
> > >          if (peer_has_vnet_hdr(n) &&
> > > @@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
> > >          return 0;
> > >      }
> > >
> > > -    if (n->max_queues == 1) {
> > > +    if (n->max_qps == 1) {
> > >          return 0;
> > >      }
> > >
> > > @@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
> > >      return tap_disable(nc->peer);
> > >  }
> > >
> > > -static void virtio_net_set_queues(VirtIONet *n)
> > > +static void virtio_net_set_qps(VirtIONet *n)
> > >  {
> > >      int i;
> > >      int r;
> > > @@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
> > >          return;
> > >      }
> > >
> > > -    for (i = 0; i < n->max_queues; i++) {
> > > -        if (i < n->curr_queues) {
> > > +    for (i = 0; i < n->max_qps; i++) {
> > > +        if (i < n->curr_qps) {
> > >              r = peer_attach(n, i);
> > >              assert(!r);
> > >          } else {
> > > @@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
> > >          virtio_net_apply_guest_offloads(n);
> > >      }
> > >
> > > -    for (i = 0;  i < n->max_queues; i++) {
> > > +    for (i = 0;  i < n->max_qps; i++) {
> > >          NetClientState *nc = qemu_get_subqueue(n->nic, i);
> > >
> > >          if (!get_vhost_net(nc->peer)) {
> > > @@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >      struct virtio_net_rss_config cfg;
> > >      size_t s, offset = 0, size_get;
> > > -    uint16_t queues, i;
> > > +    uint16_t qps, i;
> > >      struct {
> > >          uint16_t us;
> > >          uint8_t b;
> > > @@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > >      }
> > >      n->rss_data.default_queue = do_rss ?
> > >          virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
> > > -    if (n->rss_data.default_queue >= n->max_queues) {
> > > +    if (n->rss_data.default_queue >= n->max_qps) {
> > >          err_msg = "Invalid default queue";
> > >          err_value = n->rss_data.default_queue;
> > >          goto error;
> > > @@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > >      size_get = sizeof(temp);
> > >      s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
> > >      if (s != size_get) {
> > > -        err_msg = "Can't get queues";
> > > +        err_msg = "Can't get qps";
> > >          err_value = (uint32_t)s;
> > >          goto error;
> > >      }
> > > -    queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
> > > -    if (queues == 0 || queues > n->max_queues) {
> > > -        err_msg = "Invalid number of queues";
> > > -        err_value = queues;
> > > +    qps = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_qps;
> > > +    if (qps == 0 || qps > n->max_qps) {
> > > +        err_msg = "Invalid number of qps";
> > > +        err_value = qps;
> > >          goto error;
> > >      }
> > >      if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
> > > @@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > >      }
> > >      if (!temp.b && !n->rss_data.hash_types) {
> > >          virtio_net_disable_rss(n);
> > > -        return queues;
> > > +        return qps;
> > >      }
> > >      offset += size_get;
> > >      size_get = temp.b;
> > > @@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > >      trace_virtio_net_rss_enable(n->rss_data.hash_types,
> > >                                  n->rss_data.indirections_len,
> > >                                  temp.b);
> > > -    return queues;
> > > +    return qps;
> > >  error:
> > >      trace_virtio_net_rss_error(err_msg, err_value);
> > >      virtio_net_disable_rss(n);
> > > @@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > >                                  struct iovec *iov, unsigned int iov_cnt)
> > >  {
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > -    uint16_t queues;
> > > +    uint16_t qps;
> > >
> > >      virtio_net_disable_rss(n);
> > >      if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
> > > -        queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
> > > -        return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
> > > +        qps = virtio_net_handle_rss(n, iov, iov_cnt, false);
> > > +        return qps ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
> > >      }
> > >      if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> > > -        queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
> > > +        qps = virtio_net_handle_rss(n, iov, iov_cnt, true);
> > >      } else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> > >          struct virtio_net_ctrl_mq mq;
> > >          size_t s;
> > > @@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > >          if (s != sizeof(mq)) {
> > >              return VIRTIO_NET_ERR;
> > >          }
> > > -        queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
> > > +        qps = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
> > >
> > >      } else {
> > >          return VIRTIO_NET_ERR;
> > >      }
> > >
> > > -    if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> > > -        queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> > > -        queues > n->max_queues ||
> > > +    if (qps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> > > +        qps > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> > > +        qps > n->max_qps ||
> > >          !n->multiqueue) {
> > >          return VIRTIO_NET_ERR;
> > >      }
> > >
> > > -    n->curr_queues = queues;
> > > -    /* stop the backend before changing the number of queues to avoid handling a
> > > +    n->curr_qps = qps;
> > > +    /* stop the backend before changing the number of qps to avoid handling a
> > >       * disabled queue */
> > >      virtio_net_set_status(vdev, vdev->status);
> > > -    virtio_net_set_queues(n);
> > > +    virtio_net_set_qps(n);
> > >
> > >      return VIRTIO_NET_OK;
> > >  }
> > > @@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
> > >          return false;
> > >      }
> > >
> > > -    if (nc->queue_index >= n->curr_queues) {
> > > +    if (nc->queue_index >= n->curr_qps) {
> > >          return false;
> > >      }
> > >
> > > @@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
> > >      virtio_del_queue(vdev, index * 2 + 1);
> > >  }
> > >
> > > -static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
> > > +static void virtio_net_change_num_qps(VirtIONet *n, int new_max_qps)
> > >  {
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >      int old_num_queues = virtio_get_num_queues(vdev);
> > > -    int new_num_queues = new_max_queues * 2 + 1;
> > > +    int new_num_queues = new_max_qps * 2 + 1;
> > >      int i;
> > >
> > >      assert(old_num_queues >= 3);
> > > @@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
> > >
> > >  static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
> > >  {
> > > -    int max = multiqueue ? n->max_queues : 1;
> > > +    int max = multiqueue ? n->max_qps : 1;
> > >
> > >      n->multiqueue = multiqueue;
> > > -    virtio_net_change_num_queues(n, max);
> > > +    virtio_net_change_num_qps(n, max);
> > >
> > > -    virtio_net_set_queues(n);
> > > +    virtio_net_set_qps(n);
> > >  }
> > >
> > >  static int virtio_net_post_load_device(void *opaque, int version_id)
> > > @@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > >       */
> > >      n->saved_guest_offloads = n->curr_guest_offloads;
> > >
> > > -    virtio_net_set_queues(n);
> > > +    virtio_net_set_qps(n);
> > >
> > >      /* Find the first multicast entry in the saved MAC filter */
> > >      for (i = 0; i < n->mac_table.in_use; i++) {
> > > @@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > >      /* nc.link_down can't be migrated, so infer link_down according
> > >       * to link status bit in n->status */
> > >      link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
> > > -    for (i = 0; i < n->max_queues; i++) {
> > > +    for (i = 0; i < n->max_qps; i++) {
> > >          qemu_get_subqueue(n->nic, i)->link_down = link_down;
> > >      }
> > >
> > > @@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > >     },
> > >  };
> > >
> > > -static bool max_queues_gt_1(void *opaque, int version_id)
> > > +static bool max_qps_gt_1(void *opaque, int version_id)
> > >  {
> > > -    return VIRTIO_NET(opaque)->max_queues > 1;
> > > +    return VIRTIO_NET(opaque)->max_qps > 1;
> > >  }
> > >
> > >  static bool has_ctrl_guest_offloads(void *opaque, int version_id)
> > > @@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
> > >  struct VirtIONetMigTmp {
> > >      VirtIONet      *parent;
> > >      VirtIONetQueue *vqs_1;
> > > -    uint16_t        curr_queues_1;
> > > +    uint16_t        curr_qps_1;
> > >      uint8_t         has_ufo;
> > >      uint32_t        has_vnet_hdr;
> > >  };
> > >
> > >  /* The 2nd and subsequent tx_waiting flags are loaded later than
> > > - * the 1st entry in the queues and only if there's more than one
> > > + * the 1st entry in the qps and only if there's more than one
> > >   * entry.  We use the tmp mechanism to calculate a temporary
> > >   * pointer and count and also validate the count.
> > >   */
> > > @@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
> > >      struct VirtIONetMigTmp *tmp = opaque;
> > >
> > >      tmp->vqs_1 = tmp->parent->vqs + 1;
> > > -    tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
> > > -    if (tmp->parent->curr_queues == 0) {
> > > -        tmp->curr_queues_1 = 0;
> > > +    tmp->curr_qps_1 = tmp->parent->curr_qps - 1;
> > > +    if (tmp->parent->curr_qps == 0) {
> > > +        tmp->curr_qps_1 = 0;
> > >      }
> > >
> > >      return 0;
> > > @@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
> > >      /* Reuse the pointer setup from save */
> > >      virtio_net_tx_waiting_pre_save(opaque);
> > >
> > > -    if (tmp->parent->curr_queues > tmp->parent->max_queues) {
> > > -        error_report("virtio-net: curr_queues %x > max_queues %x",
> > > -            tmp->parent->curr_queues, tmp->parent->max_queues);
> > > +    if (tmp->parent->curr_qps > tmp->parent->max_qps) {
> > > +        error_report("virtio-net: curr_qps %x > max_qps %x",
> > > +            tmp->parent->curr_qps, tmp->parent->max_qps);
> > >
> > >          return -EINVAL;
> > >      }
> > > @@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
> > >      .pre_save  = virtio_net_tx_waiting_pre_save,
> > >      .fields    = (VMStateField[]) {
> > >          VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
> > > -                                     curr_queues_1,
> > > +                                     curr_qps_1,
> > >                                       vmstate_virtio_net_queue_tx_waiting,
> > >                                       struct VirtIONetQueue),
> > >          VMSTATE_END_OF_LIST()
> > > @@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
> > >          VMSTATE_UINT8(nobcast, VirtIONet),
> > >          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
> > >                           vmstate_virtio_net_has_ufo),
> > > -        VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
> > > +        VMSTATE_SINGLE_TEST(max_qps, VirtIONet, max_qps_gt_1, 0,
> > >                              vmstate_info_uint16_equal, uint16_t),
> > > -        VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
> > > +        VMSTATE_UINT16_TEST(curr_qps, VirtIONet, max_qps_gt_1),
> > >          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
> > >                           vmstate_virtio_net_tx_waiting),
> > >          VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
> > > @@ -3368,16 +3368,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > >          return;
> > >      }
> > >
> > > -    n->max_queues = MAX(n->nic_conf.peers.queues, 1);
> > > -    if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
> > > -        error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
> > > +    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
> > > +    if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
> > > +        error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
> > >                     "must be a positive integer less than %d.",
> > > -                   n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
> > > +                   n->max_qps, (VIRTIO_QUEUE_MAX - 1) / 2);
> > >          virtio_cleanup(vdev);
> > >          return;
> > >      }
> > > -    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
> > > -    n->curr_queues = 1;
> > > +    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_qps);
> > > +    n->curr_qps = 1;
> > >      n->tx_timeout = n->net_conf.txtimer;
> > >
> > >      if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
> > > @@ -3391,7 +3391,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > >      n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
> > >                                      n->net_conf.tx_queue_size);
> > >
> > > -    for (i = 0; i < n->max_queues; i++) {
> > > +    for (i = 0; i < n->max_qps; i++) {
> > >          virtio_net_add_queue(n, i);
> > >      }
> > >
> > > @@ -3415,13 +3415,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > >                                object_get_typename(OBJECT(dev)), dev->id, n);
> > >      }
> > >
> > > -    for (i = 0; i < n->max_queues; i++) {
> > > +    for (i = 0; i < n->max_qps; i++) {
> > >          n->nic->ncs[i].do_not_pad = true;
> > >      }
> > >
> > >      peer_test_vnet_hdr(n);
> > >      if (peer_has_vnet_hdr(n)) {
> > > -        for (i = 0; i < n->max_queues; i++) {
> > > +        for (i = 0; i < n->max_qps; i++) {
> > >              qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
> > >          }
> > >          n->host_hdr_len = sizeof(struct virtio_net_hdr);
> > > @@ -3463,7 +3463,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
> > >  {
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > >      VirtIONet *n = VIRTIO_NET(dev);
> > > -    int i, max_queues;
> > > +    int i, max_qps;
> > >
> > >      if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
> > >          virtio_net_unload_ebpf(n);
> > > @@ -3485,12 +3485,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
> > >          remove_migration_state_change_notifier(&n->migration_state);
> > >      }
> > >
> > > -    max_queues = n->multiqueue ? n->max_queues : 1;
> > > -    for (i = 0; i < max_queues; i++) {
> > > +    max_qps = n->multiqueue ? n->max_qps : 1;
> > > +    for (i = 0; i < max_qps; i++) {
> > >          virtio_net_del_queue(n, i);
> > >      }
> > >      /* delete also control vq */
> > > -    virtio_del_queue(vdev, max_queues * 2);
> > > +    virtio_del_queue(vdev, max_qps * 2);
> > >      qemu_announce_timer_del(&n->announce_timer, false);
> > >      g_free(n->vqs);
> > >      qemu_del_nic(n->nic);
> > > diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
> > > index 824a69c23f..a9b6dc252e 100644
> > > --- a/include/hw/virtio/virtio-net.h
> > > +++ b/include/hw/virtio/virtio-net.h
> > > @@ -194,8 +194,8 @@ struct VirtIONet {
> > >      NICConf nic_conf;
> > >      DeviceState *qdev;
> > >      int multiqueue;
> > > -    uint16_t max_queues;
> > > -    uint16_t curr_queues;
> > > +    uint16_t max_qps;
> > > +    uint16_t curr_qps;
> > >      size_t config_size;
> > >      char *netclient_name;
> > >      char *netclient_type;
> > > --
> > > 2.25.1
> >



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH V2 18/21] virito-net: use "qps" instead of "queues" when possible
  2021-09-06  5:49       ` Michael S. Tsirkin
@ 2021-09-06  6:54         ` Jason Wang
  0 siblings, 0 replies; 29+ messages in thread
From: Jason Wang @ 2021-09-06  6:54 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Cindy Lu, qemu-devel, Gautam Dawar, eperezma, Eli Cohen, Zhu Lingshan

On Mon, Sep 6, 2021 at 1:49 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Mon, Sep 06, 2021 at 11:42:41AM +0800, Jason Wang wrote:
> > On Sun, Sep 5, 2021 at 4:42 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Fri, Sep 03, 2021 at 05:10:28PM +0800, Jason Wang wrote:
> > > > Most of the time, "queues" really means queue pairs. So this patch
> > > > switches to using "qps" to avoid confusion.
> > > >
> > > > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > >
> > > This is far from standard terminology, except for people
> > > like me, whose minds have been permanently warped by close contact with
> > > InfiniBand hardware. Please eschew the abbreviation; just say queue_pairs.
> >
> > Ok, I will do that in the next version.
> >
> > Thanks
>
>
> Also, s/virito/virtio/
> This happens to me too, often enough that I have an abbreviation set up in
> my vimrc.

Let me try to set it up.

Thanks
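
For context, "queue pairs" here matches the usual virtio-net layout: each data
queue pair owns an RX virtqueue and a TX virtqueue, and the control virtqueue
sits after all of the data virtqueues. A minimal sketch of that index
arithmetic (the helper names are illustrative only and are not part of the
patch):

    /* Illustrative only: per-queue-pair virtqueue indices in virtio-net. */
    static inline int rx_vq_index(int qp)        { return 2 * qp; }
    static inline int tx_vq_index(int qp)        { return 2 * qp + 1; }
    /* With max_qps data queue pairs, the control vq takes the last index... */
    static inline int ctrl_vq_index(int max_qps) { return 2 * max_qps; }
    /* ...hence the realize-time check that 2 * max_qps + 1 stays within
     * VIRTIO_QUEUE_MAX. */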

>
> > >
> > > > ---
> > > >  hw/net/vhost_net.c             |   6 +-
> > > >  hw/net/virtio-net.c            | 150 ++++++++++++++++-----------------
> > > >  include/hw/virtio/virtio-net.h |   4 +-
> > > >  3 files changed, 80 insertions(+), 80 deletions(-)
> > > >
> > > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > > index 7e0b60b4d9..b40fdfa625 100644
> > > > --- a/hw/net/vhost_net.c
> > > > +++ b/hw/net/vhost_net.c
> > > > @@ -337,7 +337,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> > > >          if (i < data_qps) {
> > > >              peer = qemu_get_peer(ncs, i);
> > > >          } else { /* Control Virtqueue */
> > > > -            peer = qemu_get_peer(ncs, n->max_queues);
> > > > +            peer = qemu_get_peer(ncs, n->max_qps);
> > > >          }
> > > >
> > > >          net = get_vhost_net(peer);
> > > > @@ -362,7 +362,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> > > >          if (i < data_qps) {
> > > >              peer = qemu_get_peer(ncs, i);
> > > >          } else {
> > > > -            peer = qemu_get_peer(ncs, n->max_queues);
> > > > +            peer = qemu_get_peer(ncs, n->max_qps);
> > > >          }
> > > >          r = vhost_net_start_one(get_vhost_net(peer), dev);
> > > >
> > > > @@ -412,7 +412,7 @@ void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> > > >          if (i < data_qps) {
> > > >              peer = qemu_get_peer(ncs, i);
> > > >          } else {
> > > > -            peer = qemu_get_peer(ncs, n->max_queues);
> > > > +            peer = qemu_get_peer(ncs, n->max_qps);
> > > >          }
> > > >          vhost_net_stop_one(get_vhost_net(peer), dev);
> > > >      }
> > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > index 8fccbaa44c..0a5d9862ec 100644
> > > > --- a/hw/net/virtio-net.c
> > > > +++ b/hw/net/virtio-net.c
> > > > @@ -54,7 +54,7 @@
> > > >  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> > > >  #define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
> > > >
> > > > -/* for now, only allow larger queues; with virtio-1, guest can downsize */
> > > > +/* for now, only allow larger qps; with virtio-1, guest can downsize */
> > > >  #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
> > > >  #define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
> > > >
> > > > @@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
> > > >      int ret = 0;
> > > >      memset(&netcfg, 0 , sizeof(struct virtio_net_config));
> > > >      virtio_stw_p(vdev, &netcfg.status, n->status);
> > > > -    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
> > > > +    virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_qps);
> > > >      virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
> > > >      memcpy(netcfg.mac, n->mac, ETH_ALEN);
> > > >      virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
> > > > @@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > >  {
> > > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > >      NetClientState *nc = qemu_get_queue(n->nic);
> > > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > > +    int qps = n->multiqueue ? n->max_qps : 1;
> > > >
> > > >      if (!get_vhost_net(nc->peer)) {
> > > >          return;
> > > > @@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > >          /* Any packets outstanding? Purge them to avoid touching rings
> > > >           * when vhost is running.
> > > >           */
> > > > -        for (i = 0;  i < queues; i++) {
> > > > +        for (i = 0;  i < qps; i++) {
> > > >              NetClientState *qnc = qemu_get_subqueue(n->nic, i);
> > > >
> > > >              /* Purge both directions: TX and RX. */
> > > > @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > >          }
> > > >
> > > >          n->vhost_started = 1;
> > > > -        r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
> > > > +        r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
> > > >          if (r < 0) {
> > > >              error_report("unable to start vhost net: %d: "
> > > >                           "falling back on userspace virtio", -r);
> > > >              n->vhost_started = 0;
> > > >          }
> > > >      } else {
> > > > -        vhost_net_stop(vdev, n->nic->ncs, queues, 0);
> > > > +        vhost_net_stop(vdev, n->nic->ncs, qps, 0);
> > > >          n->vhost_started = 0;
> > > >      }
> > > >  }
> > > > @@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
> > > >  }
> > > >
> > > >  static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
> > > > -                                       int queues, bool enable)
> > > > +                                       int qps, bool enable)
> > > >  {
> > > >      int i;
> > > >
> > > > -    for (i = 0; i < queues; i++) {
> > > > +    for (i = 0; i < qps; i++) {
> > > >          if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
> > > >              enable) {
> > > >              while (--i >= 0) {
> > > > @@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
> > > >  static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > > >  {
> > > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > > +    int qps = n->multiqueue ? n->max_qps : 1;
> > > >
> > > >      if (virtio_net_started(n, status)) {
> > > >          /* Before using the device, we tell the network backend about the
> > > > @@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > > >           * virtio-net code.
> > > >           */
> > > >          n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
> > > > -                                                            queues, true);
> > > > +                                                            qps, true);
> > > >      } else if (virtio_net_started(n, vdev->status)) {
> > > >          /* After using the device, we need to reset the network backend to
> > > >           * the default (guest native endianness), otherwise the guest may
> > > >           * lose network connectivity if it is rebooted into a different
> > > >           * endianness.
> > > >           */
> > > > -        virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
> > > > +        virtio_net_set_vnet_endian(vdev, n->nic->ncs, qps, false);
> > > >      }
> > > >  }
> > > >
> > > > @@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > > >      virtio_net_vnet_endian_status(n, status);
> > > >      virtio_net_vhost_status(n, status);
> > > >
> > > > -    for (i = 0; i < n->max_queues; i++) {
> > > > +    for (i = 0; i < n->max_qps; i++) {
> > > >          NetClientState *ncs = qemu_get_subqueue(n->nic, i);
> > > >          bool queue_started;
> > > >          q = &n->vqs[i];
> > > >
> > > > -        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
> > > > +        if ((!n->multiqueue && i != 0) || i >= n->curr_qps) {
> > > >              queue_status = 0;
> > > >          } else {
> > > >              queue_status = status;
> > > > @@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
> > > >      n->nouni = 0;
> > > >      n->nobcast = 0;
> > > >      /* multiqueue is disabled by default */
> > > > -    n->curr_queues = 1;
> > > > +    n->curr_qps = 1;
> > > >      timer_del(n->announce_timer.tm);
> > > >      n->announce_timer.round = 0;
> > > >      n->status &= ~VIRTIO_NET_S_ANNOUNCE;
> > > > @@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
> > > >      memset(n->vlans, 0, MAX_VLAN >> 3);
> > > >
> > > >      /* Flush any async TX */
> > > > -    for (i = 0;  i < n->max_queues; i++) {
> > > > +    for (i = 0;  i < n->max_qps; i++) {
> > > >          NetClientState *nc = qemu_get_subqueue(n->nic, i);
> > > >
> > > >          if (nc->peer) {
> > > > @@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
> > > >              sizeof(struct virtio_net_hdr);
> > > >      }
> > > >
> > > > -    for (i = 0; i < n->max_queues; i++) {
> > > > +    for (i = 0; i < n->max_qps; i++) {
> > > >          nc = qemu_get_subqueue(n->nic, i);
> > > >
> > > >          if (peer_has_vnet_hdr(n) &&
> > > > @@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
> > > >          return 0;
> > > >      }
> > > >
> > > > -    if (n->max_queues == 1) {
> > > > +    if (n->max_qps == 1) {
> > > >          return 0;
> > > >      }
> > > >
> > > > @@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
> > > >      return tap_disable(nc->peer);
> > > >  }
> > > >
> > > > -static void virtio_net_set_queues(VirtIONet *n)
> > > > +static void virtio_net_set_qps(VirtIONet *n)
> > > >  {
> > > >      int i;
> > > >      int r;
> > > > @@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
> > > >          return;
> > > >      }
> > > >
> > > > -    for (i = 0; i < n->max_queues; i++) {
> > > > -        if (i < n->curr_queues) {
> > > > +    for (i = 0; i < n->max_qps; i++) {
> > > > +        if (i < n->curr_qps) {
> > > >              r = peer_attach(n, i);
> > > >              assert(!r);
> > > >          } else {
> > > > @@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
> > > >          virtio_net_apply_guest_offloads(n);
> > > >      }
> > > >
> > > > -    for (i = 0;  i < n->max_queues; i++) {
> > > > +    for (i = 0;  i < n->max_qps; i++) {
> > > >          NetClientState *nc = qemu_get_subqueue(n->nic, i);
> > > >
> > > >          if (!get_vhost_net(nc->peer)) {
> > > > @@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > >      struct virtio_net_rss_config cfg;
> > > >      size_t s, offset = 0, size_get;
> > > > -    uint16_t queues, i;
> > > > +    uint16_t qps, i;
> > > >      struct {
> > > >          uint16_t us;
> > > >          uint8_t b;
> > > > @@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > > >      }
> > > >      n->rss_data.default_queue = do_rss ?
> > > >          virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
> > > > -    if (n->rss_data.default_queue >= n->max_queues) {
> > > > +    if (n->rss_data.default_queue >= n->max_qps) {
> > > >          err_msg = "Invalid default queue";
> > > >          err_value = n->rss_data.default_queue;
> > > >          goto error;
> > > > @@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > > >      size_get = sizeof(temp);
> > > >      s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
> > > >      if (s != size_get) {
> > > > -        err_msg = "Can't get queues";
> > > > +        err_msg = "Can't get qps";
> > > >          err_value = (uint32_t)s;
> > > >          goto error;
> > > >      }
> > > > -    queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
> > > > -    if (queues == 0 || queues > n->max_queues) {
> > > > -        err_msg = "Invalid number of queues";
> > > > -        err_value = queues;
> > > > +    qps = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_qps;
> > > > +    if (qps == 0 || qps > n->max_qps) {
> > > > +        err_msg = "Invalid number of qps";
> > > > +        err_value = qps;
> > > >          goto error;
> > > >      }
> > > >      if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
> > > > @@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > > >      }
> > > >      if (!temp.b && !n->rss_data.hash_types) {
> > > >          virtio_net_disable_rss(n);
> > > > -        return queues;
> > > > +        return qps;
> > > >      }
> > > >      offset += size_get;
> > > >      size_get = temp.b;
> > > > @@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
> > > >      trace_virtio_net_rss_enable(n->rss_data.hash_types,
> > > >                                  n->rss_data.indirections_len,
> > > >                                  temp.b);
> > > > -    return queues;
> > > > +    return qps;
> > > >  error:
> > > >      trace_virtio_net_rss_error(err_msg, err_value);
> > > >      virtio_net_disable_rss(n);
> > > > @@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > > >                                  struct iovec *iov, unsigned int iov_cnt)
> > > >  {
> > > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > > -    uint16_t queues;
> > > > +    uint16_t qps;
> > > >
> > > >      virtio_net_disable_rss(n);
> > > >      if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
> > > > -        queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
> > > > -        return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
> > > > +        qps = virtio_net_handle_rss(n, iov, iov_cnt, false);
> > > > +        return qps ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
> > > >      }
> > > >      if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
> > > > -        queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
> > > > +        qps = virtio_net_handle_rss(n, iov, iov_cnt, true);
> > > >      } else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
> > > >          struct virtio_net_ctrl_mq mq;
> > > >          size_t s;
> > > > @@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > > >          if (s != sizeof(mq)) {
> > > >              return VIRTIO_NET_ERR;
> > > >          }
> > > > -        queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
> > > > +        qps = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
> > > >
> > > >      } else {
> > > >          return VIRTIO_NET_ERR;
> > > >      }
> > > >
> > > > -    if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> > > > -        queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> > > > -        queues > n->max_queues ||
> > > > +    if (qps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
> > > > +        qps > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
> > > > +        qps > n->max_qps ||
> > > >          !n->multiqueue) {
> > > >          return VIRTIO_NET_ERR;
> > > >      }
> > > >
> > > > -    n->curr_queues = queues;
> > > > -    /* stop the backend before changing the number of queues to avoid handling a
> > > > +    n->curr_qps = qps;
> > > > +    /* stop the backend before changing the number of qps to avoid handling a
> > > >       * disabled queue */
> > > >      virtio_net_set_status(vdev, vdev->status);
> > > > -    virtio_net_set_queues(n);
> > > > +    virtio_net_set_qps(n);
> > > >
> > > >      return VIRTIO_NET_OK;
> > > >  }
> > > > @@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
> > > >          return false;
> > > >      }
> > > >
> > > > -    if (nc->queue_index >= n->curr_queues) {
> > > > +    if (nc->queue_index >= n->curr_qps) {
> > > >          return false;
> > > >      }
> > > >
> > > > @@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
> > > >      virtio_del_queue(vdev, index * 2 + 1);
> > > >  }
> > > >
> > > > -static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
> > > > +static void virtio_net_change_num_qps(VirtIONet *n, int new_max_qps)
> > > >  {
> > > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > >      int old_num_queues = virtio_get_num_queues(vdev);
> > > > -    int new_num_queues = new_max_queues * 2 + 1;
> > > > +    int new_num_queues = new_max_qps * 2 + 1;
> > > >      int i;
> > > >
> > > >      assert(old_num_queues >= 3);
> > > > @@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
> > > >
> > > >  static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
> > > >  {
> > > > -    int max = multiqueue ? n->max_queues : 1;
> > > > +    int max = multiqueue ? n->max_qps : 1;
> > > >
> > > >      n->multiqueue = multiqueue;
> > > > -    virtio_net_change_num_queues(n, max);
> > > > +    virtio_net_change_num_qps(n, max);
> > > >
> > > > -    virtio_net_set_queues(n);
> > > > +    virtio_net_set_qps(n);
> > > >  }
> > > >
> > > >  static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > @@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > >       */
> > > >      n->saved_guest_offloads = n->curr_guest_offloads;
> > > >
> > > > -    virtio_net_set_queues(n);
> > > > +    virtio_net_set_qps(n);
> > > >
> > > >      /* Find the first multicast entry in the saved MAC filter */
> > > >      for (i = 0; i < n->mac_table.in_use; i++) {
> > > > @@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > >      /* nc.link_down can't be migrated, so infer link_down according
> > > >       * to link status bit in n->status */
> > > >      link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
> > > > -    for (i = 0; i < n->max_queues; i++) {
> > > > +    for (i = 0; i < n->max_qps; i++) {
> > > >          qemu_get_subqueue(n->nic, i)->link_down = link_down;
> > > >      }
> > > >
> > > > @@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > > >     },
> > > >  };
> > > >
> > > > -static bool max_queues_gt_1(void *opaque, int version_id)
> > > > +static bool max_qps_gt_1(void *opaque, int version_id)
> > > >  {
> > > > -    return VIRTIO_NET(opaque)->max_queues > 1;
> > > > +    return VIRTIO_NET(opaque)->max_qps > 1;
> > > >  }
> > > >
> > > >  static bool has_ctrl_guest_offloads(void *opaque, int version_id)
> > > > @@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
> > > >  struct VirtIONetMigTmp {
> > > >      VirtIONet      *parent;
> > > >      VirtIONetQueue *vqs_1;
> > > > -    uint16_t        curr_queues_1;
> > > > +    uint16_t        curr_qps_1;
> > > >      uint8_t         has_ufo;
> > > >      uint32_t        has_vnet_hdr;
> > > >  };
> > > >
> > > >  /* The 2nd and subsequent tx_waiting flags are loaded later than
> > > > - * the 1st entry in the queues and only if there's more than one
> > > > + * the 1st entry in the qps and only if there's more than one
> > > >   * entry.  We use the tmp mechanism to calculate a temporary
> > > >   * pointer and count and also validate the count.
> > > >   */
> > > > @@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
> > > >      struct VirtIONetMigTmp *tmp = opaque;
> > > >
> > > >      tmp->vqs_1 = tmp->parent->vqs + 1;
> > > > -    tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
> > > > -    if (tmp->parent->curr_queues == 0) {
> > > > -        tmp->curr_queues_1 = 0;
> > > > +    tmp->curr_qps_1 = tmp->parent->curr_qps - 1;
> > > > +    if (tmp->parent->curr_qps == 0) {
> > > > +        tmp->curr_qps_1 = 0;
> > > >      }
> > > >
> > > >      return 0;
> > > > @@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
> > > >      /* Reuse the pointer setup from save */
> > > >      virtio_net_tx_waiting_pre_save(opaque);
> > > >
> > > > -    if (tmp->parent->curr_queues > tmp->parent->max_queues) {
> > > > -        error_report("virtio-net: curr_queues %x > max_queues %x",
> > > > -            tmp->parent->curr_queues, tmp->parent->max_queues);
> > > > +    if (tmp->parent->curr_qps > tmp->parent->max_qps) {
> > > > +        error_report("virtio-net: curr_qps %x > max_qps %x",
> > > > +            tmp->parent->curr_qps, tmp->parent->max_qps);
> > > >
> > > >          return -EINVAL;
> > > >      }
> > > > @@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
> > > >      .pre_save  = virtio_net_tx_waiting_pre_save,
> > > >      .fields    = (VMStateField[]) {
> > > >          VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
> > > > -                                     curr_queues_1,
> > > > +                                     curr_qps_1,
> > > >                                       vmstate_virtio_net_queue_tx_waiting,
> > > >                                       struct VirtIONetQueue),
> > > >          VMSTATE_END_OF_LIST()
> > > > @@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
> > > >          VMSTATE_UINT8(nobcast, VirtIONet),
> > > >          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
> > > >                           vmstate_virtio_net_has_ufo),
> > > > -        VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
> > > > +        VMSTATE_SINGLE_TEST(max_qps, VirtIONet, max_qps_gt_1, 0,
> > > >                              vmstate_info_uint16_equal, uint16_t),
> > > > -        VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
> > > > +        VMSTATE_UINT16_TEST(curr_qps, VirtIONet, max_qps_gt_1),
> > > >          VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
> > > >                           vmstate_virtio_net_tx_waiting),
> > > >          VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
> > > > @@ -3368,16 +3368,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > >          return;
> > > >      }
> > > >
> > > > -    n->max_queues = MAX(n->nic_conf.peers.queues, 1);
> > > > -    if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
> > > > -        error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
> > > > +    n->max_qps = MAX(n->nic_conf.peers.queues, 1);
> > > > +    if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
> > > > +        error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
> > > >                     "must be a positive integer less than %d.",
> > > > -                   n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
> > > > +                   n->max_qps, (VIRTIO_QUEUE_MAX - 1) / 2);
> > > >          virtio_cleanup(vdev);
> > > >          return;
> > > >      }
> > > > -    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
> > > > -    n->curr_queues = 1;
> > > > +    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_qps);
> > > > +    n->curr_qps = 1;
> > > >      n->tx_timeout = n->net_conf.txtimer;
> > > >
> > > >      if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
> > > > @@ -3391,7 +3391,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > >      n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
> > > >                                      n->net_conf.tx_queue_size);
> > > >
> > > > -    for (i = 0; i < n->max_queues; i++) {
> > > > +    for (i = 0; i < n->max_qps; i++) {
> > > >          virtio_net_add_queue(n, i);
> > > >      }
> > > >
> > > > @@ -3415,13 +3415,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > >                                object_get_typename(OBJECT(dev)), dev->id, n);
> > > >      }
> > > >
> > > > -    for (i = 0; i < n->max_queues; i++) {
> > > > +    for (i = 0; i < n->max_qps; i++) {
> > > >          n->nic->ncs[i].do_not_pad = true;
> > > >      }
> > > >
> > > >      peer_test_vnet_hdr(n);
> > > >      if (peer_has_vnet_hdr(n)) {
> > > > -        for (i = 0; i < n->max_queues; i++) {
> > > > +        for (i = 0; i < n->max_qps; i++) {
> > > >              qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
> > > >          }
> > > >          n->host_hdr_len = sizeof(struct virtio_net_hdr);
> > > > @@ -3463,7 +3463,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
> > > >  {
> > > >      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > >      VirtIONet *n = VIRTIO_NET(dev);
> > > > -    int i, max_queues;
> > > > +    int i, max_qps;
> > > >
> > > >      if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
> > > >          virtio_net_unload_ebpf(n);
> > > > @@ -3485,12 +3485,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
> > > >          remove_migration_state_change_notifier(&n->migration_state);
> > > >      }
> > > >
> > > > -    max_queues = n->multiqueue ? n->max_queues : 1;
> > > > -    for (i = 0; i < max_queues; i++) {
> > > > +    max_qps = n->multiqueue ? n->max_qps : 1;
> > > > +    for (i = 0; i < max_qps; i++) {
> > > >          virtio_net_del_queue(n, i);
> > > >      }
> > > >      /* delete also control vq */
> > > > -    virtio_del_queue(vdev, max_queues * 2);
> > > > +    virtio_del_queue(vdev, max_qps * 2);
> > > >      qemu_announce_timer_del(&n->announce_timer, false);
> > > >      g_free(n->vqs);
> > > >      qemu_del_nic(n->nic);
> > > > diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
> > > > index 824a69c23f..a9b6dc252e 100644
> > > > --- a/include/hw/virtio/virtio-net.h
> > > > +++ b/include/hw/virtio/virtio-net.h
> > > > @@ -194,8 +194,8 @@ struct VirtIONet {
> > > >      NICConf nic_conf;
> > > >      DeviceState *qdev;
> > > >      int multiqueue;
> > > > -    uint16_t max_queues;
> > > > -    uint16_t curr_queues;
> > > > +    uint16_t max_qps;
> > > > +    uint16_t curr_qps;
> > > >      size_t config_size;
> > > >      char *netclient_name;
> > > >      char *netclient_type;
> > > > --
> > > > 2.25.1
> > >
>



^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread

Thread overview: 29+ messages
2021-09-03  9:10 [PATCH V2 00/21] vhost-vDPA multiqueue Jason Wang
2021-09-03  9:10 ` [PATCH V2 01/21] vhost-vdpa: remove unused variable "acked_features" Jason Wang
2021-09-03  9:10 ` [PATCH V2 02/21] vhost-vdpa: correctly return err in vhost_vdpa_set_backend_cap() Jason Wang
2021-09-03  9:10 ` [PATCH V2 03/21] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
2021-09-03  9:10 ` [PATCH V2 04/21] vhost: use unsigned int for nvqs Jason Wang
2021-09-03  9:10 ` [PATCH V2 05/21] vhost_net: do not assume nvqs is always 2 Jason Wang
2021-09-03  9:10 ` [PATCH V2 06/21] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
2021-09-03  9:10 ` [PATCH V2 07/21] vhost-vdpa: don't cleanup twice " Jason Wang
2021-09-03  9:10 ` [PATCH V2 08/21] vhost-vdpa: fix leaking of vhost_net " Jason Wang
2021-09-03  9:10 ` [PATCH V2 09/21] vhost-vdpa: tweak the error label " Jason Wang
2021-09-03  9:10 ` [PATCH V2 10/21] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
2021-09-03  9:10 ` [PATCH V2 11/21] vhost-vdpa: remove the unncessary queue_index assignment Jason Wang
2021-09-03  9:10 ` [PATCH V2 12/21] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
2021-09-04 20:41   ` Michael S. Tsirkin
2021-09-03  9:10 ` [PATCH V2 13/21] vhost-vdpa: classify one time request Jason Wang
2021-09-03  9:10 ` [PATCH V2 14/21] vhost-vdpa: prepare for the multiqueue support Jason Wang
2021-09-03  9:10 ` [PATCH V2 15/21] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
2021-09-03  9:10 ` [PATCH V2 16/21] net: introduce control client Jason Wang
2021-09-03  9:10 ` [PATCH V2 17/21] vhost-net: control virtqueue support Jason Wang
2021-09-04 20:40   ` Michael S. Tsirkin
2021-09-06  3:43     ` Jason Wang
2021-09-03  9:10 ` [PATCH V2 18/21] virito-net: use "qps" instead of "queues" when possible Jason Wang
2021-09-04 20:42   ` Michael S. Tsirkin
2021-09-06  3:42     ` Jason Wang
2021-09-06  5:49       ` Michael S. Tsirkin
2021-09-06  6:54         ` Jason Wang
2021-09-03  9:10 ` [PATCH V2 19/21] vhost: record the last virtqueue index for the virtio device Jason Wang
2021-09-03  9:10 ` [PATCH V2 20/21] virtio-net: vhost control virtqueue support Jason Wang
2021-09-03  9:10 ` [PATCH V2 21/21] vhost-vdpa: multiqueue support Jason Wang
