* [PATCH 00/18] vhost-vDPA multiqueue
@ 2021-06-21 4:16 Jason Wang
2021-06-21 4:16 ` [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
` (18 more replies)
0 siblings, 19 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
Hi All:
This series implements multiqueue support for vhost-vDPA. The most
important requirement is control virtqueue support. The virtio-net
and vhost-net cores are tweaked to support the control virtqueue in
the same way as the data queue pairs: a dedicated vhost_net device
coupled with a NetClientState is introduced, so most of the existing
vhost code can be reused with minor changes. With the control
virtqueue in place, vhost-vDPA is extended to support creating and
destroying multiqueue queue pairs plus the control virtqueue.
Tests are done via the vp_vdpa driver in L1 guest.
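An illustrative invocation for such a test (the flags shown are the
existing vhost-vdpa netdev and virtio-net device properties; the
queue pair count itself is read from the device config space, see the
last patch):
qemu-system-x86_64 ... \
    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
    -device virtio-net-pci,netdev=vdpa0,mq=on,ctrl_vq=on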
Please review.
Thanks
Jason Wang (18):
vhost_net: remove the meaningless assignment in vhost_net_start_one()
vhost: use unsigned int for nvqs
vhost_net: do not assume nvqs is always 2
vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
vhost-vdpa: tweak the error label in vhost_vdpa_add()
vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
vhost-vdpa: remove the unnecessary queue_index assignment
vhost-vdpa: open device fd in net_init_vhost_vdpa()
vhost-vdpa: classify one time request
vhost-vdpa: prepare for the multiqueue support
vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
net: introduce control client
vhost-net: control virtqueue support
virtio-net: use "qps" instead of "queues" when possible
virtio-net: vhost control virtqueue support
vhost-vdpa: multiqueue support
hw/net/vhost_net.c | 48 +++++++---
hw/net/virtio-net.c | 165 ++++++++++++++++++---------------
hw/virtio/vhost-vdpa.c | 55 ++++++++++-
include/hw/virtio/vhost-vdpa.h | 1 +
include/hw/virtio/vhost.h | 2 +-
include/hw/virtio/virtio-net.h | 5 +-
include/net/net.h | 5 +
include/net/vhost_net.h | 7 +-
net/net.c | 24 ++++-
net/tap.c | 1 +
net/vhost-user.c | 1 +
net/vhost-vdpa.c | 150 +++++++++++++++++++++++-------
12 files changed, 326 insertions(+), 138 deletions(-)
--
2.25.1
* [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one()
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 11:45 ` Eli Cohen
2021-06-21 4:16 ` [PATCH 02/18] vhost: use unsigned int for nvqs Jason Wang
` (17 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
The nvqs and vqs fields have been initialized during vhost_net_init()
and are not expected to change during the life cycle of the vhost_net
structure. So this patch removes the meaningless assignment.
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/vhost_net.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 44c1ed92dc..6bd4184f96 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -238,9 +238,6 @@ static int vhost_net_start_one(struct vhost_net *net,
struct vhost_vring_file file = { };
int r;
- net->dev.nvqs = 2;
- net->dev.vqs = net->vqs;
-
r = vhost_dev_enable_notifiers(&net->dev, dev);
if (r < 0) {
goto fail_notifiers;
--
2.25.1
* [PATCH 02/18] vhost: use unsigned int for nvqs
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
2021-06-21 4:16 ` [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 11:46 ` Eli Cohen
2021-06-21 4:16 ` [PATCH 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
` (16 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
Switch to using an unsigned int for nvqs since it is never expected
to be negative.
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
include/hw/virtio/vhost.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 21a9a52088..ddd7d3d594 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -71,7 +71,7 @@ struct vhost_dev {
int n_tmp_sections;
MemoryRegionSection *tmp_sections;
struct vhost_virtqueue *vqs;
- int nvqs;
+ unsigned int nvqs;
/* the first virtqueue which would be used by this vhost dev */
int vq_index;
/* if non-zero, minimum required value for max_queues */
--
2.25.1
* [PATCH 03/18] vhost_net: do not assume nvqs is always 2
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
2021-06-21 4:16 ` [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
2021-06-21 4:16 ` [PATCH 02/18] vhost: use unsigned int for nvqs Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 14:49 ` Stefano Garzarella
2021-06-24 6:22 ` Eli Cohen
2021-06-21 4:16 ` [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
` (15 subsequent siblings)
18 siblings, 2 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
This patch switches to initializing dev.nvqs from the VhostNetOptions
instead of assuming it is always 2. This is useful for implementing
the control virtqueue support, which will use a single vhost_net
structure with a single cvq.
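As an illustrative sketch (not part of this patch), a cvq-only
vhost_net would then simply be initialized with a different count:
    VhostNetOptions options;
    options.backend_type = VHOST_BACKEND_TYPE_VDPA;
    options.net_backend = ncs;
    options.opaque = be;
    options.busyloop_timeout = 0;
    options.nvqs = 1; /* a single control virtqueue, not an RX/TX pair */
    net = vhost_net_init(&options);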
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/vhost_net.c | 2 +-
include/net/vhost_net.h | 1 +
net/tap.c | 1 +
net/vhost-user.c | 1 +
net/vhost-vdpa.c | 1 +
5 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 6bd4184f96..ef1370bd92 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -163,9 +163,9 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
goto fail;
}
net->nc = options->net_backend;
+ net->dev.nvqs = options->nvqs;
net->dev.max_queues = 1;
- net->dev.nvqs = 2;
net->dev.vqs = net->vqs;
if (backend_kernel) {
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 172b0051d8..fba40cf695 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -14,6 +14,7 @@ typedef struct VhostNetOptions {
VhostBackendType backend_type;
NetClientState *net_backend;
uint32_t busyloop_timeout;
+ unsigned int nvqs;
void *opaque;
} VhostNetOptions;
diff --git a/net/tap.c b/net/tap.c
index f5686bbf77..f716be3e3f 100644
--- a/net/tap.c
+++ b/net/tap.c
@@ -749,6 +749,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
qemu_set_nonblock(vhostfd);
}
options.opaque = (void *)(uintptr_t)vhostfd;
+ options.nvqs = 2;
s->vhost_net = vhost_net_init(&options);
if (!s->vhost_net) {
diff --git a/net/vhost-user.c b/net/vhost-user.c
index ffbd94d944..b93918c5a4 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -85,6 +85,7 @@ static int vhost_user_start(int queues, NetClientState *ncs[],
options.net_backend = ncs[i];
options.opaque = be;
options.busyloop_timeout = 0;
+ options.nvqs = 2;
net = vhost_net_init(&options);
if (!net) {
error_report("failed to init vhost_net for queue %d", i);
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 19187dce8c..18b45ad777 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -105,6 +105,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
options.net_backend = ncs;
options.opaque = be;
options.busyloop_timeout = 0;
+ options.nvqs = 2;
net = vhost_net_init(&options);
if (!net) {
--
2.25.1
* [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (2 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 14:53 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
` (14 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
The VhostVDPAState has just been allocated by qemu_new_net_client()
via g_malloc0() in net_vhost_vdpa_init(), so s->vhost_net is
guaranteed to be NULL. Let's remove this unnecessary check from
vhost_vdpa_add().
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 18b45ad777..728e63ff54 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -112,10 +112,6 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
error_report("failed to init vhost_net for queue");
goto err;
}
- if (s->vhost_net) {
- vhost_net_cleanup(s->vhost_net);
- g_free(s->vhost_net);
- }
s->vhost_net = net;
ret = vhost_vdpa_net_check_device_id(net);
if (ret) {
--
2.25.1
* [PATCH 05/18] vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (3 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 14:56 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
` (13 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
The previous vhost_net_cleanup() is sufficient for freeing; calling
vhost_vdpa_del() in this case would lead to an extra round of
freeing. Note that this kind of "double free" happens to be safe
since vhost_dev_cleanup() zeroes the whole structure.
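For reference, the zeroing mentioned above is the last step of
vhost_dev_cleanup() in hw/virtio/vhost.c (sketch):
    void vhost_dev_cleanup(struct vhost_dev *hdev)
    {
        /* ... release virtqueues, memory slots and the backend ... */
        memset(hdev, 0, sizeof(struct vhost_dev));
    }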
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 11 -----------
1 file changed, 11 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 728e63ff54..f5689a7c32 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -82,16 +82,6 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
return ret;
}
-static void vhost_vdpa_del(NetClientState *ncs)
-{
- VhostVDPAState *s;
- assert(ncs->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
- s = DO_UPCAST(VhostVDPAState, nc, ncs);
- if (s->vhost_net) {
- vhost_net_cleanup(s->vhost_net);
- }
-}
-
static int vhost_vdpa_add(NetClientState *ncs, void *be)
{
VhostNetOptions options;
@@ -122,7 +112,6 @@ err:
if (net) {
vhost_net_cleanup(net);
}
- vhost_vdpa_del(ncs);
return -1;
}
--
2.25.1
* [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (4 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 15:00 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 07/18] vhost-vdpa: tweak the error label " Jason Wang
` (12 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index f5689a7c32..21f09c546f 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
err:
if (net) {
vhost_net_cleanup(net);
+ g_free(net);
}
return -1;
}
--
2.25.1
* [PATCH 07/18] vhost-vdpa: tweak the error label in vhost_vdpa_add()
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (5 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 15:03 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
` (11 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
Introduce new error labels to avoid the unnecessary NULL check of the
net pointer.
Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 21f09c546f..0da7bc347a 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -100,19 +100,18 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
net = vhost_net_init(&options);
if (!net) {
error_report("failed to init vhost_net for queue");
- goto err;
+ goto err_init;
}
s->vhost_net = net;
ret = vhost_vdpa_net_check_device_id(net);
if (ret) {
- goto err;
+ goto err_check;
}
return 0;
-err:
- if (net) {
- vhost_net_cleanup(net);
- g_free(net);
- }
+err_check:
+ vhost_net_cleanup(net);
+ g_free(net);
+err_init:
return -1;
}
--
2.25.1
* [PATCH 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (6 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 07/18] vhost-vdpa: tweak the error label " Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 15:04 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 09/18] vhost-vdpa: remove the unnecessary queue_index assignment Jason Wang
` (10 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
vhost_vdpa_add() can fail for various reasons, so asserting that it
succeeds is wrong. Instead, we should free the NetClientState and
propagate the error to the caller.
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 0da7bc347a..87b181a74e 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -174,7 +174,10 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
}
s->vhost_vdpa.device_fd = vdpa_device_fd;
ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
- assert(s->vhost_net);
+ if (ret) {
+ qemu_close(vdpa_device_fd);
+ qemu_del_net_client(nc);
+ }
return ret;
}
--
2.25.1
* [PATCH 09/18] vhost-vdpa: remove the unnecessary queue_index assignment
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (7 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 15:05 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
` (9 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
The queue_index of the NetClientState is assigned in set_netdev()
afterwards, so setting it in net_vhost_vdpa_init() is meaningless.
This patch removes the assignment.
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 87b181a74e..572aed4ca2 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -166,7 +166,6 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
assert(name);
nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
- nc->queue_index = 0;
s = DO_UPCAST(VhostVDPAState, nc, nc);
vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
if (vdpa_device_fd == -1) {
--
2.25.1
* [PATCH 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa()
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (8 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 09/18] vhost-vdpa: remove the unnecessary queue_index assignment Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-23 15:07 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 11/18] vhost-vdpa: classify one time request Jason Wang
` (8 subsequent siblings)
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
This patch switches to opening the device fd in net_init_vhost_vdpa().
This is used to prepare for the multiqueue support.
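An illustrative sketch of where this is heading (argument list per
the truncated final patch; the loop and the queue_pairs count are
hypothetical here): the fd is opened once and shared by every queue
pair's net client:
    vdpa_device_fd = qemu_open_old(opts->vhostdev, O_RDWR);
    if (vdpa_device_fd == -1) {
        return -errno;
    }
    for (i = 0; i < queue_pairs; i++) {
        ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                     vdpa_device_fd, i, 2);
    }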
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 572aed4ca2..e63a54a938 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -157,24 +157,19 @@ static NetClientInfo net_vhost_vdpa_info = {
};
static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
- const char *name, const char *vhostdev)
+ const char *name, int vdpa_device_fd)
{
NetClientState *nc = NULL;
VhostVDPAState *s;
- int vdpa_device_fd = -1;
int ret = 0;
assert(name);
nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
s = DO_UPCAST(VhostVDPAState, nc, nc);
- vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
- if (vdpa_device_fd == -1) {
- return -errno;
- }
+
s->vhost_vdpa.device_fd = vdpa_device_fd;
ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
if (ret) {
- qemu_close(vdpa_device_fd);
qemu_del_net_client(nc);
}
return ret;
@@ -202,6 +197,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
NetClientState *peer, Error **errp)
{
const NetdevVhostVDPAOptions *opts;
+ int vdpa_device_fd, ret;
assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
opts = &netdev->u.vhost_vdpa;
@@ -210,5 +206,16 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
(char *)name, errp)) {
return -1;
}
- return net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, opts->vhostdev);
+
+ vdpa_device_fd = qemu_open_old(opts->vhostdev, O_RDWR);
+ if (vdpa_device_fd == -1) {
+ return -errno;
+ }
+
+ ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+ if (ret) {
+ qemu_close(vdpa_device_fd);
+ }
+
+ return ret;
}
--
2.25.1
* [PATCH 11/18] vhost-vdpa: classify one time request
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (9 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 4:16 ` [PATCH 12/18] vhost-vdpa: prepare for the multiqueue support Jason Wang
` (7 subsequent siblings)
18 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
Vhost-vdpa uses a single-device multiqueue (queue pairs) model, so we
need to classify the one-time requests (e.g. SET_OWNER) and make sure
those requests are only issued once per device.
This is needed for the multiqueue support.
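For reference, a sketch of the caller side this relies on (the index
is assigned per queue pair when the net client is set up, later in
this series):
    /* Only the vhost_vdpa instance with index 0 issues device-wide
     * requests; the instances for the other queue pairs return early. */
    s->vhost_vdpa.device_fd = vdpa_device_fd;
    s->vhost_vdpa.index = qp_index;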
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/virtio/vhost-vdpa.c | 51 ++++++++++++++++++++++++++++++++--
include/hw/virtio/vhost-vdpa.h | 1 +
2 files changed, 49 insertions(+), 3 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 61ba313331..397f47bc11 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -265,6 +265,13 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
}
+static bool vhost_vdpa_one_time_request(struct vhost_dev *dev)
+{
+ struct vhost_vdpa *v = dev->opaque;
+
+ return v->index != 0;
+}
+
static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
{
struct vhost_vdpa *v;
@@ -277,6 +284,10 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
v->listener = vhost_vdpa_memory_listener;
v->msg_type = VHOST_IOTLB_MSG_V2;
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
VIRTIO_CONFIG_S_DRIVER);
@@ -387,6 +398,10 @@ static int vhost_vdpa_memslots_limit(struct vhost_dev *dev)
static int vhost_vdpa_set_mem_table(struct vhost_dev *dev,
struct vhost_memory *mem)
{
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
trace_vhost_vdpa_set_mem_table(dev, mem->nregions, mem->padding);
if (trace_event_get_state_backends(TRACE_VHOST_VDPA_SET_MEM_TABLE) &&
trace_event_get_state_backends(TRACE_VHOST_VDPA_DUMP_REGIONS)) {
@@ -410,6 +425,11 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
uint64_t features)
{
int ret;
+
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
trace_vhost_vdpa_set_features(dev, features);
ret = vhost_vdpa_call(dev, VHOST_SET_FEATURES, &features);
uint8_t status = 0;
@@ -429,6 +449,10 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
int r;
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
return 0;
}
@@ -458,6 +482,10 @@ static int vhost_vdpa_reset_device(struct vhost_dev *dev)
int ret;
uint8_t status = 0;
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
ret = vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &status);
trace_vhost_vdpa_reset_device(dev, status);
return ret;
@@ -545,11 +573,21 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
{
struct vhost_vdpa *v = dev->opaque;
trace_vhost_vdpa_dev_start(dev, started);
+
if (started) {
- uint8_t status = 0;
- memory_listener_register(&v->listener, &address_space_memory);
vhost_vdpa_host_notifiers_init(dev);
vhost_vdpa_set_vring_ready(dev);
+ } else {
+ vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
+ }
+
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
+ if (started) {
+ uint8_t status = 0;
+ memory_listener_register(&v->listener, &address_space_memory);
vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
vhost_vdpa_call(dev, VHOST_VDPA_GET_STATUS, &status);
@@ -558,7 +596,6 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
vhost_vdpa_reset_device(dev);
vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
VIRTIO_CONFIG_S_DRIVER);
- vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
memory_listener_unregister(&v->listener);
return 0;
@@ -568,6 +605,10 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
struct vhost_log *log)
{
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
trace_vhost_vdpa_set_log_base(dev, base, log->size, log->refcnt, log->fd,
log->log);
return vhost_vdpa_call(dev, VHOST_SET_LOG_BASE, &base);
@@ -633,6 +674,10 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
static int vhost_vdpa_set_owner(struct vhost_dev *dev)
{
+ if (vhost_vdpa_one_time_request(dev)) {
+ return 0;
+ }
+
trace_vhost_vdpa_set_owner(dev);
return vhost_vdpa_call(dev, VHOST_SET_OWNER, NULL);
}
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 9188226d8b..e98e327f12 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -21,6 +21,7 @@ typedef struct VhostVDPAHostNotifier {
typedef struct vhost_vdpa {
int device_fd;
+ int index;
uint32_t msg_type;
MemoryListener listener;
struct vhost_dev *dev;
--
2.25.1
* [PATCH 12/18] vhost-vdpa: prepare for the multiqueue support
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (10 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 11/18] vhost-vdpa: classify one time request Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 4:16 ` [PATCH 13/18] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
` (6 subsequent siblings)
18 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
Unlike vhost-kernel, vhost-vdpa adopts a single-device multiqueue
model, so we simply need to use the virtqueue index as the vhost
virtqueue index. This is a must for multiqueue to work with
vhost-vdpa.
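A worked example of the indexing (illustrative):
    /*
     * With two data queue pairs, the vhost_dev backing pair 1 has
     * vq_index = 2 and nvqs = 2. For that pair's TX queue, idx = 3:
     *
     *   vhost-kernel: returns idx - dev->vq_index = 1 (per-device vq)
     *   vhost-vdpa:   returns idx                 = 3 (global vq of
     *                                                  the single device)
     */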
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/virtio/vhost-vdpa.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 397f47bc11..e7e6b23108 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -495,8 +495,8 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
{
assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
- trace_vhost_vdpa_get_vq_index(dev, idx, idx - dev->vq_index);
- return idx - dev->vq_index;
+ trace_vhost_vdpa_get_vq_index(dev, idx, idx);
+ return idx;
}
static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
--
2.25.1
* [PATCH 13/18] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (11 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 12/18] vhost-vdpa: prepare for the multiqueue support Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 4:16 ` [PATCH 14/18] net: introduce control client Jason Wang
` (5 subsequent siblings)
18 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
This patch switches to letting net_vhost_vdpa_init() return a
NetClientState *. This allows the callers to allocate multiple
NetClientStates for the multiqueue support.
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e63a54a938..cc11b2ec40 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -156,8 +156,10 @@ static NetClientInfo net_vhost_vdpa_info = {
.has_ufo = vhost_vdpa_has_ufo,
};
-static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
- const char *name, int vdpa_device_fd)
+static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
+ const char *device,
+ const char *name,
+ int vdpa_device_fd)
{
NetClientState *nc = NULL;
VhostVDPAState *s;
@@ -171,8 +173,9 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
if (ret) {
qemu_del_net_client(nc);
+ return NULL;
}
- return ret;
+ return nc;
}
static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
@@ -197,7 +200,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
NetClientState *peer, Error **errp)
{
const NetdevVhostVDPAOptions *opts;
- int vdpa_device_fd, ret;
+ int vdpa_device_fd;
+ NetClientState *nc;
assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
opts = &netdev->u.vhost_vdpa;
@@ -212,10 +216,11 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
return -errno;
}
- ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
- if (ret) {
+ nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
+ if (!nc) {
qemu_close(vdpa_device_fd);
+ return -1;
}
- return ret;
+ return 0;
}
--
2.25.1
* [PATCH 14/18] net: introduce control client
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (12 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 13/18] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 4:16 ` [PATCH 15/18] vhost-net: control virtqueue support Jason Wang
` (4 subsequent siblings)
18 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
This patch introduces a boolean that indicates whether the device has
a control queue which can accept control commands via a network
queue.
The first user will be the control virtqueue support for vhost.
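An illustrative sketch of how a backend would use it (the vhost-vDPA
user only lands in the final patch of this series; is_datapath here
is a hypothetical flag of the caller):
    if (is_datapath) {
        nc = qemu_new_net_client(&net_vhost_vdpa_info, peer,
                                 device, name);
    } else {
        nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
                                         device, name);
    }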
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
include/net/net.h | 5 +++++
net/net.c | 24 +++++++++++++++++++++---
2 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/include/net/net.h b/include/net/net.h
index 5d1508081f..4f400b8a09 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -103,6 +103,7 @@ struct NetClientState {
int vnet_hdr_len;
bool is_netdev;
bool do_not_pad; /* do not pad to the minimum ethernet frame length */
+ bool is_datapath;
QTAILQ_HEAD(, NetFilterState) filters;
};
@@ -134,6 +135,10 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
NetClientState *peer,
const char *model,
const char *name);
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+ NetClientState *peer,
+ const char *model,
+ const char *name);
NICState *qemu_new_nic(NetClientInfo *info,
NICConf *conf,
const char *model,
diff --git a/net/net.c b/net/net.c
index 76bbb7c31b..fcaf9c7715 100644
--- a/net/net.c
+++ b/net/net.c
@@ -237,7 +237,8 @@ static void qemu_net_client_setup(NetClientState *nc,
NetClientState *peer,
const char *model,
const char *name,
- NetClientDestructor *destructor)
+ NetClientDestructor *destructor,
+ bool is_datapath)
{
nc->info = info;
nc->model = g_strdup(model);
@@ -256,6 +257,7 @@ static void qemu_net_client_setup(NetClientState *nc,
nc->incoming_queue = qemu_new_net_queue(qemu_deliver_packet_iov, nc);
nc->destructor = destructor;
+ nc->is_datapath = is_datapath;
QTAILQ_INIT(&nc->filters);
}
@@ -270,7 +272,23 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
nc = g_malloc0(info->size);
qemu_net_client_setup(nc, info, peer, model, name,
- qemu_net_client_destructor);
+ qemu_net_client_destructor, true);
+
+ return nc;
+}
+
+NetClientState *qemu_new_net_control_client(NetClientInfo *info,
+ NetClientState *peer,
+ const char *model,
+ const char *name)
+{
+ NetClientState *nc;
+
+ assert(info->size >= sizeof(NetClientState));
+
+ nc = g_malloc0(info->size);
+ qemu_net_client_setup(nc, info, peer, model, name,
+ qemu_net_client_destructor, false);
return nc;
}
@@ -295,7 +313,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
for (i = 0; i < queues; i++) {
qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
- NULL);
+ NULL, true);
nic->ncs[i].queue_index = i;
}
--
2.25.1
* [PATCH 15/18] vhost-net: control virtqueue support
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (13 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 14/18] net: introduce control client Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-24 7:42 ` Eli Cohen
2021-06-30 17:33 ` Eugenio Perez Martin
2021-06-21 4:16 ` [PATCH 16/18] virtio-net: use "qps" instead of "queues" when possible Jason Wang
` (3 subsequent siblings)
18 siblings, 2 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
In the past, we assumed there was no cvq; this is no longer true when
we need control virtqueue support for vhost backends such as
vhost-vDPA. So this patch implements the control virtqueue support
for vhost-net. Like the datapath queues, the control virtqueue is
also required to be coupled with a NetClientState. The
vhost_net_start/stop() functions are tweaked to accept the number of
datapath queue pairs plus the number of control virtqueues for us to
start and stop the vhost device.
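The ncs[] layout assumed by the start/stop loops below (sketch):
    /*
     *   ncs[0] .. ncs[data_qps - 1]   datapath queue pairs (2 vqs each)
     *   ncs[n->max_qps]               control virtqueue peer (1 vq)
     *
     * hence total_notifiers = data_qps * 2 + cvq and
     * nvhosts = data_qps + cvq, with cvq being 0 or 1.
     */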
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/vhost_net.c | 43 ++++++++++++++++++++++++++++++-----------
hw/net/virtio-net.c | 4 ++--
include/net/vhost_net.h | 6 ++++--
3 files changed, 38 insertions(+), 15 deletions(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index ef1370bd92..fe2fd7e3d5 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -311,11 +311,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
}
int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
- int total_queues)
+ int data_qps, int cvq)
{
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+ int total_notifiers = data_qps * 2 + cvq;
+ VirtIONet *n = VIRTIO_NET(dev);
+ int nvhosts = data_qps + cvq;
struct vhost_net *net;
int r, e, i;
NetClientState *peer;
@@ -325,9 +328,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
return -ENOSYS;
}
- for (i = 0; i < total_queues; i++) {
+ for (i = 0; i < nvhosts; i++) {
+
+ if (i < data_qps) {
+ peer = qemu_get_peer(ncs, i);
+ } else { /* Control Virtqueue */
+ peer = qemu_get_peer(ncs, n->max_qps);
+ }
- peer = qemu_get_peer(ncs, i);
net = get_vhost_net(peer);
vhost_net_set_vq_index(net, i * 2);
@@ -340,14 +348,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
}
}
- r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
+ r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
if (r < 0) {
error_report("Error binding guest notifier: %d", -r);
goto err;
}
- for (i = 0; i < total_queues; i++) {
- peer = qemu_get_peer(ncs, i);
+ for (i = 0; i < nvhosts; i++) {
+ if (i < data_qps) {
+ peer = qemu_get_peer(ncs, i);
+ } else {
+ peer = qemu_get_peer(ncs, n->max_qps);
+ }
r = vhost_net_start_one(get_vhost_net(peer), dev);
if (r < 0) {
@@ -371,7 +383,7 @@ err_start:
peer = qemu_get_peer(ncs , i);
vhost_net_stop_one(get_vhost_net(peer), dev);
}
- e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+ e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
if (e < 0) {
fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
fflush(stderr);
@@ -381,18 +393,27 @@ err:
}
void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
- int total_queues)
+ int data_qps, int cvq)
{
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+ VirtIONet *n = VIRTIO_NET(dev);
+ NetClientState *peer;
+ int total_notifiers = data_qps * 2 + cvq;
+ int nvhosts = data_qps + cvq;
int i, r;
- for (i = 0; i < total_queues; i++) {
- vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
+ for (i = 0; i < nvhosts; i++) {
+ if (i < data_qps) {
+ peer = qemu_get_peer(ncs, i);
+ } else {
+ peer = qemu_get_peer(ncs, n->max_qps);
+ }
+ vhost_net_stop_one(get_vhost_net(peer), dev);
}
- r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+ r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
if (r < 0) {
fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
fflush(stderr);
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index bd7958b9f0..614660274c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
}
n->vhost_started = 1;
- r = vhost_net_start(vdev, n->nic->ncs, queues);
+ r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
if (r < 0) {
error_report("unable to start vhost net: %d: "
"falling back on userspace virtio", -r);
n->vhost_started = 0;
}
} else {
- vhost_net_stop(vdev, n->nic->ncs, queues);
+ vhost_net_stop(vdev, n->nic->ncs, queues, 0);
n->vhost_started = 0;
}
}
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index fba40cf695..e656e38af9 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
uint64_t vhost_net_get_max_queues(VHostNetState *net);
struct vhost_net *vhost_net_init(VhostNetOptions *options);
-int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
-void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
+int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
+ int data_qps, int cvq);
+void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
+ int data_qps, int cvq);
void vhost_net_cleanup(VHostNetState *net);
--
2.25.1
* [PATCH 16/18] virtio-net: use "qps" instead of "queues" when possible
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (14 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 15/18] vhost-net: control virtqueue support Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 4:16 ` [PATCH 17/18] virtio-net: vhost control virtqueue support Jason Wang
` (2 subsequent siblings)
18 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
Most of the time, "queues" really means queue pairs, so this patch
switches to using "qps" to avoid confusion.
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/virtio-net.c | 150 ++++++++++++++++-----------------
include/hw/virtio/virtio-net.h | 4 +-
2 files changed, 77 insertions(+), 77 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 614660274c..36bd197087 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -54,7 +54,7 @@
#define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
-/* for now, only allow larger queues; with virtio-1, guest can downsize */
+/* for now, only allow larger qps; with virtio-1, guest can downsize */
#define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
#define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
@@ -131,7 +131,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
int ret = 0;
memset(&netcfg, 0 , sizeof(struct virtio_net_config));
virtio_stw_p(vdev, &netcfg.status, n->status);
- virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
+ virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_qps);
virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
memcpy(netcfg.mac, n->mac, ETH_ALEN);
virtio_stl_p(vdev, &netcfg.speed, n->net_conf.speed);
@@ -243,7 +243,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
{
VirtIODevice *vdev = VIRTIO_DEVICE(n);
NetClientState *nc = qemu_get_queue(n->nic);
- int queues = n->multiqueue ? n->max_queues : 1;
+ int qps = n->multiqueue ? n->max_qps : 1;
if (!get_vhost_net(nc->peer)) {
return;
@@ -266,7 +266,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
/* Any packets outstanding? Purge them to avoid touching rings
* when vhost is running.
*/
- for (i = 0; i < queues; i++) {
+ for (i = 0; i < qps; i++) {
NetClientState *qnc = qemu_get_subqueue(n->nic, i);
/* Purge both directions: TX and RX. */
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
}
n->vhost_started = 1;
- r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
+ r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
if (r < 0) {
error_report("unable to start vhost net: %d: "
"falling back on userspace virtio", -r);
n->vhost_started = 0;
}
} else {
- vhost_net_stop(vdev, n->nic->ncs, queues, 0);
+ vhost_net_stop(vdev, n->nic->ncs, qps, 0);
n->vhost_started = 0;
}
}
@@ -309,11 +309,11 @@ static int virtio_net_set_vnet_endian_one(VirtIODevice *vdev,
}
static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
- int queues, bool enable)
+ int qps, bool enable)
{
int i;
- for (i = 0; i < queues; i++) {
+ for (i = 0; i < qps; i++) {
if (virtio_net_set_vnet_endian_one(vdev, ncs[i].peer, enable) < 0 &&
enable) {
while (--i >= 0) {
@@ -330,7 +330,7 @@ static bool virtio_net_set_vnet_endian(VirtIODevice *vdev, NetClientState *ncs,
static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
{
VirtIODevice *vdev = VIRTIO_DEVICE(n);
- int queues = n->multiqueue ? n->max_queues : 1;
+ int qps = n->multiqueue ? n->max_qps : 1;
if (virtio_net_started(n, status)) {
/* Before using the device, we tell the network backend about the
@@ -339,14 +339,14 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
* virtio-net code.
*/
n->needs_vnet_hdr_swap = virtio_net_set_vnet_endian(vdev, n->nic->ncs,
- queues, true);
+ qps, true);
} else if (virtio_net_started(n, vdev->status)) {
/* After using the device, we need to reset the network backend to
* the default (guest native endianness), otherwise the guest may
* lose network connectivity if it is rebooted into a different
* endianness.
*/
- virtio_net_set_vnet_endian(vdev, n->nic->ncs, queues, false);
+ virtio_net_set_vnet_endian(vdev, n->nic->ncs, qps, false);
}
}
@@ -368,12 +368,12 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
virtio_net_vnet_endian_status(n, status);
virtio_net_vhost_status(n, status);
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
NetClientState *ncs = qemu_get_subqueue(n->nic, i);
bool queue_started;
q = &n->vqs[i];
- if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
+ if ((!n->multiqueue && i != 0) || i >= n->curr_qps) {
queue_status = 0;
} else {
queue_status = status;
@@ -540,7 +540,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
n->nouni = 0;
n->nobcast = 0;
/* multiqueue is disabled by default */
- n->curr_queues = 1;
+ n->curr_qps = 1;
timer_del(n->announce_timer.tm);
n->announce_timer.round = 0;
n->status &= ~VIRTIO_NET_S_ANNOUNCE;
@@ -556,7 +556,7 @@ static void virtio_net_reset(VirtIODevice *vdev)
memset(n->vlans, 0, MAX_VLAN >> 3);
/* Flush any async TX */
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
NetClientState *nc = qemu_get_subqueue(n->nic, i);
if (nc->peer) {
@@ -610,7 +610,7 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs,
sizeof(struct virtio_net_hdr);
}
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
nc = qemu_get_subqueue(n->nic, i);
if (peer_has_vnet_hdr(n) &&
@@ -655,7 +655,7 @@ static int peer_attach(VirtIONet *n, int index)
return 0;
}
- if (n->max_queues == 1) {
+ if (n->max_qps == 1) {
return 0;
}
@@ -681,7 +681,7 @@ static int peer_detach(VirtIONet *n, int index)
return tap_disable(nc->peer);
}
-static void virtio_net_set_queues(VirtIONet *n)
+static void virtio_net_set_qps(VirtIONet *n)
{
int i;
int r;
@@ -690,8 +690,8 @@ static void virtio_net_set_queues(VirtIONet *n)
return;
}
- for (i = 0; i < n->max_queues; i++) {
- if (i < n->curr_queues) {
+ for (i = 0; i < n->max_qps; i++) {
+ if (i < n->curr_qps) {
r = peer_attach(n, i);
assert(!r);
} else {
@@ -920,7 +920,7 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
virtio_net_apply_guest_offloads(n);
}
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
NetClientState *nc = qemu_get_subqueue(n->nic, i);
if (!get_vhost_net(nc->peer)) {
@@ -1247,7 +1247,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
VirtIODevice *vdev = VIRTIO_DEVICE(n);
struct virtio_net_rss_config cfg;
size_t s, offset = 0, size_get;
- uint16_t queues, i;
+ uint16_t qps, i;
struct {
uint16_t us;
uint8_t b;
@@ -1289,7 +1289,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
}
n->rss_data.default_queue = do_rss ?
virtio_lduw_p(vdev, &cfg.unclassified_queue) : 0;
- if (n->rss_data.default_queue >= n->max_queues) {
+ if (n->rss_data.default_queue >= n->max_qps) {
err_msg = "Invalid default queue";
err_value = n->rss_data.default_queue;
goto error;
@@ -1318,14 +1318,14 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
size_get = sizeof(temp);
s = iov_to_buf(iov, iov_cnt, offset, &temp, size_get);
if (s != size_get) {
- err_msg = "Can't get queues";
+ err_msg = "Can't get qps";
err_value = (uint32_t)s;
goto error;
}
- queues = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_queues;
- if (queues == 0 || queues > n->max_queues) {
- err_msg = "Invalid number of queues";
- err_value = queues;
+ qps = do_rss ? virtio_lduw_p(vdev, &temp.us) : n->curr_qps;
+ if (qps == 0 || qps > n->max_qps) {
+ err_msg = "Invalid number of qps";
+ err_value = qps;
goto error;
}
if (temp.b > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
@@ -1340,7 +1340,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
}
if (!temp.b && !n->rss_data.hash_types) {
virtio_net_disable_rss(n);
- return queues;
+ return qps;
}
offset += size_get;
size_get = temp.b;
@@ -1373,7 +1373,7 @@ static uint16_t virtio_net_handle_rss(VirtIONet *n,
trace_virtio_net_rss_enable(n->rss_data.hash_types,
n->rss_data.indirections_len,
temp.b);
- return queues;
+ return qps;
error:
trace_virtio_net_rss_error(err_msg, err_value);
virtio_net_disable_rss(n);
@@ -1384,15 +1384,15 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
struct iovec *iov, unsigned int iov_cnt)
{
VirtIODevice *vdev = VIRTIO_DEVICE(n);
- uint16_t queues;
+ uint16_t qps;
virtio_net_disable_rss(n);
if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
- queues = virtio_net_handle_rss(n, iov, iov_cnt, false);
- return queues ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
+ qps = virtio_net_handle_rss(n, iov, iov_cnt, false);
+ return qps ? VIRTIO_NET_OK : VIRTIO_NET_ERR;
}
if (cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
- queues = virtio_net_handle_rss(n, iov, iov_cnt, true);
+ qps = virtio_net_handle_rss(n, iov, iov_cnt, true);
} else if (cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
struct virtio_net_ctrl_mq mq;
size_t s;
@@ -1403,24 +1403,24 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
if (s != sizeof(mq)) {
return VIRTIO_NET_ERR;
}
- queues = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
+ qps = virtio_lduw_p(vdev, &mq.virtqueue_pairs);
} else {
return VIRTIO_NET_ERR;
}
- if (queues < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
- queues > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
- queues > n->max_queues ||
+ if (qps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
+ qps > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
+ qps > n->max_qps ||
!n->multiqueue) {
return VIRTIO_NET_ERR;
}
- n->curr_queues = queues;
- /* stop the backend before changing the number of queues to avoid handling a
+ n->curr_qps = qps;
+ /* stop the backend before changing the number of qps to avoid handling a
* disabled queue */
virtio_net_set_status(vdev, vdev->status);
- virtio_net_set_queues(n);
+ virtio_net_set_qps(n);
return VIRTIO_NET_OK;
}
@@ -1498,7 +1498,7 @@ static bool virtio_net_can_receive(NetClientState *nc)
return false;
}
- if (nc->queue_index >= n->curr_queues) {
+ if (nc->queue_index >= n->curr_qps) {
return false;
}
@@ -2753,11 +2753,11 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
virtio_del_queue(vdev, index * 2 + 1);
}
-static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
+static void virtio_net_change_num_qps(VirtIONet *n, int new_max_qps)
{
VirtIODevice *vdev = VIRTIO_DEVICE(n);
int old_num_queues = virtio_get_num_queues(vdev);
- int new_num_queues = new_max_queues * 2 + 1;
+ int new_num_queues = new_max_qps * 2 + 1;
int i;
assert(old_num_queues >= 3);
@@ -2790,12 +2790,12 @@ static void virtio_net_change_num_queues(VirtIONet *n, int new_max_queues)
static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
{
- int max = multiqueue ? n->max_queues : 1;
+ int max = multiqueue ? n->max_qps : 1;
n->multiqueue = multiqueue;
- virtio_net_change_num_queues(n, max);
+ virtio_net_change_num_qps(n, max);
- virtio_net_set_queues(n);
+ virtio_net_set_qps(n);
}
static int virtio_net_post_load_device(void *opaque, int version_id)
@@ -2828,7 +2828,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
*/
n->saved_guest_offloads = n->curr_guest_offloads;
- virtio_net_set_queues(n);
+ virtio_net_set_qps(n);
/* Find the first multicast entry in the saved MAC filter */
for (i = 0; i < n->mac_table.in_use; i++) {
@@ -2841,7 +2841,7 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
/* nc.link_down can't be migrated, so infer link_down according
* to link status bit in n->status */
link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
qemu_get_subqueue(n->nic, i)->link_down = link_down;
}
@@ -2906,9 +2906,9 @@ static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
},
};
-static bool max_queues_gt_1(void *opaque, int version_id)
+static bool max_qps_gt_1(void *opaque, int version_id)
{
- return VIRTIO_NET(opaque)->max_queues > 1;
+ return VIRTIO_NET(opaque)->max_qps > 1;
}
static bool has_ctrl_guest_offloads(void *opaque, int version_id)
@@ -2933,13 +2933,13 @@ static bool mac_table_doesnt_fit(void *opaque, int version_id)
struct VirtIONetMigTmp {
VirtIONet *parent;
VirtIONetQueue *vqs_1;
- uint16_t curr_queues_1;
+ uint16_t curr_qps_1;
uint8_t has_ufo;
uint32_t has_vnet_hdr;
};
/* The 2nd and subsequent tx_waiting flags are loaded later than
- * the 1st entry in the queues and only if there's more than one
+ * the 1st entry in the qps and only if there's more than one
* entry. We use the tmp mechanism to calculate a temporary
* pointer and count and also validate the count.
*/
@@ -2949,9 +2949,9 @@ static int virtio_net_tx_waiting_pre_save(void *opaque)
struct VirtIONetMigTmp *tmp = opaque;
tmp->vqs_1 = tmp->parent->vqs + 1;
- tmp->curr_queues_1 = tmp->parent->curr_queues - 1;
- if (tmp->parent->curr_queues == 0) {
- tmp->curr_queues_1 = 0;
+ tmp->curr_qps_1 = tmp->parent->curr_qps - 1;
+ if (tmp->parent->curr_qps == 0) {
+ tmp->curr_qps_1 = 0;
}
return 0;
@@ -2964,9 +2964,9 @@ static int virtio_net_tx_waiting_pre_load(void *opaque)
/* Reuse the pointer setup from save */
virtio_net_tx_waiting_pre_save(opaque);
- if (tmp->parent->curr_queues > tmp->parent->max_queues) {
- error_report("virtio-net: curr_queues %x > max_queues %x",
- tmp->parent->curr_queues, tmp->parent->max_queues);
+ if (tmp->parent->curr_qps > tmp->parent->max_qps) {
+ error_report("virtio-net: curr_qps %x > max_qps %x",
+ tmp->parent->curr_qps, tmp->parent->max_qps);
return -EINVAL;
}
@@ -2980,7 +2980,7 @@ static const VMStateDescription vmstate_virtio_net_tx_waiting = {
.pre_save = virtio_net_tx_waiting_pre_save,
.fields = (VMStateField[]) {
VMSTATE_STRUCT_VARRAY_POINTER_UINT16(vqs_1, struct VirtIONetMigTmp,
- curr_queues_1,
+ curr_qps_1,
vmstate_virtio_net_queue_tx_waiting,
struct VirtIONetQueue),
VMSTATE_END_OF_LIST()
@@ -3122,9 +3122,9 @@ static const VMStateDescription vmstate_virtio_net_device = {
VMSTATE_UINT8(nobcast, VirtIONet),
VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
vmstate_virtio_net_has_ufo),
- VMSTATE_SINGLE_TEST(max_queues, VirtIONet, max_queues_gt_1, 0,
+ VMSTATE_SINGLE_TEST(max_qps, VirtIONet, max_qps_gt_1, 0,
vmstate_info_uint16_equal, uint16_t),
- VMSTATE_UINT16_TEST(curr_queues, VirtIONet, max_queues_gt_1),
+ VMSTATE_UINT16_TEST(curr_qps, VirtIONet, max_qps_gt_1),
VMSTATE_WITH_TMP(VirtIONet, struct VirtIONetMigTmp,
vmstate_virtio_net_tx_waiting),
VMSTATE_UINT64_TEST(curr_guest_offloads, VirtIONet,
@@ -3367,16 +3367,16 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
return;
}
- n->max_queues = MAX(n->nic_conf.peers.queues, 1);
- if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
- error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
+ n->max_qps = MAX(n->nic_conf.peers.queues, 1);
+ if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
+ error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
"must be a positive integer less than %d.",
- n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
+ n->max_qps, (VIRTIO_QUEUE_MAX - 1) / 2);
virtio_cleanup(vdev);
return;
}
- n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
- n->curr_queues = 1;
+ n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_qps);
+ n->curr_qps = 1;
n->tx_timeout = n->net_conf.txtimer;
if (n->net_conf.tx && strcmp(n->net_conf.tx, "timer")
@@ -3390,7 +3390,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
n->net_conf.tx_queue_size);
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
virtio_net_add_queue(n, i);
}
@@ -3414,13 +3414,13 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
object_get_typename(OBJECT(dev)), dev->id, n);
}
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
n->nic->ncs[i].do_not_pad = true;
}
peer_test_vnet_hdr(n);
if (peer_has_vnet_hdr(n)) {
- for (i = 0; i < n->max_queues; i++) {
+ for (i = 0; i < n->max_qps; i++) {
qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
}
n->host_hdr_len = sizeof(struct virtio_net_hdr);
@@ -3462,7 +3462,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VirtIONet *n = VIRTIO_NET(dev);
- int i, max_queues;
+ int i, max_qps;
if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
virtio_net_unload_ebpf(n);
@@ -3484,12 +3484,12 @@ static void virtio_net_device_unrealize(DeviceState *dev)
remove_migration_state_change_notifier(&n->migration_state);
}
- max_queues = n->multiqueue ? n->max_queues : 1;
- for (i = 0; i < max_queues; i++) {
+ max_qps = n->multiqueue ? n->max_qps : 1;
+ for (i = 0; i < max_qps; i++) {
virtio_net_del_queue(n, i);
}
/* delete also control vq */
- virtio_del_queue(vdev, max_queues * 2);
+ virtio_del_queue(vdev, max_qps * 2);
qemu_announce_timer_del(&n->announce_timer, false);
g_free(n->vqs);
qemu_del_nic(n->nic);
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 824a69c23f..a9b6dc252e 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -194,8 +194,8 @@ struct VirtIONet {
NICConf nic_conf;
DeviceState *qdev;
int multiqueue;
- uint16_t max_queues;
- uint16_t curr_queues;
+ uint16_t max_qps;
+ uint16_t curr_qps;
size_t config_size;
char *netclient_name;
char *netclient_type;
--
2.25.1
* [PATCH 17/18] virtio-net: vhost control virtqueue support
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (15 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 16/18] virtio-net: use "qps" instead of "queues" when possible Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-06-21 4:16 ` [PATCH 18/18] vhost-vdpa: multiqueue support Jason Wang
2021-06-21 4:33 ` [PATCH 00/18] vhost-vDPA multiqueue no-reply
18 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
This patch implements the control virtqueue support for vhost. This
requires virtio-net to distinguish the datapath queue pairs from the
control virtqueue via is_datapath, and to pass the number of each of
those two types of virtqueues to vhost_net_start()/vhost_net_stop().
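For example (illustrative): a vhost-vDPA backend providing two
datapath queue pairs plus one control NetClientState yields
max_ncs = 3, max_qps = 2 and cvq = 1, while a two-queue tap backend
keeps max_qps = 2 and cvq = 0.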
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/virtio-net.c | 20 +++++++++++++++++---
include/hw/virtio/virtio-net.h | 1 +
2 files changed, 18 insertions(+), 3 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 36bd197087..5074b521cf 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -244,6 +244,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
VirtIODevice *vdev = VIRTIO_DEVICE(n);
NetClientState *nc = qemu_get_queue(n->nic);
int qps = n->multiqueue ? n->max_qps : 1;
+ int cvq = n->max_ncs - n->max_qps;
if (!get_vhost_net(nc->peer)) {
return;
@@ -285,14 +286,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
}
n->vhost_started = 1;
- r = vhost_net_start(vdev, n->nic->ncs, qps, 0);
+ r = vhost_net_start(vdev, n->nic->ncs, qps, cvq);
if (r < 0) {
error_report("unable to start vhost net: %d: "
"falling back on userspace virtio", -r);
n->vhost_started = 0;
}
} else {
- vhost_net_stop(vdev, n->nic->ncs, qps, 0);
+ vhost_net_stop(vdev, n->nic->ncs, qps, cvq);
n->vhost_started = 0;
}
}
@@ -3367,7 +3368,20 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
return;
}
- n->max_qps = MAX(n->nic_conf.peers.queues, 1);
+ n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
+
+ /* Figure out the datapath queue pairs since the backend could
+ * provide control queue via peers as well.
+ */
+ if (n->nic_conf.peers.queues) {
+ for (i = 0; i < n->max_ncs; i++) {
+ if (n->nic_conf.peers.ncs[i]->is_datapath) {
+ ++n->max_qps;
+ }
+ }
+ }
+ n->max_qps = MAX(n->max_qps, 1);
+
if (n->max_qps * 2 + 1 > VIRTIO_QUEUE_MAX) {
error_setg(errp, "Invalid number of qps (= %" PRIu32 "), "
"must be a positive integer less than %d.",
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index a9b6dc252e..ed4659c189 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -194,6 +194,7 @@ struct VirtIONet {
NICConf nic_conf;
DeviceState *qdev;
int multiqueue;
+ uint16_t max_ncs;
uint16_t max_qps;
uint16_t curr_qps;
size_t config_size;
--
2.25.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH 18/18] vhost-vdpa: multiqueue support
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (16 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 17/18] virtio-net: vhost control virtqueue support Jason Wang
@ 2021-06-21 4:16 ` Jason Wang
2021-07-01 6:51 ` Eugenio Perez Martin
2021-06-21 4:33 ` [PATCH 00/18] vhost-vDPA multiqueue no-reply
18 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-21 4:16 UTC (permalink / raw)
To: mst, jasowang, qemu-devel; +Cc: eperezma, elic, lingshan.zhu, lulu
This patch implements the multiqueue support for vhost-vdpa. This is
done simply by reading the number of queue pairs from the config space
and initializing the datapath and control path net clients.
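As a usage sketch (the device node path is illustrative and assumes a
backend that negotiates VIRTIO_NET_F_MQ), such a device could be
brought up along the lines of:
    -netdev vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
    -device virtio-net-pci,netdev=vdpa0,mq=on
The number of queue pairs is then probed from the device config space
instead of being specified on the command line.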
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/virtio-net.c | 3 +-
net/vhost-vdpa.c | 98 ++++++++++++++++++++++++++++++++++++++++-----
2 files changed, 91 insertions(+), 10 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 5074b521cf..2c2ed98c0b 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3370,7 +3370,8 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
- /* Figure out the datapath queue pairs since the backend could
+ /*
+ * Figure out the datapath queue pairs since the backend could
* provide control queue via peers as well.
*/
if (n->nic_conf.peers.queues) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index cc11b2ec40..048344b4bc 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -18,6 +18,7 @@
#include "qemu/error-report.h"
#include "qemu/option.h"
#include "qapi/error.h"
+#include <linux/vhost.h>
#include <sys/ioctl.h>
#include <err.h>
#include "standard-headers/linux/virtio_net.h"
@@ -52,6 +53,8 @@ const int vdpa_feature_bits[] = {
VIRTIO_NET_F_HOST_UFO,
VIRTIO_NET_F_MRG_RXBUF,
VIRTIO_NET_F_MTU,
+ VIRTIO_NET_F_MQ,
+ VIRTIO_NET_F_CTRL_VQ,
VIRTIO_F_IOMMU_PLATFORM,
VIRTIO_F_RING_PACKED,
VIRTIO_NET_F_RSS,
@@ -82,7 +85,8 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
return ret;
}
-static int vhost_vdpa_add(NetClientState *ncs, void *be)
+static int vhost_vdpa_add(NetClientState *ncs, void *be, int qp_index,
+ int nvqs)
{
VhostNetOptions options;
struct vhost_net *net = NULL;
@@ -95,7 +99,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
options.net_backend = ncs;
options.opaque = be;
options.busyloop_timeout = 0;
- options.nvqs = 2;
+ options.nvqs = nvqs;
net = vhost_net_init(&options);
if (!net) {
@@ -159,18 +163,28 @@ static NetClientInfo net_vhost_vdpa_info = {
static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
const char *device,
const char *name,
- int vdpa_device_fd)
+ int vdpa_device_fd,
+ int qp_index,
+ int nvqs,
+ bool is_datapath)
{
NetClientState *nc = NULL;
VhostVDPAState *s;
int ret = 0;
assert(name);
- nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
+ if (is_datapath) {
+ nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
+ name);
+ } else {
+ nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
+ device, name);
+ }
snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
s = DO_UPCAST(VhostVDPAState, nc, nc);
s->vhost_vdpa.device_fd = vdpa_device_fd;
- ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
+ s->vhost_vdpa.index = qp_index;
+ ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, qp_index, nvqs);
if (ret) {
qemu_del_net_client(nc);
return NULL;
@@ -196,12 +210,52 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
return 0;
}
+static int vhost_vdpa_get_max_qps(int fd, int *has_cvq, Error **errp)
+{
+ unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
+ struct vhost_vdpa_config *config;
+ __virtio16 *max_qps;
+ uint64_t features;
+ int ret;
+
+ ret = ioctl(fd, VHOST_GET_FEATURES, &features);
+ if (ret) {
+ error_setg(errp, "Fail to query features from vhost-vDPA device");
+ return ret;
+ }
+
+ if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
+ *has_cvq = 1;
+ } else {
+ *has_cvq = 0;
+ }
+
+ if (features & (1 << VIRTIO_NET_F_MQ)) {
+ config = g_malloc0(config_size + sizeof(*max_qps));
+ config->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
+ config->len = sizeof(*max_qps);
+
+ ret = ioctl(fd, VHOST_VDPA_GET_CONFIG, config);
+ if (ret) {
+ error_setg(errp, "Fail to get config from vhost-vDPA device");
+ return -ret;
+ }
+
+ max_qps = (__virtio16 *)&config->buf;
+
+ return lduw_le_p(max_qps);
+ }
+
+ return 1;
+}
+
int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
NetClientState *peer, Error **errp)
{
const NetdevVhostVDPAOptions *opts;
int vdpa_device_fd;
- NetClientState *nc;
+ NetClientState **ncs, *nc;
+ int qps, i, has_cvq = 0;
assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
opts = &netdev->u.vhost_vdpa;
@@ -216,11 +270,37 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
return -errno;
}
- nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
- if (!nc) {
+ qps = vhost_vdpa_get_max_qps(vdpa_device_fd, &has_cvq, errp);
+ if (qps < 0) {
qemu_close(vdpa_device_fd);
- return -1;
+ return qps;
+ }
+
+ ncs = g_malloc0(sizeof(*ncs) * qps);
+
+ for (i = 0; i < qps; i++) {
+ ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+ vdpa_device_fd, i, 2, true);
+ if (!ncs[i])
+ goto err;
}
+ if (has_cvq) {
+ nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
+ vdpa_device_fd, i, 1, false);
+ if (!nc)
+ goto err;
+ }
+
+ g_free(ncs);
return 0;
+
+err:
+ if (i) {
+ qemu_del_net_client(ncs[0]);
+ }
+ qemu_close(vdpa_device_fd);
+ g_free(ncs);
+
+ return -1;
}
--
2.25.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH 00/18] vhost-vDPA multiqueue
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
` (17 preceding siblings ...)
2021-06-21 4:16 ` [PATCH 18/18] vhost-vdpa: multiqueue support Jason Wang
@ 2021-06-21 4:33 ` no-reply
18 siblings, 0 replies; 51+ messages in thread
From: no-reply @ 2021-06-21 4:33 UTC (permalink / raw)
To: jasowang; +Cc: lulu, mst, jasowang, qemu-devel, eperezma, elic, lingshan.zhu
Patchew URL: https://patchew.org/QEMU/20210621041650.5826-1-jasowang@redhat.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Type: series
Message-id: 20210621041650.5826-1-jasowang@redhat.com
Subject: [PATCH 00/18] vhost-vDPA multiqueue
=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
* [new tag] patchew/20210621041650.5826-1-jasowang@redhat.com -> patchew/20210621041650.5826-1-jasowang@redhat.com
Switched to a new branch 'test'
f8fa9a3 vhost-vdpa: multiqueue support
8f59a43 virtio-net: vhost control virtqueue support
ec75ba0 virito-net: use "qps" instead of "queues" when possible
2bd3467 vhost-net: control virtqueue support
8276601 net: introduce control client
816e0a6 vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
3033ddd vhost-vdpa: prepare for the multiqueue support
d3460f2 vhost-vdpa: classify one time request
3f6db79 vhost-vdpa: open device fd in net_init_vhost_vdpa()
7596feb vhost-vdpa: remove the unncessary queue_index assignment
ce7d8de vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
a8bc70a vhost-vdpa: tweak the error label in vhost_vdpa_add()
263c865 vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
32b4c0e vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
fbc405c vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
7a12106 vhost_net: do not assume nvqs is always 2
1a9f738 vhost: use unsigned int for nvqs
5dff074 vhost_net: remove the meaningless assignment in vhost_net_start_one()
=== OUTPUT BEGIN ===
1/18 Checking commit 5dff074a8095 (vhost_net: remove the meaningless assignment in vhost_net_start_one())
2/18 Checking commit 1a9f7380e8f0 (vhost: use unsigned int for nvqs)
3/18 Checking commit 7a12106db152 (vhost_net: do not assume nvqs is always 2)
4/18 Checking commit fbc405c2ccec (vhost-vdpa: remove the unnecessary check in vhost_vdpa_add())
5/18 Checking commit 32b4c0e1f95c (vhost-vdpa: don't cleanup twice in vhost_vdpa_add())
6/18 Checking commit 263c8652e706 (vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add())
7/18 Checking commit a8bc70a72c41 (vhost-vdpa: tweak the error label in vhost_vdpa_add())
8/18 Checking commit ce7d8de36520 (vhost-vdpa: fix the wrong assertion in vhost_vdpa_init())
9/18 Checking commit 7596feb74e23 (vhost-vdpa: remove the unncessary queue_index assignment)
10/18 Checking commit 3f6db79bbb54 (vhost-vdpa: open device fd in net_init_vhost_vdpa())
11/18 Checking commit d3460f2b8886 (vhost-vdpa: classify one time request)
12/18 Checking commit 3033ddd3df9e (vhost-vdpa: prepare for the multiqueue support)
13/18 Checking commit 816e0a62ed3b (vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *)
14/18 Checking commit 8276601ad80d (net: introduce control client)
15/18 Checking commit 2bd3467c7183 (vhost-net: control virtqueue support)
16/18 Checking commit ec75ba0c5758 (virito-net: use "qps" instead of "queues" when possible)
WARNING: Block comments use a leading /* on a separate line
#294: FILE: hw/net/virtio-net.c:1420:
+ /* stop the backend before changing the number of qps to avoid handling a
total: 0 errors, 1 warnings, 457 lines checked
Patch 16/18 has style problems, please review. If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
17/18 Checking commit 8f59a439ccf2 (virtio-net: vhost control virtqueue support)
WARNING: Block comments use a leading /* on a separate line
#55: FILE: hw/net/virtio-net.c:3373:
+ /* Figure out the datapath queue pairs since the backend could
total: 0 errors, 1 warnings, 51 lines checked
Patch 17/18 has style problems, please review. If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
18/18 Checking commit f8fa9a3e2c56 (vhost-vdpa: multiqueue support)
ERROR: braces {} are necessary for all arms of this statement
#176: FILE: net/vhost-vdpa.c:284:
+ if (!ncs[i])
[...]
ERROR: braces {} are necessary for all arms of this statement
#183: FILE: net/vhost-vdpa.c:291:
+ if (!nc)
[...]
total: 2 errors, 0 warnings, 165 lines checked
Patch 18/18 has style problems, please review. If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
=== OUTPUT END ===
Test command exited with code: 1
The full log is available at
http://patchew.org/logs/20210621041650.5826-1-jasowang@redhat.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one()
2021-06-21 4:16 ` [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
@ 2021-06-21 11:45 ` Eli Cohen
2021-06-24 7:42 ` Jason Wang
0 siblings, 1 reply; 51+ messages in thread
From: Eli Cohen @ 2021-06-21 11:45 UTC (permalink / raw)
To: Jason Wang; +Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On Mon, Jun 21, 2021 at 12:16:33PM +0800, Jason Wang wrote:
> The nvqs and vqs has been initialized during vhost_net_init() and is
I suggest "nvqs and vqs have been initialized during vhost_net_init() and
are not..."
Other than that
Reviewed-by: Eli Cohen <elic@nvidia.com>
> not expected to change during the life cycle of vhost_net
> structure. So this patch removes the meaningless assignment.
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/net/vhost_net.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index 44c1ed92dc..6bd4184f96 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -238,9 +238,6 @@ static int vhost_net_start_one(struct vhost_net *net,
> struct vhost_vring_file file = { };
> int r;
>
> - net->dev.nvqs = 2;
> - net->dev.vqs = net->vqs;
> -
> r = vhost_dev_enable_notifiers(&net->dev, dev);
> if (r < 0) {
> goto fail_notifiers;
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 02/18] vhost: use unsigned int for nvqs
2021-06-21 4:16 ` [PATCH 02/18] vhost: use unsigned int for nvqs Jason Wang
@ 2021-06-21 11:46 ` Eli Cohen
0 siblings, 0 replies; 51+ messages in thread
From: Eli Cohen @ 2021-06-21 11:46 UTC (permalink / raw)
To: Jason Wang; +Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On Mon, Jun 21, 2021 at 12:16:34PM +0800, Jason Wang wrote:
> Switch to using unsigned int for nvqs since it's not expected to be
> negative.
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
> ---
> include/hw/virtio/vhost.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index 21a9a52088..ddd7d3d594 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -71,7 +71,7 @@ struct vhost_dev {
> int n_tmp_sections;
> MemoryRegionSection *tmp_sections;
> struct vhost_virtqueue *vqs;
> - int nvqs;
> + unsigned int nvqs;
> /* the first virtqueue which would be used by this vhost dev */
> int vq_index;
> /* if non-zero, minimum required value for max_queues */
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 03/18] vhost_net: do not assume nvqs is always 2
2021-06-21 4:16 ` [PATCH 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
@ 2021-06-23 14:49 ` Stefano Garzarella
2021-06-24 6:22 ` Eli Cohen
1 sibling, 0 replies; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 14:49 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:35PM +0800, Jason Wang wrote:
>This patch switches to initializing dev.nvqs from the VhostNetOptions
>instead of assuming it was 2. This is useful for implementing control
>virtqueue support which will be a single vhost_net structure with a
>single cvq.
>
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> hw/net/vhost_net.c | 2 +-
> include/net/vhost_net.h | 1 +
> net/tap.c | 1 +
> net/vhost-user.c | 1 +
> net/vhost-vdpa.c | 1 +
> 5 files changed, 5 insertions(+), 1 deletion(-)
>
>diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
>index 6bd4184f96..ef1370bd92 100644
>--- a/hw/net/vhost_net.c
>+++ b/hw/net/vhost_net.c
>@@ -163,9 +163,9 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> goto fail;
> }
> net->nc = options->net_backend;
>+ net->dev.nvqs = options->nvqs;
>
> net->dev.max_queues = 1;
>- net->dev.nvqs = 2;
> net->dev.vqs = net->vqs;
>
> if (backend_kernel) {
>diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
>index 172b0051d8..fba40cf695 100644
>--- a/include/net/vhost_net.h
>+++ b/include/net/vhost_net.h
>@@ -14,6 +14,7 @@ typedef struct VhostNetOptions {
> VhostBackendType backend_type;
> NetClientState *net_backend;
> uint32_t busyloop_timeout;
>+ unsigned int nvqs;
> void *opaque;
> } VhostNetOptions;
>
>diff --git a/net/tap.c b/net/tap.c
>index f5686bbf77..f716be3e3f 100644
>--- a/net/tap.c
>+++ b/net/tap.c
>@@ -749,6 +749,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
> qemu_set_nonblock(vhostfd);
> }
> options.opaque = (void *)(uintptr_t)vhostfd;
>+ options.nvqs = 2;
>
> s->vhost_net = vhost_net_init(&options);
> if (!s->vhost_net) {
>diff --git a/net/vhost-user.c b/net/vhost-user.c
>index ffbd94d944..b93918c5a4 100644
>--- a/net/vhost-user.c
>+++ b/net/vhost-user.c
>@@ -85,6 +85,7 @@ static int vhost_user_start(int queues, NetClientState *ncs[],
> options.net_backend = ncs[i];
> options.opaque = be;
> options.busyloop_timeout = 0;
>+ options.nvqs = 2;
> net = vhost_net_init(&options);
> if (!net) {
> error_report("failed to init vhost_net for queue %d", i);
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index 19187dce8c..18b45ad777 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -105,6 +105,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> options.net_backend = ncs;
> options.opaque = be;
> options.busyloop_timeout = 0;
>+ options.nvqs = 2;
>
> net = vhost_net_init(&options);
> if (!net) {
>--
>2.25.1
>
>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
2021-06-21 4:16 ` [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
@ 2021-06-23 14:53 ` Stefano Garzarella
2021-06-24 6:38 ` Eli Cohen
2021-06-24 7:46 ` Jason Wang
0 siblings, 2 replies; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 14:53 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:36PM +0800, Jason Wang wrote:
>The VhostVDPAState is just allocated by qemu_new_net_client() via
>g_malloc0() in net_vhost_vdpa_init(). So s->vhost_net is NULL for
>sure; let's remove this unnecessary check in vhost_vdpa_add().
>
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> net/vhost-vdpa.c | 4 ----
> 1 file changed, 4 deletions(-)
>
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index 18b45ad777..728e63ff54 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -112,10 +112,6 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> error_report("failed to init vhost_net for queue");
> goto err;
> }
>- if (s->vhost_net) {
>- vhost_net_cleanup(s->vhost_net);
>- g_free(s->vhost_net);
>- }
Maybe we can add an assert() to discover future issues, but I don't have
a strong opinion.
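(E.g. something like
    assert(s->vhost_net == NULL);
in place of the removed check; just a thought.)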
It is fine:
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
> s->vhost_net = net;
> ret = vhost_vdpa_net_check_device_id(net);
> if (ret) {
>--
>2.25.1
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 05/18] vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
2021-06-21 4:16 ` [PATCH 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
@ 2021-06-23 14:56 ` Stefano Garzarella
0 siblings, 0 replies; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 14:56 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:37PM +0800, Jason Wang wrote:
>The previous vhost_net_cleanup() is sufficient for freeing; calling
>vhost_vdpa_del() in this case will lead to an extra round of freeing. Note
>that this kind of "double free" is safe since vhost_dev_cleanup() zeroes
>the whole structure.
>
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> net/vhost-vdpa.c | 11 -----------
> 1 file changed, 11 deletions(-)
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index 728e63ff54..f5689a7c32 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -82,16 +82,6 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
> return ret;
> }
>
>-static void vhost_vdpa_del(NetClientState *ncs)
>-{
>- VhostVDPAState *s;
>- assert(ncs->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>- s = DO_UPCAST(VhostVDPAState, nc, ncs);
>- if (s->vhost_net) {
>- vhost_net_cleanup(s->vhost_net);
>- }
>-}
>-
> static int vhost_vdpa_add(NetClientState *ncs, void *be)
> {
> VhostNetOptions options;
>@@ -122,7 +112,6 @@ err:
> if (net) {
> vhost_net_cleanup(net);
> }
>- vhost_vdpa_del(ncs);
> return -1;
> }
>
>--
>2.25.1
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
2021-06-21 4:16 ` [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
@ 2021-06-23 15:00 ` Stefano Garzarella
2021-06-24 7:06 ` Eli Cohen
2021-06-24 7:14 ` Eli Cohen
0 siblings, 2 replies; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 15:00 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:38PM +0800, Jason Wang wrote:
>Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> net/vhost-vdpa.c | 1 +
> 1 file changed, 1 insertion(+)
>
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index f5689a7c32..21f09c546f 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> err:
> if (net) {
> vhost_net_cleanup(net);
>+ g_free(net);
> }
> return -1;
> }
>--
>2.25.1
>
>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 07/18] vhost-vdpa: tweak the error label in vhost_vdpa_add()
2021-06-21 4:16 ` [PATCH 07/18] vhost-vdpa: tweak the error label " Jason Wang
@ 2021-06-23 15:03 ` Stefano Garzarella
2021-07-06 8:03 ` Jason Wang
0 siblings, 1 reply; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 15:03 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:39PM +0800, Jason Wang wrote:
>Introduce a new error label to avoid the unnecessary checking of the net
>pointer.
>
>Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> net/vhost-vdpa.c | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
>
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index 21f09c546f..0da7bc347a 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -100,19 +100,18 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> net = vhost_net_init(&options);
> if (!net) {
> error_report("failed to init vhost_net for queue");
>- goto err;
>+ goto err_init;
> }
> s->vhost_net = net;
> ret = vhost_vdpa_net_check_device_id(net);
> if (ret) {
>- goto err;
>+ goto err_check;
> }
> return 0;
>-err:
>- if (net) {
>- vhost_net_cleanup(net);
>- g_free(net);
>- }
>+err_check:
>+ vhost_net_cleanup(net);
>+ g_free(net);
Should we set s->vhost_net to NULL to avoid use after free?
Then we should also remove the `assert(s->vhost_net)` in
net_vhost_vdpa_init() since we can fail.
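Something along these lines (an untested sketch of the suggestion):
    err_check:
        vhost_net_cleanup(net);
        g_free(net);
        s->vhost_net = NULL;
    err_init:
        return -1;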
>+err_init:
> return -1;
> }
>
>--
>2.25.1
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
2021-06-21 4:16 ` [PATCH 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
@ 2021-06-23 15:04 ` Stefano Garzarella
0 siblings, 0 replies; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 15:04 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:40PM +0800, Jason Wang wrote:
>Vhost_vdpa_add() can fail for various reasons, so asserting that it
>succeeds is wrong. Instead, we should free the NetClientState and
>propagate the error to the caller.
>
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> net/vhost-vdpa.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index 0da7bc347a..87b181a74e 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -174,7 +174,10 @@ static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
> }
> s->vhost_vdpa.device_fd = vdpa_device_fd;
> ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
>- assert(s->vhost_net);
>+ if (ret) {
>+ qemu_close(vdpa_device_fd);
>+ qemu_del_net_client(nc);
>+ }
Okay, I see now :-)
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
> return ret;
> }
>
>--
>2.25.1
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 09/18] vhost-vdpa: remove the unncessary queue_index assignment
2021-06-21 4:16 ` [PATCH 09/18] vhost-vdpa: remove the unncessary queue_index assignment Jason Wang
@ 2021-06-23 15:05 ` Stefano Garzarella
0 siblings, 0 replies; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 15:05 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:41PM +0800, Jason Wang wrote:
>The queue_index of NetClientState will be assigned in set_netdev()
>afterwards, so trying to set it in net_vhost_vdpa_init() is meaningless. This
>patch removes the assignment.
>
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> net/vhost-vdpa.c | 1 -
> 1 file changed, 1 deletion(-)
>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index 87b181a74e..572aed4ca2 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -166,7 +166,6 @@ static int net_vhost_vdpa_init(NetClientState
>*peer, const char *device,
> assert(name);
> nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
> name);
> snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
>- nc->queue_index = 0;
> s = DO_UPCAST(VhostVDPAState, nc, nc);
> vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
> if (vdpa_device_fd == -1) {
>--
>2.25.1
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa()
2021-06-21 4:16 ` [PATCH 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
@ 2021-06-23 15:07 ` Stefano Garzarella
0 siblings, 0 replies; 51+ messages in thread
From: Stefano Garzarella @ 2021-06-23 15:07 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:42PM +0800, Jason Wang wrote:
>This patch switches to opening the device fd in net_init_vhost_vdpa(). This
>is used to prepare for the multiqueue support.
>
>Signed-off-by: Jason Wang <jasowang@redhat.com>
>---
> net/vhost-vdpa.c | 23 +++++++++++++++--------
> 1 file changed, 15 insertions(+), 8 deletions(-)
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>
>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>index 572aed4ca2..e63a54a938 100644
>--- a/net/vhost-vdpa.c
>+++ b/net/vhost-vdpa.c
>@@ -157,24 +157,19 @@ static NetClientInfo net_vhost_vdpa_info = {
> };
>
> static int net_vhost_vdpa_init(NetClientState *peer, const char *device,
>- const char *name, const char *vhostdev)
>+ const char *name, int vdpa_device_fd)
> {
> NetClientState *nc = NULL;
> VhostVDPAState *s;
>- int vdpa_device_fd = -1;
> int ret = 0;
> assert(name);
> nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
> snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
> s = DO_UPCAST(VhostVDPAState, nc, nc);
>- vdpa_device_fd = qemu_open_old(vhostdev, O_RDWR);
>- if (vdpa_device_fd == -1) {
>- return -errno;
>- }
>+
> s->vhost_vdpa.device_fd = vdpa_device_fd;
> ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
> if (ret) {
>- qemu_close(vdpa_device_fd);
> qemu_del_net_client(nc);
> }
> return ret;
>@@ -202,6 +197,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> NetClientState *peer, Error **errp)
> {
> const NetdevVhostVDPAOptions *opts;
>+ int vdpa_device_fd, ret;
>
> assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> opts = &netdev->u.vhost_vdpa;
>@@ -210,5 +206,16 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> (char *)name, errp)) {
> return -1;
> }
>- return net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, opts->vhostdev);
>+
>+ vdpa_device_fd = qemu_open_old(opts->vhostdev, O_RDWR);
>+ if (vdpa_device_fd == -1) {
>+ return -errno;
>+ }
>+
>+ ret = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
>+ if (ret) {
>+ qemu_close(vdpa_device_fd);
>+ }
>+
>+ return ret;
> }
>--
>2.25.1
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 03/18] vhost_net: do not assume nvqs is always 2
2021-06-21 4:16 ` [PATCH 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
2021-06-23 14:49 ` Stefano Garzarella
@ 2021-06-24 6:22 ` Eli Cohen
2021-06-24 7:42 ` Jason Wang
1 sibling, 1 reply; 51+ messages in thread
From: Eli Cohen @ 2021-06-24 6:22 UTC (permalink / raw)
To: Jason Wang; +Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On Mon, Jun 21, 2021 at 12:16:35PM +0800, Jason Wang wrote:
> This patch switches to initializing dev.nvqs from the VhostNetOptions
> instead of assuming it was 2. This is useful for implementing control
> virtqueue support which will be a single vhost_net structure with a
> single cvq.
Maybe worth mentioning in the changelog that nvqs is still set to 2 for
all users and this patch does not change functionality.
Reviewed-by: Eli Cohen <elic@nvidia.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/net/vhost_net.c | 2 +-
> include/net/vhost_net.h | 1 +
> net/tap.c | 1 +
> net/vhost-user.c | 1 +
> net/vhost-vdpa.c | 1 +
> 5 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index 6bd4184f96..ef1370bd92 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -163,9 +163,9 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> goto fail;
> }
> net->nc = options->net_backend;
> + net->dev.nvqs = options->nvqs;
>
> net->dev.max_queues = 1;
> - net->dev.nvqs = 2;
> net->dev.vqs = net->vqs;
>
> if (backend_kernel) {
> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> index 172b0051d8..fba40cf695 100644
> --- a/include/net/vhost_net.h
> +++ b/include/net/vhost_net.h
> @@ -14,6 +14,7 @@ typedef struct VhostNetOptions {
> VhostBackendType backend_type;
> NetClientState *net_backend;
> uint32_t busyloop_timeout;
> + unsigned int nvqs;
> void *opaque;
> } VhostNetOptions;
>
> diff --git a/net/tap.c b/net/tap.c
> index f5686bbf77..f716be3e3f 100644
> --- a/net/tap.c
> +++ b/net/tap.c
> @@ -749,6 +749,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
> qemu_set_nonblock(vhostfd);
> }
> options.opaque = (void *)(uintptr_t)vhostfd;
> + options.nvqs = 2;
>
> s->vhost_net = vhost_net_init(&options);
> if (!s->vhost_net) {
> diff --git a/net/vhost-user.c b/net/vhost-user.c
> index ffbd94d944..b93918c5a4 100644
> --- a/net/vhost-user.c
> +++ b/net/vhost-user.c
> @@ -85,6 +85,7 @@ static int vhost_user_start(int queues, NetClientState *ncs[],
> options.net_backend = ncs[i];
> options.opaque = be;
> options.busyloop_timeout = 0;
> + options.nvqs = 2;
> net = vhost_net_init(&options);
> if (!net) {
> error_report("failed to init vhost_net for queue %d", i);
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 19187dce8c..18b45ad777 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -105,6 +105,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> options.net_backend = ncs;
> options.opaque = be;
> options.busyloop_timeout = 0;
> + options.nvqs = 2;
>
> net = vhost_net_init(&options);
> if (!net) {
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
2021-06-23 14:53 ` Stefano Garzarella
@ 2021-06-24 6:38 ` Eli Cohen
2021-06-24 7:46 ` Jason Wang
1 sibling, 0 replies; 51+ messages in thread
From: Eli Cohen @ 2021-06-24 6:38 UTC (permalink / raw)
To: Stefano Garzarella
Cc: lulu, mst, Jason Wang, qemu-devel, eperezma, lingshan.zhu
On Mon, Jun 21, 2021 at 12:16:36PM +0800, Jason Wang wrote:
> The VhostVDPAState is just allocated by qemu_new_net_client() via
> g_malloc0() in net_vhost_vdpa_init(). So s->vhost_net is NULL for
> sure; let's remove this unnecessary check in vhost_vdpa_add().
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
> ---
> net/vhost-vdpa.c | 4 ----
> 1 file changed, 4 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 18b45ad777..728e63ff54 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -112,10 +112,6 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> error_report("failed to init vhost_net for queue");
> goto err;
> }
> - if (s->vhost_net) {
> - vhost_net_cleanup(s->vhost_net);
> - g_free(s->vhost_net);
> - }
> s->vhost_net = net;
> ret = vhost_vdpa_net_check_device_id(net);
> if (ret) {
> --
> 2.25.1
>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
2021-06-23 15:00 ` Stefano Garzarella
@ 2021-06-24 7:06 ` Eli Cohen
2021-06-24 7:10 ` Jason Wang
2021-06-24 7:14 ` Eli Cohen
1 sibling, 1 reply; 51+ messages in thread
From: Eli Cohen @ 2021-06-24 7:06 UTC (permalink / raw)
To: Stefano Garzarella
Cc: lulu, mst, Jason Wang, qemu-devel, eperezma, lingshan.zhu
On Wed, Jun 23, 2021 at 05:00:16PM +0200, Stefano Garzarella wrote:
> On Mon, Jun 21, 2021 at 12:16:38PM +0800, Jason Wang wrote:
> > Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > ---
> > net/vhost-vdpa.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index f5689a7c32..21f09c546f 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> > err:
> > if (net) {
This check is redundant. net is not null.
> > vhost_net_cleanup(net);
> > + g_free(net);
> > }
> > return -1;
> > }
> > --
> > 2.25.1
> >
> >
>
> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
2021-06-24 7:06 ` Eli Cohen
@ 2021-06-24 7:10 ` Jason Wang
2021-06-24 7:32 ` Eli Cohen
0 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-06-24 7:10 UTC (permalink / raw)
To: Eli Cohen, Stefano Garzarella
Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On 2021/6/24 3:06 PM, Eli Cohen wrote:
> On Wed, Jun 23, 2021 at 05:00:16PM +0200, Stefano Garzarella wrote:
>> On Mon, Jun 21, 2021 at 12:16:38PM +0800, Jason Wang wrote:
>>> Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> net/vhost-vdpa.c | 1 +
>>> 1 file changed, 1 insertion(+)
>>>
>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>>> index f5689a7c32..21f09c546f 100644
>>> --- a/net/vhost-vdpa.c
>>> +++ b/net/vhost-vdpa.c
>>> @@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
>>> err:
>>> if (net) {
> This check is redundant. net is not null.
Actually, it can:
net = vhost_net_init(&options);
if (!net) {
error_report("failed to init vhost_net for queue");
goto err;
}
Thanks
>>> vhost_net_cleanup(net);
>>> + g_free(net);
>>> }
>>> return -1;
>>> }
>>> --
>>> 2.25.1
>>>
>>>
>> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
2021-06-23 15:00 ` Stefano Garzarella
2021-06-24 7:06 ` Eli Cohen
@ 2021-06-24 7:14 ` Eli Cohen
2021-06-24 7:41 ` Jason Wang
1 sibling, 1 reply; 51+ messages in thread
From: Eli Cohen @ 2021-06-24 7:14 UTC (permalink / raw)
To: Stefano Garzarella
Cc: lulu, mst, Jason Wang, qemu-devel, eperezma, lingshan.zhu
On Wed, Jun 23, 2021 at 05:00:16PM +0200, Stefano Garzarella wrote:
> On Mon, Jun 21, 2021 at 12:16:38PM +0800, Jason Wang wrote:
> > Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > ---
> > net/vhost-vdpa.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index f5689a7c32..21f09c546f 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> > err:
> > if (net) {
> > vhost_net_cleanup(net);
> > + g_free(net);
Shouldn't this call be part of the implementation of
vhost_net_cleanup()?
> > }
> > return -1;
> > }
> > --
> > 2.25.1
> >
> >
>
> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
2021-06-24 7:10 ` Jason Wang
@ 2021-06-24 7:32 ` Eli Cohen
0 siblings, 0 replies; 51+ messages in thread
From: Eli Cohen @ 2021-06-24 7:32 UTC (permalink / raw)
To: Jason Wang
Cc: lulu, mst, qemu-devel, eperezma, lingshan.zhu, Stefano Garzarella
On Thu, Jun 24, 2021 at 03:10:46PM +0800, Jason Wang wrote:
>
> On 2021/6/24 3:06 PM, Eli Cohen wrote:
> > On Wed, Jun 23, 2021 at 05:00:16PM +0200, Stefano Garzarella wrote:
> > > On Mon, Jun 21, 2021 at 12:16:38PM +0800, Jason Wang wrote:
> > > > Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
> > > > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > ---
> > > > net/vhost-vdpa.c | 1 +
> > > > 1 file changed, 1 insertion(+)
> > > >
> > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > index f5689a7c32..21f09c546f 100644
> > > > --- a/net/vhost-vdpa.c
> > > > +++ b/net/vhost-vdpa.c
> > > > @@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> > > > err:
> > > > if (net) {
> > This check is redundant. net is not null.
>
>
> Actually, it can:
>
> net = vhost_net_init(&options);
> if (!net) {
> error_report("failed to init vhost_net for queue");
> goto err;
> }
Hmmm... right.
>
> Thanks
>
>
> > > > vhost_net_cleanup(net);
> > > > + g_free(net);
> > > > }
> > > > return -1;
> > > > }
> > > > --
> > > > 2.25.1
> > > >
> > > >
> > > Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
> > >
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
2021-06-24 7:14 ` Eli Cohen
@ 2021-06-24 7:41 ` Jason Wang
0 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-24 7:41 UTC (permalink / raw)
To: Eli Cohen, Stefano Garzarella
Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On 2021/6/24 3:14 PM, Eli Cohen wrote:
> On Wed, Jun 23, 2021 at 05:00:16PM +0200, Stefano Garzarella wrote:
>> On Mon, Jun 21, 2021 at 12:16:38PM +0800, Jason Wang wrote:
>>> Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> net/vhost-vdpa.c | 1 +
>>> 1 file changed, 1 insertion(+)
>>>
>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>>> index f5689a7c32..21f09c546f 100644
>>> --- a/net/vhost-vdpa.c
>>> +++ b/net/vhost-vdpa.c
>>> @@ -111,6 +111,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
>>> err:
>>> if (net) {
>>> vhost_net_cleanup(net);
>>> + g_free(net);
> Shouldn't this call be part of the implementation of
> vhost_net_cleanup()?
Not sure; at least there's one user that doesn't do the free (see
vhost_user_stop()).
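For reference, the pattern there looks roughly like this (a schematic
sketch, not a verbatim quote of net/vhost-user.c):
    if (s->vhost_net) {
        /* save acked features so a restart can reuse them */
        uint64_t features = vhost_net_get_acked_features(s->vhost_net);
        if (features) {
            s->acked_features = features;
        }
        vhost_net_cleanup(s->vhost_net);
        /* no g_free() here: the allocation is released elsewhere,
         * e.g. when the backend reconnects */
    }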
Thanks
>
>>> }
>>> return -1;
>>> }
>>> --
>>> 2.25.1
>>>
>>>
>> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 03/18] vhost_net: do not assume nvqs is always 2
2021-06-24 6:22 ` Eli Cohen
@ 2021-06-24 7:42 ` Jason Wang
0 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-24 7:42 UTC (permalink / raw)
To: Eli Cohen; +Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On 2021/6/24 2:22 PM, Eli Cohen wrote:
> On Mon, Jun 21, 2021 at 12:16:35PM +0800, Jason Wang wrote:
>> This patch switches to initializing dev.nvqs from the VhostNetOptions
>> instead of assuming it was 2. This is useful for implementing control
>> virtqueue support which will be a single vhost_net structure with a
>> single cvq.
> Maybe worth mentioning in the changelog that nvqs is still set to 2 for
> all users and this patch does not change functionality.
>
> Reviewed-by: Eli Cohen <elic@nvidia.com>
Ok, will do that in V2.
Thanks
>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/net/vhost_net.c | 2 +-
>> include/net/vhost_net.h | 1 +
>> net/tap.c | 1 +
>> net/vhost-user.c | 1 +
>> net/vhost-vdpa.c | 1 +
>> 5 files changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
>> index 6bd4184f96..ef1370bd92 100644
>> --- a/hw/net/vhost_net.c
>> +++ b/hw/net/vhost_net.c
>> @@ -163,9 +163,9 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
>> goto fail;
>> }
>> net->nc = options->net_backend;
>> + net->dev.nvqs = options->nvqs;
>>
>> net->dev.max_queues = 1;
>> - net->dev.nvqs = 2;
>> net->dev.vqs = net->vqs;
>>
>> if (backend_kernel) {
>> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
>> index 172b0051d8..fba40cf695 100644
>> --- a/include/net/vhost_net.h
>> +++ b/include/net/vhost_net.h
>> @@ -14,6 +14,7 @@ typedef struct VhostNetOptions {
>> VhostBackendType backend_type;
>> NetClientState *net_backend;
>> uint32_t busyloop_timeout;
>> + unsigned int nvqs;
>> void *opaque;
>> } VhostNetOptions;
>>
>> diff --git a/net/tap.c b/net/tap.c
>> index f5686bbf77..f716be3e3f 100644
>> --- a/net/tap.c
>> +++ b/net/tap.c
>> @@ -749,6 +749,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
>> qemu_set_nonblock(vhostfd);
>> }
>> options.opaque = (void *)(uintptr_t)vhostfd;
>> + options.nvqs = 2;
>>
>> s->vhost_net = vhost_net_init(&options);
>> if (!s->vhost_net) {
>> diff --git a/net/vhost-user.c b/net/vhost-user.c
>> index ffbd94d944..b93918c5a4 100644
>> --- a/net/vhost-user.c
>> +++ b/net/vhost-user.c
>> @@ -85,6 +85,7 @@ static int vhost_user_start(int queues, NetClientState *ncs[],
>> options.net_backend = ncs[i];
>> options.opaque = be;
>> options.busyloop_timeout = 0;
>> + options.nvqs = 2;
>> net = vhost_net_init(&options);
>> if (!net) {
>> error_report("failed to init vhost_net for queue %d", i);
>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>> index 19187dce8c..18b45ad777 100644
>> --- a/net/vhost-vdpa.c
>> +++ b/net/vhost-vdpa.c
>> @@ -105,6 +105,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
>> options.net_backend = ncs;
>> options.opaque = be;
>> options.busyloop_timeout = 0;
>> + options.nvqs = 2;
>>
>> net = vhost_net_init(&options);
>> if (!net) {
>> --
>> 2.25.1
>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 15/18] vhost-net: control virtqueue support
2021-06-21 4:16 ` [PATCH 15/18] vhost-net: control virtqueue support Jason Wang
@ 2021-06-24 7:42 ` Eli Cohen
2021-06-24 7:44 ` Jason Wang
2021-06-30 17:33 ` Eugenio Perez Martin
1 sibling, 1 reply; 51+ messages in thread
From: Eli Cohen @ 2021-06-24 7:42 UTC (permalink / raw)
To: Jason Wang; +Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On Mon, Jun 21, 2021 at 12:16:47PM +0800, Jason Wang wrote:
> In the past we assumed there was no cvq; this is not true when we need
> control virtqueue support for vhost-user backends. So this patch
> implements the control virtqueue support for vhost-net. As with the datapath,
> the control virtqueue is also required to be coupled with the
> NetClientState. The vhost_net_start/stop() are tweaked to accept the
> number of datapath queue pairs plus the number of control
> virtqueues for us to start and stop the vhost device.
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/net/vhost_net.c | 43 ++++++++++++++++++++++++++++++-----------
> hw/net/virtio-net.c | 4 ++--
> include/net/vhost_net.h | 6 ++++--
> 3 files changed, 38 insertions(+), 15 deletions(-)
>
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index ef1370bd92..fe2fd7e3d5 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -311,11 +311,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
> }
>
> int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> - int total_queues)
> + int data_qps, int cvq)
> {
> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> VirtioBusState *vbus = VIRTIO_BUS(qbus);
> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> + int total_notifiers = data_qps * 2 + cvq;
> + VirtIONet *n = VIRTIO_NET(dev);
> + int nvhosts = data_qps + cvq;
> struct vhost_net *net;
> int r, e, i;
> NetClientState *peer;
> @@ -325,9 +328,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> return -ENOSYS;
> }
>
> - for (i = 0; i < total_queues; i++) {
> + for (i = 0; i < nvhosts; i++) {
> +
> + if (i < data_qps) {
> + peer = qemu_get_peer(ncs, i);
> + } else { /* Control Virtqueue */
> + peer = qemu_get_peer(ncs, n->max_qps);
> + }
>
> - peer = qemu_get_peer(ncs, i);
> net = get_vhost_net(peer);
> vhost_net_set_vq_index(net, i * 2);
>
> @@ -340,14 +348,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> }
> }
>
> - r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
> + r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
> if (r < 0) {
> error_report("Error binding guest notifier: %d", -r);
> goto err;
> }
>
> - for (i = 0; i < total_queues; i++) {
> - peer = qemu_get_peer(ncs, i);
> + for (i = 0; i < nvhosts; i++) {
> + if (i < data_qps) {
> + peer = qemu_get_peer(ncs, i);
> + } else {
> + peer = qemu_get_peer(ncs, n->max_qps);
> + }
> r = vhost_net_start_one(get_vhost_net(peer), dev);
>
> if (r < 0) {
> @@ -371,7 +383,7 @@ err_start:
> peer = qemu_get_peer(ncs , i);
> vhost_net_stop_one(get_vhost_net(peer), dev);
> }
> - e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> + e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
> if (e < 0) {
> fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
> fflush(stderr);
> @@ -381,18 +393,27 @@ err:
> }
>
> void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> - int total_queues)
> + int data_qps, int cvq)
> {
> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> VirtioBusState *vbus = VIRTIO_BUS(qbus);
> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> + VirtIONet *n = VIRTIO_NET(dev);
> + NetClientState *peer;
> + int total_notifiers = data_qps * 2 + cvq;
> + int nvhosts = data_qps + cvq;
> int i, r;
>
> - for (i = 0; i < total_queues; i++) {
> - vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
> + for (i = 0; i < nvhosts; i++) {
> + if (i < data_qps) {
> + peer = qemu_get_peer(ncs, i);
> + } else {
> + peer = qemu_get_peer(ncs, n->max_qps);
> + }
> + vhost_net_stop_one(get_vhost_net(peer), dev);
> }
>
> - r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> + r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
> if (r < 0) {
> fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
> fflush(stderr);
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index bd7958b9f0..614660274c 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> }
>
> n->vhost_started = 1;
> - r = vhost_net_start(vdev, n->nic->ncs, queues);
> + r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
> if (r < 0) {
> error_report("unable to start vhost net: %d: "
> "falling back on userspace virtio", -r);
> n->vhost_started = 0;
> }
> } else {
> - vhost_net_stop(vdev, n->nic->ncs, queues);
> + vhost_net_stop(vdev, n->nic->ncs, queues, 0);
> n->vhost_started = 0;
> }
> }
> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> index fba40cf695..e656e38af9 100644
> --- a/include/net/vhost_net.h
> +++ b/include/net/vhost_net.h
> @@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
> uint64_t vhost_net_get_max_queues(VHostNetState *net);
> struct vhost_net *vhost_net_init(VhostNetOptions *options);
>
> -int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> -void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
This breaks compilation of hw/net/vhost_net-stub.c
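(The stubs there carry the old prototypes, so they need the matching
signature change, roughly:
    int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
                        int data_qps, int cvq)
    {
        return -ENOSYS;
    }
and likewise for vhost_net_stop().)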
> +int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> + int data_qps, int cvq);
> +void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> + int data_qps, int cvq);
>
> void vhost_net_cleanup(VHostNetState *net);
>
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one()
2021-06-21 11:45 ` Eli Cohen
@ 2021-06-24 7:42 ` Jason Wang
0 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-24 7:42 UTC (permalink / raw)
To: Eli Cohen; +Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On 2021/6/21 7:45 PM, Eli Cohen wrote:
> On Mon, Jun 21, 2021 at 12:16:33PM +0800, Jason Wang wrote:
>> The nvqs and vqs has been initialized during vhost_net_init() and is
> I suggest "nvqs and vqs have been initialized during vhost_net_init() and
> are not..."
> Other than that
> Reviewed-by: Eli Cohen <elic@nvidia.com>
Will fix.
Thanks
>
>> not expected to change during the life cycle of vhost_net
>> structure. So this patch removes the meaningless assignment.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/net/vhost_net.c | 3 ---
>> 1 file changed, 3 deletions(-)
>>
>> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
>> index 44c1ed92dc..6bd4184f96 100644
>> --- a/hw/net/vhost_net.c
>> +++ b/hw/net/vhost_net.c
>> @@ -238,9 +238,6 @@ static int vhost_net_start_one(struct vhost_net *net,
>> struct vhost_vring_file file = { };
>> int r;
>>
>> - net->dev.nvqs = 2;
>> - net->dev.vqs = net->vqs;
>> -
>> r = vhost_dev_enable_notifiers(&net->dev, dev);
>> if (r < 0) {
>> goto fail_notifiers;
>> --
>> 2.25.1
>>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 15/18] vhost-net: control virtqueue support
2021-06-24 7:42 ` Eli Cohen
@ 2021-06-24 7:44 ` Jason Wang
0 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-24 7:44 UTC (permalink / raw)
To: Eli Cohen; +Cc: eperezma, lingshan.zhu, qemu-devel, lulu, mst
On 2021/6/24 3:42 PM, Eli Cohen wrote:
>>
>> -int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
>> -void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> This breaks compilation of hw/net/vhost_net-stub.c
>
Right, will fix.
Thanks
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
2021-06-23 14:53 ` Stefano Garzarella
2021-06-24 6:38 ` Eli Cohen
@ 2021-06-24 7:46 ` Jason Wang
1 sibling, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-06-24 7:46 UTC (permalink / raw)
To: Stefano Garzarella; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On 2021/6/23 10:53 PM, Stefano Garzarella wrote:
> On Mon, Jun 21, 2021 at 12:16:36PM +0800, Jason Wang wrote:
>> The VhostVDPAState is just allocated by qemu_new_net_client() via
>> g_malloc0() in net_vhost_vdpa_init(). So s->vhost_net is NULL for
>> sure; let's remove this unnecessary check in vhost_vdpa_add().
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> net/vhost-vdpa.c | 4 ----
>> 1 file changed, 4 deletions(-)
>>
>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>> index 18b45ad777..728e63ff54 100644
>> --- a/net/vhost-vdpa.c
>> +++ b/net/vhost-vdpa.c
>> @@ -112,10 +112,6 @@ static int vhost_vdpa_add(NetClientState *ncs,
>> void *be)
>> error_report("failed to init vhost_net for queue");
>> goto err;
>> }
>> - if (s->vhost_net) {
>> - vhost_net_cleanup(s->vhost_net);
>> - g_free(s->vhost_net);
>> - }
>
> Maybe we can add an assert() to discover future issues, but I don't
> have a strong opinion.
I think the assumption of qemu_new_net_client() is that it will always
succeed (see the other callers).
So I tend not to bother.
Thanks
>
> It is fine:
>
> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
>
>> s->vhost_net = net;
>> ret = vhost_vdpa_net_check_device_id(net);
>> if (ret) {
>> --
>> 2.25.1
>>
>>
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 15/18] vhost-net: control virtqueue support
2021-06-21 4:16 ` [PATCH 15/18] vhost-net: control virtqueue support Jason Wang
2021-06-24 7:42 ` Eli Cohen
@ 2021-06-30 17:33 ` Eugenio Perez Martin
2021-07-01 3:03 ` Jason Wang
1 sibling, 1 reply; 51+ messages in thread
From: Eugenio Perez Martin @ 2021-06-30 17:33 UTC (permalink / raw)
To: Jason Wang; +Cc: Eli Cohen, Cindy Lu, qemu-level, lingshan.zhu, Michael Tsirkin
On Mon, Jun 21, 2021 at 6:18 AM Jason Wang <jasowang@redhat.com> wrote:
>
> In the past we assumed there was no cvq; this is not true when we need
> control virtqueue support for vhost-user backends. So this patch
> implements the control virtqueue support for vhost-net. As with the datapath,
> the control virtqueue is also required to be coupled with the
> NetClientState. The vhost_net_start/stop() are tweaked to accept the
> number of datapath queue pairs plus the number of control
> virtqueues for us to start and stop the vhost device.
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/net/vhost_net.c | 43 ++++++++++++++++++++++++++++++-----------
> hw/net/virtio-net.c | 4 ++--
> include/net/vhost_net.h | 6 ++++--
> 3 files changed, 38 insertions(+), 15 deletions(-)
>
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index ef1370bd92..fe2fd7e3d5 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -311,11 +311,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
> }
>
> int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> - int total_queues)
> + int data_qps, int cvq)
I can see the convenience of being an int, but maybe it is more clear
to use a boolean?
> {
> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> VirtioBusState *vbus = VIRTIO_BUS(qbus);
> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> + int total_notifiers = data_qps * 2 + cvq;
> + VirtIONet *n = VIRTIO_NET(dev);
> + int nvhosts = data_qps + cvq;
> struct vhost_net *net;
> int r, e, i;
> NetClientState *peer;
> @@ -325,9 +328,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> return -ENOSYS;
> }
>
> - for (i = 0; i < total_queues; i++) {
> + for (i = 0; i < nvhosts; i++) {
> +
> + if (i < data_qps) {
> + peer = qemu_get_peer(ncs, i);
> + } else { /* Control Virtqueue */
> + peer = qemu_get_peer(ncs, n->max_qps);
The field max_qps should be max_queues until the next patch, or maybe
we can reorder the commits and then rename the field before this
commit?
Same comment later on this function and in vhost_net_stop.
Thanks!
> + }
>
> - peer = qemu_get_peer(ncs, i);
> net = get_vhost_net(peer);
> vhost_net_set_vq_index(net, i * 2);
>
> @@ -340,14 +348,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> }
> }
>
> - r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
> + r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
> if (r < 0) {
> error_report("Error binding guest notifier: %d", -r);
> goto err;
> }
>
> - for (i = 0; i < total_queues; i++) {
> - peer = qemu_get_peer(ncs, i);
> + for (i = 0; i < nvhosts; i++) {
> + if (i < data_qps) {
> + peer = qemu_get_peer(ncs, i);
> + } else {
> + peer = qemu_get_peer(ncs, n->max_qps);
> + }
> r = vhost_net_start_one(get_vhost_net(peer), dev);
>
> if (r < 0) {
> @@ -371,7 +383,7 @@ err_start:
> peer = qemu_get_peer(ncs , i);
> vhost_net_stop_one(get_vhost_net(peer), dev);
> }
> - e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> + e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
> if (e < 0) {
> fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
> fflush(stderr);
> @@ -381,18 +393,27 @@ err:
> }
>
> void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> - int total_queues)
> + int data_qps, int cvq)
> {
> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> VirtioBusState *vbus = VIRTIO_BUS(qbus);
> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
> + VirtIONet *n = VIRTIO_NET(dev);
> + NetClientState *peer;
> + int total_notifiers = data_qps * 2 + cvq;
> + int nvhosts = data_qps + cvq;
> int i, r;
>
> - for (i = 0; i < total_queues; i++) {
> - vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
> + for (i = 0; i < nvhosts; i++) {
> + if (i < data_qps) {
> + peer = qemu_get_peer(ncs, i);
> + } else {
> + peer = qemu_get_peer(ncs, n->max_qps);
> + }
> + vhost_net_stop_one(get_vhost_net(peer), dev);
> }
>
> - r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
> + r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
> if (r < 0) {
> fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
> fflush(stderr);
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index bd7958b9f0..614660274c 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> }
>
> n->vhost_started = 1;
> - r = vhost_net_start(vdev, n->nic->ncs, queues);
> + r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
> if (r < 0) {
> error_report("unable to start vhost net: %d: "
> "falling back on userspace virtio", -r);
> n->vhost_started = 0;
> }
> } else {
> - vhost_net_stop(vdev, n->nic->ncs, queues);
> + vhost_net_stop(vdev, n->nic->ncs, queues, 0);
> n->vhost_started = 0;
> }
> }
> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> index fba40cf695..e656e38af9 100644
> --- a/include/net/vhost_net.h
> +++ b/include/net/vhost_net.h
> @@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
> uint64_t vhost_net_get_max_queues(VHostNetState *net);
> struct vhost_net *vhost_net_init(VhostNetOptions *options);
>
> -int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> -void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
> +int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
> + int data_qps, int cvq);
> +void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
> + int data_qps, int cvq);
>
> void vhost_net_cleanup(VHostNetState *net);
>
> --
> 2.25.1
>
* Re: [PATCH 15/18] vhost-net: control virtqueue support
2021-06-30 17:33 ` Eugenio Perez Martin
@ 2021-07-01 3:03 ` Jason Wang
0 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-07-01 3:03 UTC (permalink / raw)
To: Eugenio Perez Martin
Cc: Eli Cohen, Cindy Lu, qemu-level, lingshan.zhu, Michael Tsirkin
On 2021/7/1 1:33 AM, Eugenio Perez Martin wrote:
> On Mon, Jun 21, 2021 at 6:18 AM Jason Wang <jasowang@redhat.com> wrote:
>> In the past we assumed there was no cvq, but this is not true when we
>> need control virtqueue support for vhost-user backends. So this patch
>> implements the control virtqueue support for vhost-net. As with the
>> datapath, the control virtqueue is also required to be coupled with a
>> NetClientState. vhost_net_start/stop() are tweaked to accept the
>> number of datapath queue pairs plus the number of control
>> virtqueues so that we can start and stop the vhost device.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/net/vhost_net.c | 43 ++++++++++++++++++++++++++++++-----------
>> hw/net/virtio-net.c | 4 ++--
>> include/net/vhost_net.h | 6 ++++--
>> 3 files changed, 38 insertions(+), 15 deletions(-)
>>
>> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
>> index ef1370bd92..fe2fd7e3d5 100644
>> --- a/hw/net/vhost_net.c
>> +++ b/hw/net/vhost_net.c
>> @@ -311,11 +311,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
>> }
>>
>> int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>> - int total_queues)
>> + int data_qps, int cvq)
> I can see the convenience of it being an int, but maybe it would be
> clearer to use a boolean?
I tend to leave this open for future extensions, e.g. we may have more
than one cvq.
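For illustration, keeping cvq as an int means the notifier and vhost
accounting in this patch already generalizes to N control virtqueues
(a hypothetical sketch, not part of this series):

    /* cvq counts control virtqueues: 0 (none), 1 (this series), N later */
    int total_notifiers = data_qps * 2 + cvq;
    int nvhosts = data_qps + cvq;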
>
>> {
>> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
>> VirtioBusState *vbus = VIRTIO_BUS(qbus);
>> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
>> + int total_notifiers = data_qps * 2 + cvq;
>> + VirtIONet *n = VIRTIO_NET(dev);
>> + int nvhosts = data_qps + cvq;
>> struct vhost_net *net;
>> int r, e, i;
>> NetClientState *peer;
>> @@ -325,9 +328,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>> return -ENOSYS;
>> }
>>
>> - for (i = 0; i < total_queues; i++) {
>> + for (i = 0; i < nvhosts; i++) {
>> +
>> + if (i < data_qps) {
>> + peer = qemu_get_peer(ncs, i);
>> + } else { /* Control Virtqueue */
>> + peer = qemu_get_peer(ncs, n->max_qps);
> The field max_qps should still be named max_queues until the next
> patch; maybe we can reorder the commits and rename the field before
> this one?
You're right, let me re-order the patches.
Thanks
>
> Same comment later on this function and in vhost_net_stop.
>
> Thanks!
>
>> + }
>>
>> - peer = qemu_get_peer(ncs, i);
>> net = get_vhost_net(peer);
>> vhost_net_set_vq_index(net, i * 2);
>>
>> @@ -340,14 +348,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>> }
>> }
>>
>> - r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
>> + r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
>> if (r < 0) {
>> error_report("Error binding guest notifier: %d", -r);
>> goto err;
>> }
>>
>> - for (i = 0; i < total_queues; i++) {
>> - peer = qemu_get_peer(ncs, i);
>> + for (i = 0; i < nvhosts; i++) {
>> + if (i < data_qps) {
>> + peer = qemu_get_peer(ncs, i);
>> + } else {
>> + peer = qemu_get_peer(ncs, n->max_qps);
>> + }
>> r = vhost_net_start_one(get_vhost_net(peer), dev);
>>
>> if (r < 0) {
>> @@ -371,7 +383,7 @@ err_start:
>> peer = qemu_get_peer(ncs , i);
>> vhost_net_stop_one(get_vhost_net(peer), dev);
>> }
>> - e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
>> + e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
>> if (e < 0) {
>> fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
>> fflush(stderr);
>> @@ -381,18 +393,27 @@ err:
>> }
>>
>> void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
>> - int total_queues)
>> + int data_qps, int cvq)
>> {
>> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
>> VirtioBusState *vbus = VIRTIO_BUS(qbus);
>> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
>> + VirtIONet *n = VIRTIO_NET(dev);
>> + NetClientState *peer;
>> + int total_notifiers = data_qps * 2 + cvq;
>> + int nvhosts = data_qps + cvq;
>> int i, r;
>>
>> - for (i = 0; i < total_queues; i++) {
>> - vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
>> + for (i = 0; i < nvhosts; i++) {
>> + if (i < data_qps) {
>> + peer = qemu_get_peer(ncs, i);
>> + } else {
>> + peer = qemu_get_peer(ncs, n->max_qps);
>> + }
>> + vhost_net_stop_one(get_vhost_net(peer), dev);
>> }
>>
>> - r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
>> + r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
>> if (r < 0) {
>> fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
>> fflush(stderr);
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index bd7958b9f0..614660274c 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>> }
>>
>> n->vhost_started = 1;
>> - r = vhost_net_start(vdev, n->nic->ncs, queues);
>> + r = vhost_net_start(vdev, n->nic->ncs, queues, 0);
>> if (r < 0) {
>> error_report("unable to start vhost net: %d: "
>> "falling back on userspace virtio", -r);
>> n->vhost_started = 0;
>> }
>> } else {
>> - vhost_net_stop(vdev, n->nic->ncs, queues);
>> + vhost_net_stop(vdev, n->nic->ncs, queues, 0);
>> n->vhost_started = 0;
>> }
>> }
>> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
>> index fba40cf695..e656e38af9 100644
>> --- a/include/net/vhost_net.h
>> +++ b/include/net/vhost_net.h
>> @@ -21,8 +21,10 @@ typedef struct VhostNetOptions {
>> uint64_t vhost_net_get_max_queues(VHostNetState *net);
>> struct vhost_net *vhost_net_init(VhostNetOptions *options);
>>
>> -int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
>> -void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs, int total_queues);
>> +int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>> + int data_qps, int cvq);
>> +void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
>> + int data_qps, int cvq);
>>
>> void vhost_net_cleanup(VHostNetState *net);
>>
>> --
>> 2.25.1
>>
* Re: [PATCH 18/18] vhost-vdpa: multiqueue support
2021-06-21 4:16 ` [PATCH 18/18] vhost-vdpa: multiqueue support Jason Wang
@ 2021-07-01 6:51 ` Eugenio Perez Martin
2021-07-01 8:15 ` Jason Wang
2021-07-06 7:46 ` Jason Wang
0 siblings, 2 replies; 51+ messages in thread
From: Eugenio Perez Martin @ 2021-07-01 6:51 UTC (permalink / raw)
To: Jason Wang; +Cc: Eli Cohen, Cindy Lu, qemu-level, lingshan.zhu, Michael Tsirkin
On Mon, Jun 21, 2021 at 6:18 AM Jason Wang <jasowang@redhat.com> wrote:
>
> This patch implements the multiqueue support for vhost-vdpa. This is
> done simply by reading the number of queue pairs from the config space
> and initializing the datapath and control path net clients.
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/net/virtio-net.c | 3 +-
> net/vhost-vdpa.c | 98 ++++++++++++++++++++++++++++++++++++++++-----
> 2 files changed, 91 insertions(+), 10 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 5074b521cf..2c2ed98c0b 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3370,7 +3370,8 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>
> n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
>
> - /* Figure out the datapath queue pairs since the bakcend could
> + /*
> + * Figure out the datapath queue pairs since the bakcend could
If we are going to modify the comment we could s/bakcend/backend/.
> * provide control queue via peers as well.
> */
> if (n->nic_conf.peers.queues) {
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index cc11b2ec40..048344b4bc 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -18,6 +18,7 @@
> #include "qemu/error-report.h"
> #include "qemu/option.h"
> #include "qapi/error.h"
> +#include <linux/vhost.h>
> #include <sys/ioctl.h>
> #include <err.h>
> #include "standard-headers/linux/virtio_net.h"
> @@ -52,6 +53,8 @@ const int vdpa_feature_bits[] = {
> VIRTIO_NET_F_HOST_UFO,
> VIRTIO_NET_F_MRG_RXBUF,
> VIRTIO_NET_F_MTU,
> + VIRTIO_NET_F_MQ,
> + VIRTIO_NET_F_CTRL_VQ,
Hi!
I'm not sure whether it's QEMU that must control this, but I cannot
use the vdpa_sim of Linux 5.13 (i.e., without the control vq patches)
with this series applied:
[ 3.967421] virtio_net virtio0: device advertises feature
VIRTIO_NET_F_CTRL_RX but not VIRTIO_NET_F_CTRL_VQ
[ 3.968613] virtio_net: probe of virtio0 failed with error -22
Did you mention it somewhere else and I missed it? Or is it actually
a bug in the device? In the second case, I think we should still work
around it in QEMU, because the old vdpasim_net with no
VIRTIO_NET_F_CTRL_VQ still works fine without this patch.
Thanks!
> VIRTIO_F_IOMMU_PLATFORM,
> VIRTIO_F_RING_PACKED,
> VIRTIO_NET_F_RSS,
> @@ -82,7 +85,8 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
> return ret;
> }
>
> -static int vhost_vdpa_add(NetClientState *ncs, void *be)
> +static int vhost_vdpa_add(NetClientState *ncs, void *be, int qp_index,
> + int nvqs)
> {
> VhostNetOptions options;
> struct vhost_net *net = NULL;
> @@ -95,7 +99,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
> options.net_backend = ncs;
> options.opaque = be;
> options.busyloop_timeout = 0;
> - options.nvqs = 2;
> + options.nvqs = nvqs;
>
> net = vhost_net_init(&options);
> if (!net) {
> @@ -159,18 +163,28 @@ static NetClientInfo net_vhost_vdpa_info = {
> static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> const char *device,
> const char *name,
> - int vdpa_device_fd)
> + int vdpa_device_fd,
> + int qp_index,
> + int nvqs,
> + bool is_datapath)
> {
> NetClientState *nc = NULL;
> VhostVDPAState *s;
> int ret = 0;
> assert(name);
> - nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
> + if (is_datapath) {
> + nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
> + name);
> + } else {
> + nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
> + device, name);
> + }
> snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
> s = DO_UPCAST(VhostVDPAState, nc, nc);
>
> s->vhost_vdpa.device_fd = vdpa_device_fd;
> - ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
> + s->vhost_vdpa.index = qp_index;
> + ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, qp_index, nvqs);
> if (ret) {
> qemu_del_net_client(nc);
> return NULL;
> @@ -196,12 +210,52 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
> return 0;
> }
>
> +static int vhost_vdpa_get_max_qps(int fd, int *has_cvq, Error **errp)
> +{
> + unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
> + struct vhost_vdpa_config *config;
> + __virtio16 *max_qps;
> + uint64_t features;
> + int ret;
> +
> + ret = ioctl(fd, VHOST_GET_FEATURES, &features);
> + if (ret) {
> + error_setg(errp, "Fail to query features from vhost-vDPA device");
> + return ret;
> + }
> +
> + if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
> + *has_cvq = 1;
> + } else {
> + *has_cvq = 0;
> + }
> +
> + if (features & (1 << VIRTIO_NET_F_MQ)) {
> + config = g_malloc0(config_size + sizeof(*max_qps));
> + config->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
> + config->len = sizeof(*max_qps);
> +
> + ret = ioctl(fd, VHOST_VDPA_GET_CONFIG, config);
> + if (ret) {
> + error_setg(errp, "Fail to get config from vhost-vDPA device");
> + return -ret;
> + }
> +
> + max_qps = (__virtio16 *)&config->buf;
> +
> + return lduw_le_p(max_qps);
> + }
> +
> + return 1;
> +}
> +
> int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> NetClientState *peer, Error **errp)
> {
> const NetdevVhostVDPAOptions *opts;
> int vdpa_device_fd;
> - NetClientState *nc;
> + NetClientState **ncs, *nc;
> + int qps, i, has_cvq = 0;
>
> assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> opts = &netdev->u.vhost_vdpa;
> @@ -216,11 +270,37 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> return -errno;
> }
>
> - nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
> - if (!nc) {
> + qps = vhost_vdpa_get_max_qps(vdpa_device_fd, &has_cvq, errp);
> + if (qps < 0) {
> qemu_close(vdpa_device_fd);
> - return -1;
> + return qps;
> + }
> +
> + ncs = g_malloc0(sizeof(*ncs) * qps);
> +
> + for (i = 0; i < qps; i++) {
> + ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> + vdpa_device_fd, i, 2, true);
> + if (!ncs[i])
> + goto err;
> }
>
> + if (has_cvq) {
> + nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> + vdpa_device_fd, i, 1, false);
> + if (!nc)
> + goto err;
> + }
> +
> + g_free(ncs);
> return 0;
> +
> +err:
> + if (i) {
> + qemu_del_net_client(ncs[0]);
> + }
> + qemu_close(vdpa_device_fd);
> + g_free(ncs);
> +
> + return -1;
> }
> --
> 2.25.1
>
* Re: [PATCH 18/18] vhost-vdpa: multiqueue support
2021-07-01 6:51 ` Eugenio Perez Martin
@ 2021-07-01 8:15 ` Jason Wang
2021-07-06 7:46 ` Jason Wang
1 sibling, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-07-01 8:15 UTC (permalink / raw)
To: Eugenio Perez Martin
Cc: Eli Cohen, Cindy Lu, qemu-level, lingshan.zhu, Michael Tsirkin
On 2021/7/1 2:51 PM, Eugenio Perez Martin wrote:
> On Mon, Jun 21, 2021 at 6:18 AM Jason Wang <jasowang@redhat.com> wrote:
>> This patch implements the multiqueue support for vhost-vdpa. This is
>> done simply by reading the number of queue pairs from the config space
>> and initializing the datapath and control path net clients.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/net/virtio-net.c | 3 +-
>> net/vhost-vdpa.c | 98 ++++++++++++++++++++++++++++++++++++++++-----
>> 2 files changed, 91 insertions(+), 10 deletions(-)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index 5074b521cf..2c2ed98c0b 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -3370,7 +3370,8 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>>
>> n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
>>
>> - /* Figure out the datapath queue pairs since the bakcend could
>> + /*
>> + * Figure out the datapath queue pairs since the bakcend could
> If we are going to modify the comment we could s/bakcend/backend/.
Will fix.
>
>> * provide control queue via peers as well.
>> */
>> if (n->nic_conf.peers.queues) {
>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>> index cc11b2ec40..048344b4bc 100644
>> --- a/net/vhost-vdpa.c
>> +++ b/net/vhost-vdpa.c
>> @@ -18,6 +18,7 @@
>> #include "qemu/error-report.h"
>> #include "qemu/option.h"
>> #include "qapi/error.h"
>> +#include <linux/vhost.h>
>> #include <sys/ioctl.h>
>> #include <err.h>
>> #include "standard-headers/linux/virtio_net.h"
>> @@ -52,6 +53,8 @@ const int vdpa_feature_bits[] = {
>> VIRTIO_NET_F_HOST_UFO,
>> VIRTIO_NET_F_MRG_RXBUF,
>> VIRTIO_NET_F_MTU,
>> + VIRTIO_NET_F_MQ,
>> + VIRTIO_NET_F_CTRL_VQ,
>
> Hi!
>
> I'm not sure whether it's QEMU that must control this, but I cannot
> use the vdpa_sim of Linux 5.13 (i.e., without the control vq patches)
> with this series applied:
>
> [ 3.967421] virtio_net virtio0: device advertises feature
> VIRTIO_NET_F_CTRL_RX but not VIRTIO_NET_F_CTRL_VQ
> [ 3.968613] virtio_net: probe of virtio0 failed with error -22
Interesting, that looks like a bug somewhere.
We never advertise CTRL_RX in the case of the simulator.
>
> Did you mention it somewhere else and I missed it? Or is it actually
> a bug in the device? In the second case, I think we should still work
> around it in QEMU, because the old vdpasim_net with no
> VIRTIO_NET_F_CTRL_VQ still works fine without this patch.
Should be a bug, will have a look.
Thanks
>
> Thanks!
>
>> VIRTIO_F_IOMMU_PLATFORM,
>> VIRTIO_F_RING_PACKED,
>> VIRTIO_NET_F_RSS,
>> @@ -82,7 +85,8 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
>> return ret;
>> }
>>
>> -static int vhost_vdpa_add(NetClientState *ncs, void *be)
>> +static int vhost_vdpa_add(NetClientState *ncs, void *be, int qp_index,
>> + int nvqs)
>> {
>> VhostNetOptions options;
>> struct vhost_net *net = NULL;
>> @@ -95,7 +99,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
>> options.net_backend = ncs;
>> options.opaque = be;
>> options.busyloop_timeout = 0;
>> - options.nvqs = 2;
>> + options.nvqs = nvqs;
>>
>> net = vhost_net_init(&options);
>> if (!net) {
>> @@ -159,18 +163,28 @@ static NetClientInfo net_vhost_vdpa_info = {
>> static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>> const char *device,
>> const char *name,
>> - int vdpa_device_fd)
>> + int vdpa_device_fd,
>> + int qp_index,
>> + int nvqs,
>> + bool is_datapath)
>> {
>> NetClientState *nc = NULL;
>> VhostVDPAState *s;
>> int ret = 0;
>> assert(name);
>> - nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
>> + if (is_datapath) {
>> + nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
>> + name);
>> + } else {
>> + nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
>> + device, name);
>> + }
>> snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
>> s = DO_UPCAST(VhostVDPAState, nc, nc);
>>
>> s->vhost_vdpa.device_fd = vdpa_device_fd;
>> - ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
>> + s->vhost_vdpa.index = qp_index;
>> + ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, qp_index, nvqs);
>> if (ret) {
>> qemu_del_net_client(nc);
>> return NULL;
>> @@ -196,12 +210,52 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
>> return 0;
>> }
>>
>> +static int vhost_vdpa_get_max_qps(int fd, int *has_cvq, Error **errp)
>> +{
>> + unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
>> + struct vhost_vdpa_config *config;
>> + __virtio16 *max_qps;
>> + uint64_t features;
>> + int ret;
>> +
>> + ret = ioctl(fd, VHOST_GET_FEATURES, &features);
>> + if (ret) {
>> + error_setg(errp, "Fail to query features from vhost-vDPA device");
>> + return ret;
>> + }
>> +
>> + if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
>> + *has_cvq = 1;
>> + } else {
>> + *has_cvq = 0;
>> + }
>> +
>> + if (features & (1 << VIRTIO_NET_F_MQ)) {
>> + config = g_malloc0(config_size + sizeof(*max_qps));
>> + config->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
>> + config->len = sizeof(*max_qps);
>> +
>> + ret = ioctl(fd, VHOST_VDPA_GET_CONFIG, config);
>> + if (ret) {
>> + error_setg(errp, "Fail to get config from vhost-vDPA device");
>> + return -ret;
>> + }
>> +
>> + max_qps = (__virtio16 *)&config->buf;
>> +
>> + return lduw_le_p(max_qps);
>> + }
>> +
>> + return 1;
>> +}
>> +
>> int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>> NetClientState *peer, Error **errp)
>> {
>> const NetdevVhostVDPAOptions *opts;
>> int vdpa_device_fd;
>> - NetClientState *nc;
>> + NetClientState **ncs, *nc;
>> + int qps, i, has_cvq = 0;
>>
>> assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>> opts = &netdev->u.vhost_vdpa;
>> @@ -216,11 +270,37 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>> return -errno;
>> }
>>
>> - nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
>> - if (!nc) {
>> + qps = vhost_vdpa_get_max_qps(vdpa_device_fd, &has_cvq, errp);
>> + if (qps < 0) {
>> qemu_close(vdpa_device_fd);
>> - return -1;
>> + return qps;
>> + }
>> +
>> + ncs = g_malloc0(sizeof(*ncs) * qps);
>> +
>> + for (i = 0; i < qps; i++) {
>> + ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>> + vdpa_device_fd, i, 2, true);
>> + if (!ncs[i])
>> + goto err;
>> }
>>
>> + if (has_cvq) {
>> + nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>> + vdpa_device_fd, i, 1, false);
>> + if (!nc)
>> + goto err;
>> + }
>> +
>> + g_free(ncs);
>> return 0;
>> +
>> +err:
>> + if (i) {
>> + qemu_del_net_client(ncs[0]);
>> + }
>> + qemu_close(vdpa_device_fd);
>> + g_free(ncs);
>> +
>> + return -1;
>> }
>> --
>> 2.25.1
>>
* Re: [PATCH 18/18] vhost-vdpa: multiqueue support
2021-07-01 6:51 ` Eugenio Perez Martin
2021-07-01 8:15 ` Jason Wang
@ 2021-07-06 7:46 ` Jason Wang
1 sibling, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-07-06 7:46 UTC (permalink / raw)
To: Eugenio Perez Martin
Cc: Eli Cohen, Cindy Lu, qemu-level, lingshan.zhu, Michael Tsirkin
On 2021/7/1 2:51 PM, Eugenio Perez Martin wrote:
>> * provide control queue via peers as well.
>> */
>> if (n->nic_conf.peers.queues) {
>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>> index cc11b2ec40..048344b4bc 100644
>> --- a/net/vhost-vdpa.c
>> +++ b/net/vhost-vdpa.c
>> @@ -18,6 +18,7 @@
>> #include "qemu/error-report.h"
>> #include "qemu/option.h"
>> #include "qapi/error.h"
>> +#include <linux/vhost.h>
>> #include <sys/ioctl.h>
>> #include <err.h>
>> #include "standard-headers/linux/virtio_net.h"
>> @@ -52,6 +53,8 @@ const int vdpa_feature_bits[] = {
>> VIRTIO_NET_F_HOST_UFO,
>> VIRTIO_NET_F_MRG_RXBUF,
>> VIRTIO_NET_F_MTU,
>> + VIRTIO_NET_F_MQ,
>> + VIRTIO_NET_F_CTRL_VQ,
> Hi!
>
> I'm not sure whether it's QEMU that must control this, but I cannot
> use the vdpa_sim of Linux 5.13 (i.e., without the control vq patches)
> with this series applied:
>
> [ 3.967421] virtio_net virtio0: device advertises feature
> VIRTIO_NET_F_CTRL_RX but not VIRTIO_NET_F_CTRL_VQ
> [ 3.968613] virtio_net: probe of virtio0 failed with error -22
>
> Did you mention it somewhere else and I missed it? Or is it actually
> a bug in the device? In the second case, I think we should still work
> around it in QEMU, because the old vdpasim_net with no
> VIRTIO_NET_F_CTRL_VQ still works fine without this patch.
>
> Thanks!
So the problem is that we need to validate not only MQ but also all
the features that depend on the CTRL_VQ here (rx filters, mac, rss,
announce, etc.).
I will fix this in the next version.
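For instance, a minimal (untested) sketch of that validation could mask
out every CTRL_VQ-dependent feature when the device does not offer
VIRTIO_NET_F_CTRL_VQ; the helper name here is made up, while the
feature bits are the ones from the standard virtio_net header:

    static uint64_t vhost_vdpa_net_mask_ctrl_features(uint64_t features)
    {
        if (!(features & (1ULL << VIRTIO_NET_F_CTRL_VQ))) {
            /* Drop everything that is meaningless without a control vq */
            features &= ~((1ULL << VIRTIO_NET_F_CTRL_RX) |
                          (1ULL << VIRTIO_NET_F_CTRL_VLAN) |
                          (1ULL << VIRTIO_NET_F_CTRL_RX_EXTRA) |
                          (1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) |
                          (1ULL << VIRTIO_NET_F_MQ) |
                          (1ULL << VIRTIO_NET_F_CTRL_MAC_ADDR) |
                          (1ULL << VIRTIO_NET_F_RSS));
        }
        return features;
    }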
Thanks
* Re: [PATCH 07/18] vhost-vdpa: tweak the error label in vhost_vdpa_add()
2021-06-23 15:03 ` Stefano Garzarella
@ 2021-07-06 8:03 ` Jason Wang
2021-07-06 8:10 ` Jason Wang
0 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-07-06 8:03 UTC (permalink / raw)
To: Stefano Garzarella; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On 2021/6/23 11:03 PM, Stefano Garzarella wrote:
> On Mon, Jun 21, 2021 at 12:16:39PM +0800, Jason Wang wrote:
>> Introduce new error label to avoid the unnecessary checking of net
>> pointer.
>>
>> Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> net/vhost-vdpa.c | 13 ++++++-------
>> 1 file changed, 6 insertions(+), 7 deletions(-)
>>
>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>> index 21f09c546f..0da7bc347a 100644
>> --- a/net/vhost-vdpa.c
>> +++ b/net/vhost-vdpa.c
>> @@ -100,19 +100,18 @@ static int vhost_vdpa_add(NetClientState *ncs,
>> void *be)
>> net = vhost_net_init(&options);
>> if (!net) {
>> error_report("failed to init vhost_net for queue");
>> - goto err;
>> + goto err_init;
>> }
>> s->vhost_net = net;
>> ret = vhost_vdpa_net_check_device_id(net);
>> if (ret) {
>> - goto err;
>> + goto err_check;
>> }
>> return 0;
>> -err:
>> - if (net) {
>> - vhost_net_cleanup(net);
>> - g_free(net);
>> - }
>> +err_check:
>> + vhost_net_cleanup(net);
>> + g_free(net);
>
> Should we set s->vhost_net to NULL to avoid use after free?
>
> Then we should also remove the `assert(s->vhost_net)` in
> net_vhost_vdpa_init() since we can fail.
Right, will do this in a separate patch.
Thanks
>
>> +err_init:
>> return -1;
>> }
>>
>> --
>> 2.25.1
>>
>>
>
* Re: [PATCH 07/18] vhost-vdpa: tweak the error label in vhost_vdpa_add()
2021-07-06 8:03 ` Jason Wang
@ 2021-07-06 8:10 ` Jason Wang
2021-07-06 8:27 ` Stefano Garzarella
0 siblings, 1 reply; 51+ messages in thread
From: Jason Wang @ 2021-07-06 8:10 UTC (permalink / raw)
To: Stefano Garzarella; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On 2021/7/6 4:03 PM, Jason Wang wrote:
>
> On 2021/6/23 11:03 PM, Stefano Garzarella wrote:
>> On Mon, Jun 21, 2021 at 12:16:39PM +0800, Jason Wang wrote:
>>> Introduce new error label to avoid the unnecessary checking of net
>>> pointer.
>>>
>>> Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> net/vhost-vdpa.c | 13 ++++++-------
>>> 1 file changed, 6 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>>> index 21f09c546f..0da7bc347a 100644
>>> --- a/net/vhost-vdpa.c
>>> +++ b/net/vhost-vdpa.c
>>> @@ -100,19 +100,18 @@ static int vhost_vdpa_add(NetClientState *ncs,
>>> void *be)
>>> net = vhost_net_init(&options);
>>> if (!net) {
>>> error_report("failed to init vhost_net for queue");
>>> - goto err;
>>> + goto err_init;
>>> }
>>> s->vhost_net = net;
>>> ret = vhost_vdpa_net_check_device_id(net);
>>> if (ret) {
>>> - goto err;
>>> + goto err_check;
>>> }
>>> return 0;
>>> -err:
>>> - if (net) {
>>> - vhost_net_cleanup(net);
>>> - g_free(net);
>>> - }
>>> +err_check:
>>> + vhost_net_cleanup(net);
>>> + g_free(net);
>>
>> Should we set s->vhost_net to NULL to avoid use after free?
>>
>> Then we should also remove the `assert(s->vhost_net)` in
>> net_vhost_vdpa_init() since we can fail.
>
>
> Right, will do this in a separate patch.
I just forgot that the job has already been done in the next patch :)
So we are fine here.
Thanks
>
> Thanks
>
>
>>
>>> +err_init:
>>> return -1;
>>> }
>>>
>>> --
>>> 2.25.1
>>>
>>>
>>
* Re: [PATCH 07/18] vhost-vdpa: tweak the error label in vhost_vdpa_add()
2021-07-06 8:10 ` Jason Wang
@ 2021-07-06 8:27 ` Stefano Garzarella
2021-07-06 8:28 ` Jason Wang
0 siblings, 1 reply; 51+ messages in thread
From: Stefano Garzarella @ 2021-07-06 8:27 UTC (permalink / raw)
To: Jason Wang; +Cc: lulu, mst, qemu-devel, eperezma, elic, lingshan.zhu
On Tue, Jul 06, 2021 at 04:10:22PM +0800, Jason Wang wrote:
>
>On 2021/7/6 4:03 PM, Jason Wang wrote:
>>
>>On 2021/6/23 11:03 PM, Stefano Garzarella wrote:
>>>On Mon, Jun 21, 2021 at 12:16:39PM +0800, Jason Wang wrote:
>>>>Introduce new error label to avoid the unnecessary checking of net
>>>>pointer.
>>>>
>>>>Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
>>>>Signed-off-by: Jason Wang <jasowang@redhat.com>
>>>>---
>>>>net/vhost-vdpa.c | 13 ++++++-------
>>>>1 file changed, 6 insertions(+), 7 deletions(-)
>>>>
>>>>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>>>>index 21f09c546f..0da7bc347a 100644
>>>>--- a/net/vhost-vdpa.c
>>>>+++ b/net/vhost-vdpa.c
>>>>@@ -100,19 +100,18 @@ static int vhost_vdpa_add(NetClientState
>>>>*ncs, void *be)
>>>> net = vhost_net_init(&options);
>>>> if (!net) {
>>>> error_report("failed to init vhost_net for queue");
>>>>- goto err;
>>>>+ goto err_init;
>>>> }
>>>> s->vhost_net = net;
>>>> ret = vhost_vdpa_net_check_device_id(net);
>>>> if (ret) {
>>>>- goto err;
>>>>+ goto err_check;
>>>> }
>>>> return 0;
>>>>-err:
>>>>- if (net) {
>>>>- vhost_net_cleanup(net);
>>>>- g_free(net);
>>>>- }
>>>>+err_check:
>>>>+ vhost_net_cleanup(net);
>>>>+ g_free(net);
>>>
>>>Should we set s->vhost_net to NULL to avoid use after free?
>>>
>>>Then we should also remove the `assert(s->vhost_net)` in
>>>net_vhost_vdpa_init() since we can fail.
>>
>>
>>Right, will do this in a separate patch.
>
>
>I just forgot that the job has already been done in the next patch :)
I saw it later too ;-)
>
>So we are fine here.
Yep for the assert(), but what about setting s->vhost_net to NULL?
Or just move the s->vhost_net assignment to just before the `return 0`.
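Something like this illustrative reordering of the hunk above
(an untested sketch):

    net = vhost_net_init(&options);
    if (!net) {
        error_report("failed to init vhost_net for queue");
        goto err_init;
    }
    ret = vhost_vdpa_net_check_device_id(net);
    if (ret) {
        goto err_check;
    }
    /* Publish the pointer only after every check has passed, so the
     * error paths can never leave a dangling s->vhost_net behind.
     */
    s->vhost_net = net;
    return 0;
err_check:
    vhost_net_cleanup(net);
    g_free(net);
err_init:
    return -1;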
Thanks,
Stefano
* Re: [PATCH 07/18] vhost-vdpa: tweak the error label in vhost_vdpa_add()
2021-07-06 8:27 ` Stefano Garzarella
@ 2021-07-06 8:28 ` Jason Wang
0 siblings, 0 replies; 51+ messages in thread
From: Jason Wang @ 2021-07-06 8:28 UTC (permalink / raw)
To: Stefano Garzarella
Cc: Cindy Lu, mst, qemu-devel, eperezma, Eli Cohen, Zhu Lingshan
On Tue, Jul 6, 2021 at 4:27 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Tue, Jul 06, 2021 at 04:10:22PM +0800, Jason Wang wrote:
> >
> >On 2021/7/6 4:03 PM, Jason Wang wrote:
> >>
> >>On 2021/6/23 11:03 PM, Stefano Garzarella wrote:
> >>>On Mon, Jun 21, 2021 at 12:16:39PM +0800, Jason Wang wrote:
> >>>>Introduce new error label to avoid the unnecessary checking of net
> >>>>pointer.
> >>>>
> >>>>Fixes: 1e0a84ea49b68 ("vhost-vdpa: introduce vhost-vdpa net client")
> >>>>Signed-off-by: Jason Wang <jasowang@redhat.com>
> >>>>---
> >>>>net/vhost-vdpa.c | 13 ++++++-------
> >>>>1 file changed, 6 insertions(+), 7 deletions(-)
> >>>>
> >>>>diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> >>>>index 21f09c546f..0da7bc347a 100644
> >>>>--- a/net/vhost-vdpa.c
> >>>>+++ b/net/vhost-vdpa.c
> >>>>@@ -100,19 +100,18 @@ static int vhost_vdpa_add(NetClientState
> >>>>*ncs, void *be)
> >>>> net = vhost_net_init(&options);
> >>>> if (!net) {
> >>>> error_report("failed to init vhost_net for queue");
> >>>>- goto err;
> >>>>+ goto err_init;
> >>>> }
> >>>> s->vhost_net = net;
> >>>> ret = vhost_vdpa_net_check_device_id(net);
> >>>> if (ret) {
> >>>>- goto err;
> >>>>+ goto err_check;
> >>>> }
> >>>> return 0;
> >>>>-err:
> >>>>- if (net) {
> >>>>- vhost_net_cleanup(net);
> >>>>- g_free(net);
> >>>>- }
> >>>>+err_check:
> >>>>+ vhost_net_cleanup(net);
> >>>>+ g_free(net);
> >>>
> >>>Should we set s->vhost_net to NULL to avoid use after free?
> >>>
> >>>Then we should also remove the `assert(s->vhost_net)` in
> >>>net_vhost_vdpa_init() since we can fail.
> >>
> >>
> >>Right, will do this in a separate patch.
> >
> >
> >I just forgot that the job has already been done in the next patch :)
>
> I saw it later too ;-)
>
> >
> >So we are fine here.
>
> Yep for the assert(), but what about setting s->vhost_net to NULL?
> Or just move the s->vhost_net assignment just before the `return 0`.
We can do that. I've posted V2; if it gets comments, I will do that
in V3. Otherwise I will send a separate patch for this.
Thanks
>
> Thanks,
> Stefano
>
end of thread, other threads:[~2021-07-06 8:43 UTC | newest]
Thread overview: 51+ messages
2021-06-21 4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
2021-06-21 4:16 ` [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
2021-06-21 11:45 ` Eli Cohen
2021-06-24 7:42 ` Jason Wang
2021-06-21 4:16 ` [PATCH 02/18] vhost: use unsigned int for nvqs Jason Wang
2021-06-21 11:46 ` Eli Cohen
2021-06-21 4:16 ` [PATCH 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
2021-06-23 14:49 ` Stefano Garzarella
2021-06-24 6:22 ` Eli Cohen
2021-06-24 7:42 ` Jason Wang
2021-06-21 4:16 ` [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
2021-06-23 14:53 ` Stefano Garzarella
2021-06-24 6:38 ` Eli Cohen
2021-06-24 7:46 ` Jason Wang
2021-06-21 4:16 ` [PATCH 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
2021-06-23 14:56 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
2021-06-23 15:00 ` Stefano Garzarella
2021-06-24 7:06 ` Eli Cohen
2021-06-24 7:10 ` Jason Wang
2021-06-24 7:32 ` Eli Cohen
2021-06-24 7:14 ` Eli Cohen
2021-06-24 7:41 ` Jason Wang
2021-06-21 4:16 ` [PATCH 07/18] vhost-vdpa: tweak the error label " Jason Wang
2021-06-23 15:03 ` Stefano Garzarella
2021-07-06 8:03 ` Jason Wang
2021-07-06 8:10 ` Jason Wang
2021-07-06 8:27 ` Stefano Garzarella
2021-07-06 8:28 ` Jason Wang
2021-06-21 4:16 ` [PATCH 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
2021-06-23 15:04 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 09/18] vhost-vdpa: remove the unncessary queue_index assignment Jason Wang
2021-06-23 15:05 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
2021-06-23 15:07 ` Stefano Garzarella
2021-06-21 4:16 ` [PATCH 11/18] vhost-vdpa: classify one time request Jason Wang
2021-06-21 4:16 ` [PATCH 12/18] vhost-vdpa: prepare for the multiqueue support Jason Wang
2021-06-21 4:16 ` [PATCH 13/18] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
2021-06-21 4:16 ` [PATCH 14/18] net: introduce control client Jason Wang
2021-06-21 4:16 ` [PATCH 15/18] vhost-net: control virtqueue support Jason Wang
2021-06-24 7:42 ` Eli Cohen
2021-06-24 7:44 ` Jason Wang
2021-06-30 17:33 ` Eugenio Perez Martin
2021-07-01 3:03 ` Jason Wang
2021-06-21 4:16 ` [PATCH 16/18] virito-net: use "qps" instead of "queues" when possible Jason Wang
2021-06-21 4:16 ` [PATCH 17/18] virtio-net: vhost control virtqueue support Jason Wang
2021-06-21 4:16 ` [PATCH 18/18] vhost-vdpa: multiqueue support Jason Wang
2021-07-01 6:51 ` Eugenio Perez Martin
2021-07-01 8:15 ` Jason Wang
2021-07-06 7:46 ` Jason Wang
2021-06-21 4:33 ` [PATCH 00/18] vhost-vDPA multiqueue no-reply