* [PATCH v3 0/6] vhost-vdpa multiqueue fixes
@ 2022-05-06  4:54 Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 1/6] virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated Si-Wei Liu
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Si-Wei Liu @ 2022-05-06  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: jasowang, mst, eperezma, sgarzare, eli, si-wei.liu

Hi,

This patch series attempts to fix a few issues in the vhost-vdpa multiqueue functionality.

Patches #1 and #2 are the formal submission of the RFC patch in:
https://lore.kernel.org/qemu-devel/c3e931ee-1a1b-9c2f-2f59-cb4395c230f9@oracle.com/

Patches #3 through #5 are small bug fixes. Please find the description of
each in its commit log.

Patch #6 is a workaround for the QEMU segfault described in:
https://lore.kernel.org/qemu-devel/4f2acb7a-d436-9d97-80b1-3308c1b396b5@oracle.com/


Thanks,
-Siwei

---
v3:
  - switch to LOG_GUEST_ERROR for guest trigger-able error
  - add temporary band-aid fix for QEMU crash due to recursive call
v2:
  - split off vhost_dev notifier patch from "align ctrl_vq index for non-mq
    guest for vhost_vdpa"
  - change assert to error message
  - rename vhost_vdpa_one_time_request to vhost_vdpa_first_dev for clarity

Si-Wei Liu (6):
  virtio-net: setup vhost_dev and notifiers for cvq only when feature is
    negotiated
  virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
  vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
  vhost-net: fix improper cleanup in vhost_net_start
  vhost-vdpa: backend feature should set only once
  virtio-net: don't handle mq request in userspace handler for
    vhost-vdpa

 hw/net/vhost_net.c     |  4 +++-
 hw/net/virtio-net.c    | 49 ++++++++++++++++++++++++++++++++++++++++++++++---
 hw/virtio/vhost-vdpa.c | 23 +++++++++++++++--------
 net/vhost-vdpa.c       |  4 +++-
 4 files changed, 67 insertions(+), 13 deletions(-)

-- 
1.8.3.1




* [PATCH v3 1/6] virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated
  2022-05-06  4:54 [PATCH v3 0/6] vhost-vdpa multiqueue fixes Si-Wei Liu
@ 2022-05-06  4:54 ` Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 2/6] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa Si-Wei Liu
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Si-Wei Liu @ 2022-05-06  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: jasowang, mst, eperezma, sgarzare, eli, si-wei.liu

When the control virtqueue feature is absent or not negotiated,
vhost_net_start() still tries to set up the vhost_dev and install
vhost notifiers for the control virtqueue, which results in
erroneous ioctl calls with an incorrect queue index being sent
down to the driver. Do that only when needed.
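
For context, a rough sketch of why the cvq count matters here
(illustrative only; the expressions below paraphrase vhost_net_start()
and are not part of this patch):

    /*
     * vhost_net_start() sizes the notifier and vhost_dev setup from
     * the data queue pairs plus the optional control vq, roughly:
     *
     *     total_notifiers = data_queue_pairs * 2 + cvq;
     *     nvhosts         = data_queue_pairs + cvq;
     *
     * so cvq must be 0 whenever VIRTIO_NET_F_CTRL_VQ has not been
     * negotiated, otherwise an extra vhost_dev and notifier get set
     * up for a queue the guest never configured.
     */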

Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
 hw/net/virtio-net.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 1067e72..ffb3475 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -245,7 +245,8 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
     int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
-    int cvq = n->max_ncs - n->max_queue_pairs;
+    int cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
+              n->max_ncs - n->max_queue_pairs : 0;
 
     if (!get_vhost_net(nc->peer)) {
         return;
-- 
1.8.3.1


* [PATCH v3 2/6] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
  2022-05-06  4:54 [PATCH v3 0/6] vhost-vdpa multiqueue fixes Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 1/6] virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated Si-Wei Liu
@ 2022-05-06  4:54 ` Si-Wei Liu
  2022-05-06  7:35   ` Jason Wang
  2022-05-06  4:54 ` [PATCH v3 3/6] vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa Si-Wei Liu
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Si-Wei Liu @ 2022-05-06  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: jasowang, mst, eperezma, sgarzare, eli, si-wei.liu

With an MQ-enabled vdpa device and a guest that does not support MQ,
e.g. booting vdpa with mq=on over OVMF with a single vqp, the
assertion failure below is seen:

../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.

0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
   at ../hw/virtio/virtio-pci.c:974
8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
   at ../hw/net/vhost_net.c:361
10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
   at ../softmmu/memory.c:492
15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
   at ../softmmu/memory.c:1504
17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at /home/opc/qemu-upstream/include/qemu/host-utils.h:165
18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
   at ../softmmu/physmem.c:2914
20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
   attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6

The assertion fails because the vhost_dev index for the ctrl vq was
not aligned with the one actually in use by the guest. Upon
multiqueue feature negotiation in virtio_net_set_multiqueue(), if the
guest doesn't support multiqueue, the guest vq layout shrinks to a
single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
This results in the ctrl_vq taking a different vhost_dev group index
than the default. We can map a vq to the correct vhost_dev group by
checking whether MQ is supported by the guest and successfully
negotiated. Since the MQ feature is only offered along with CTRL_VQ,
we ensure that index 2 refers to the control vq whenever MQ is not
supported by the guest.
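
For reference, a sketch of the two guest-visible vq layouts being
contrasted (illustrative only, derived from the description above):

    /*
     * non-MQ guest (1 queue pair):    MQ guest with N queue pairs:
     *     vq 0: rx0                       vq 0 .. 2N-1: rx/tx pairs
     *     vq 1: tx0                       vq 2N:        ctrl
     *     vq 2: ctrl
     *
     * so for a non-MQ guest, vq index 2 must map to the control vq's
     * NetClientState at n->max_queue_pairs, not to the data queue
     * pair that vq2q(2) would select.
     */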

Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
Suggested-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
 hw/net/virtio-net.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index ffb3475..f0bb29c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -14,6 +14,7 @@
 #include "qemu/osdep.h"
 #include "qemu/atomic.h"
 #include "qemu/iov.h"
+#include "qemu/log.h"
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
 #include "hw/virtio/virtio.h"
@@ -3171,8 +3172,22 @@ static NetClientInfo net_virtio_info = {
 static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
 {
     VirtIONet *n = VIRTIO_NET(vdev);
-    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    NetClientState *nc;
     assert(n->vhost_started);
+    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
+        /* Must guard against invalid features and bogus queue index
+         * from being set by malicious guest, or penetrated through
+         * buggy migration stream.
+         */
+        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bogus vq index ignored\n", __func__);
+            return false;
+        }
+        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
+    } else {
+        nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    }
     return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
 }
 
@@ -3180,8 +3195,22 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
                                            bool mask)
 {
     VirtIONet *n = VIRTIO_NET(vdev);
-    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    NetClientState *nc;
     assert(n->vhost_started);
+    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
+        /* Must guard against invalid features and bogus queue index
+         * from being set by malicious guest, or penetrated through
+         * buggy migration stream.
+         */
+        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bogus vq index ignored\n", __func__);
+            return;
+        }
+        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
+    } else {
+        nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    }
     vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
                              vdev, idx, mask);
 }
-- 
1.8.3.1




* [PATCH v3 3/6] vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
  2022-05-06  4:54 [PATCH v3 0/6] vhost-vdpa multiqueue fixes Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 1/6] virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 2/6] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa Si-Wei Liu
@ 2022-05-06  4:54 ` Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 4/6] vhost-net: fix improper cleanup in vhost_net_start Si-Wei Liu
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Si-Wei Liu @ 2022-05-06  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: jasowang, mst, eperezma, sgarzare, eli, si-wei.liu

... such that net clients created before the failure are not leaked
in case of error.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 1e9fe47..df1e69e 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -306,7 +306,9 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
 
 err:
     if (i) {
-        qemu_del_net_client(ncs[0]);
+        for (i--; i >= 0; i--) {
+            qemu_del_net_client(ncs[i]);
+        }
     }
     qemu_close(vdpa_device_fd);
 
-- 
1.8.3.1




* [PATCH v3 4/6] vhost-net: fix improper cleanup in vhost_net_start
  2022-05-06  4:54 [PATCH v3 0/6] vhost-vdpa multiqueue fixes Si-Wei Liu
                   ` (2 preceding siblings ...)
  2022-05-06  4:54 ` [PATCH v3 3/6] vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa Si-Wei Liu
@ 2022-05-06  4:54 ` Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 5/6] vhost-vdpa: backend feature should set only once Si-Wei Liu
  2022-05-06  4:54 ` [PATCH v3 6/6] virtio-net: don't handle mq request in userspace handler for vhost-vdpa Si-Wei Liu
  5 siblings, 0 replies; 10+ messages in thread
From: Si-Wei Liu @ 2022-05-06  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: jasowang, mst, eperezma, sgarzare, eli, si-wei.liu

vhost_net_start() was missing a corresponding vhost_net_stop_one()
upon error from vhost_set_vring_enable(). While at it, make the error
handling at err_start more robust. No real issue was found due to
this, though.
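
For context, a sketch of why the unwind index needs remapping
(illustrative; it mirrors the layout assumed by the hunk below):

    /*
     * ncs[] holds the data queue pairs first and the control vq's
     * NetClientState at index n->max_queue_pairs, so the unwind loop
     * cannot simply reuse i as the peer index once i reaches the cvq:
     *
     *     peer = qemu_get_peer(ncs, i < data_queue_pairs
     *                               ? i : n->max_queue_pairs);
     */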

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/vhost_net.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 30379d2..d6d7c51 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -381,6 +381,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
             r = vhost_set_vring_enable(peer, peer->vring_enable);
 
             if (r < 0) {
+                vhost_net_stop_one(get_vhost_net(peer), dev);
                 goto err_start;
             }
         }
@@ -390,7 +391,8 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
 
 err_start:
     while (--i >= 0) {
-        peer = qemu_get_peer(ncs , i);
+        peer = qemu_get_peer(ncs, i < data_queue_pairs ?
+                                  i : n->max_queue_pairs);
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
     e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
-- 
1.8.3.1




* [PATCH v3 5/6] vhost-vdpa: backend feature should set only once
  2022-05-06  4:54 [PATCH v3 0/6] vhost-vdpa multiqueue fixes Si-Wei Liu
                   ` (3 preceding siblings ...)
  2022-05-06  4:54 ` [PATCH v3 4/6] vhost-net: fix improper cleanup in vhost_net_start Si-Wei Liu
@ 2022-05-06  4:54 ` Si-Wei Liu
  2022-05-06 12:22   ` Stefano Garzarella
  2022-05-06  4:54 ` [PATCH v3 6/6] virtio-net: don't handle mq request in userspace handler for vhost-vdpa Si-Wei Liu
  5 siblings, 1 reply; 10+ messages in thread
From: Si-Wei Liu @ 2022-05-06  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: jasowang, mst, eperezma, sgarzare, eli, si-wei.liu

The vhost_vdpa_one_time_request() branch in
vhost_vdpa_set_backend_cap() incorrectly sends down
ioctls on vhost_dev's with a non-zero index. This may
end up with multiple VHOST_SET_BACKEND_FEATURES
ioctl calls sent down on the vhost-vdpa fd that is
shared between all these vhost_dev's.

To fix it, send down the ioctl only once, via the first
vhost_dev with index 0. For better code readability,
vhost_vdpa_one_time_request() is renamed to
vhost_vdpa_first_dev() with its polarity flipped.
This call is only applicable to requests that perform
an operation before setting up the queues, usually at
the beginning of operation. Document the requirement
for it in place.
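
For context, a sketch of why the old check fired more than once
(illustrative only; see the hunks below for the actual change):

    /*
     * All vhost_dev sections of a multiqueue vhost-vdpa device share
     * a single vhost-vdpa fd.  The old predicate
     *
     *     return v->index != 0;    // vhost_vdpa_one_time_request()
     *
     * is true for every section except the first, so gating the
     * VHOST_SET_BACKEND_FEATURES call on it sent the ioctl once per
     * such section on the same fd, instead of exactly once from the
     * section with index 0.
     */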

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 8adf7c0..fd1268e 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -366,11 +366,18 @@ static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
                                     v->iova_range.last);
 }
 
-static bool vhost_vdpa_one_time_request(struct vhost_dev *dev)
+/*
+ * The use of this function is for requests that only need to be
+ * applied once. Typically such a request occurs at the beginning
+ * of operation, and before setting up queues. It should not be
+ * used for requests that perform their operation after all queues
+ * are set, which would need to check dev->vq_index_end instead.
+ */
+static bool vhost_vdpa_first_dev(struct vhost_dev *dev)
 {
     struct vhost_vdpa *v = dev->opaque;
 
-    return v->index != 0;
+    return v->index == 0;
 }
 
 static int vhost_vdpa_get_dev_features(struct vhost_dev *dev,
@@ -451,7 +458,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
 
     vhost_vdpa_get_iova_range(v);
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -594,7 +601,7 @@ static int vhost_vdpa_memslots_limit(struct vhost_dev *dev)
 static int vhost_vdpa_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -623,7 +630,7 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
     struct vhost_vdpa *v = dev->opaque;
     int ret;
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -665,7 +672,7 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
 
     features &= f;
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (vhost_vdpa_first_dev(dev)) {
         r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
         if (r) {
             return -EFAULT;
@@ -1118,7 +1125,7 @@ static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
                                      struct vhost_log *log)
 {
     struct vhost_vdpa *v = dev->opaque;
-    if (v->shadow_vqs_enabled || vhost_vdpa_one_time_request(dev)) {
+    if (v->shadow_vqs_enabled || !vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -1240,7 +1247,7 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
 
 static int vhost_vdpa_set_owner(struct vhost_dev *dev)
 {
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
-- 
1.8.3.1




* [PATCH v3 6/6] virtio-net: don't handle mq request in userspace handler for vhost-vdpa
  2022-05-06  4:54 [PATCH v3 0/6] vhost-vdpa multiqueue fixes Si-Wei Liu
                   ` (4 preceding siblings ...)
  2022-05-06  4:54 ` [PATCH v3 5/6] vhost-vdpa: backend feature should set only once Si-Wei Liu
@ 2022-05-06  4:54 ` Si-Wei Liu
  2022-05-06  7:35   ` Jason Wang
  5 siblings, 1 reply; 10+ messages in thread
From: Si-Wei Liu @ 2022-05-06  4:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: jasowang, mst, eperezma, sgarzare, eli, si-wei.liu

In the vhost_net_stop() path, virtio_queue_host_notifier_read() reads
pending events left behind on the ioeventfd and attempts to handle
outstanding kicks with the userspace vq handler. However, in the
ctrl_vq handler, virtio_net_handle_mq() recursively calls into
virtio_net_set_status(), which may lead to a segmentation fault as
shown in the stack trace below:

0  0x000055f800df1780 in qdev_get_parent_bus (dev=0x0) at ../hw/core/qdev.c:376
1  0x000055f800c68ad8 in virtio_bus_device_iommu_enabled (vdev=vdev@entry=0x0) at ../hw/virtio/virtio-bus.c:331
2  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>) at ../hw/virtio/vhost.c:318
3  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>, buffer=0x7fc19bec5240, len=2052, is_write=1, access_len=2052) at ../hw/virtio/vhost.c:336
4  0x000055f800d71867 in vhost_virtqueue_stop (dev=dev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590, vq=0x55f8037cceb0, idx=0) at ../hw/virtio/vhost.c:1241
5  0x000055f800d7406c in vhost_dev_stop (hdev=hdev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590) at ../hw/virtio/vhost.c:1839
6  0x000055f800bf00a7 in vhost_net_stop_one (net=0x55f8037ccc30, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:315
7  0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
   at ../hw/net/vhost_net.c:423
8  0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
9  0x000055f800d4e628 in virtio_net_set_status (vdev=vdev@entry=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
10 0x000055f800d534d8 in virtio_net_handle_ctrl (iov_cnt=<optimized out>, iov=<optimized out>, cmd=0 '\000', n=0x55f8044ec590) at ../hw/net/virtio-net.c:1408
11 0x000055f800d534d8 in virtio_net_handle_ctrl (vdev=0x55f8044ec590, vq=0x7fc1a7e888d0) at ../hw/net/virtio-net.c:1452
12 0x000055f800d69f37 in virtio_queue_host_notifier_read (vq=0x7fc1a7e888d0) at ../hw/virtio/virtio.c:2331
13 0x000055f800d69f37 in virtio_queue_host_notifier_read (n=n@entry=0x7fc1a7e8894c) at ../hw/virtio/virtio.c:3575
14 0x000055f800c688e6 in virtio_bus_cleanup_host_notifier (bus=<optimized out>, n=n@entry=14) at ../hw/virtio/virtio-bus.c:312
15 0x000055f800d73106 in vhost_dev_disable_notifiers (hdev=hdev@entry=0x55f8035b51b0, vdev=vdev@entry=0x55f8044ec590)
   at ../../../include/hw/virtio/virtio-bus.h:35
16 0x000055f800bf00b2 in vhost_net_stop_one (net=0x55f8035b51b0, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:316
17 0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
   at ../hw/net/vhost_net.c:423
18 0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
19 0x000055f800d4e628 in virtio_net_set_status (vdev=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
20 0x000055f800d6c4b2 in virtio_set_status (vdev=0x55f8044ec590, val=<optimized out>) at ../hw/virtio/virtio.c:1945
21 0x000055f800d11d9d in vm_state_notify (running=running@entry=false, state=state@entry=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:333
22 0x000055f800d04e7a in do_vm_stop (state=state@entry=RUN_STATE_SHUTDOWN, send_stop=send_stop@entry=false) at ../softmmu/cpus.c:262
23 0x000055f800d04e99 in vm_shutdown () at ../softmmu/cpus.c:280
24 0x000055f800d126af in qemu_cleanup () at ../softmmu/runstate.c:812
25 0x000055f800ad5b13 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:51

For now, temporarily disable handling the MQ request from the ctrl_vq
userspace handler to avoid the recursive virtio_net_set_status()
call. Some rework is needed to allow changing the number of queues
without going through a full virtio_net_set_status cycle,
particularly for the vhost-vdpa backend.

This patch will need to be reverted as soon as future patches that
handle the change of the number of queues in userspace are merged.
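
For clarity, the re-entrancy being avoided, simplified from the stack
trace above:

    vhost_net_stop()
      -> vhost_net_stop_one()
        -> vhost_dev_disable_notifiers()
          -> virtio_queue_host_notifier_read()   /* drains pending kick */
            -> virtio_net_handle_ctrl()
              -> virtio_net_handle_mq()
                -> virtio_net_set_status()
                  -> vhost_net_stop()            /* recursive re-entry */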

Fixes: 402378407db ("vhost-vdpa: multiqueue support")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
 hw/net/virtio-net.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index f0bb29c..e263116 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1381,6 +1381,7 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     uint16_t queue_pairs;
+    NetClientState *nc = qemu_get_queue(n->nic);
 
     virtio_net_disable_rss(n);
     if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
@@ -1412,6 +1413,18 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         return VIRTIO_NET_ERR;
     }
 
+    /* Avoid changing the number of queue_pairs for a vdpa device in
+     * the userspace handler. A future fix is needed to handle the mq
+     * change in the userspace handler with vhost-vdpa. Let's disable
+     * the mq handling from userspace for now and only allow it to be
+     * done through the kernel. Ripples may be seen when falling back
+     * to userspace, but without this the qemu process would crash on
+     * a recursive entry to virtio_net_set_status().
+     */
+    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
+        return VIRTIO_NET_ERR;
+    }
+
     n->curr_queue_pairs = queue_pairs;
     /* stop the backend before changing the number of queue_pairs to avoid handling a
      * disabled queue */
-- 
1.8.3.1




* Re: [PATCH v3 2/6] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
  2022-05-06  4:54 ` [PATCH v3 2/6] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa Si-Wei Liu
@ 2022-05-06  7:35   ` Jason Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-05-06  7:35 UTC (permalink / raw)
  To: Si-Wei Liu; +Cc: qemu-devel, mst, eperezma, Stefano Garzarella, Eli Cohen

On Fri, May 6, 2022 at 12:54 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> With an MQ-enabled vdpa device and a guest that does not support MQ,
> e.g. booting vdpa with mq=on over OVMF with a single vqp, the
> assertion failure below is seen:
>
> ../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.
>
> 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
> 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
> 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
> 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
> 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
> 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
> 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
> 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
>    at ../hw/virtio/virtio-pci.c:974
> 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
> 9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
>    at ../hw/net/vhost_net.c:361
> 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
> 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
> 12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
> 13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
> 14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
>    at ../softmmu/memory.c:492
> 15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
> 16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
>    at ../softmmu/memory.c:1504
> 17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at /home/opc/qemu-upstream/include/qemu/host-utils.h:165
> 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
> 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
>    at ../softmmu/physmem.c:2914
> 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
>    attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
> 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
> 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
> 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
> 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
> 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
>
> The assertion fails because the vhost_dev index for the ctrl vq was
> not aligned with the one actually in use by the guest. Upon
> multiqueue feature negotiation in virtio_net_set_multiqueue(), if the
> guest doesn't support multiqueue, the guest vq layout shrinks to a
> single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
> This results in the ctrl_vq taking a different vhost_dev group index
> than the default. We can map a vq to the correct vhost_dev group by
> checking whether MQ is supported by the guest and successfully
> negotiated. Since the MQ feature is only offered along with CTRL_VQ,
> we ensure that index 2 refers to the control vq whenever MQ is not
> supported by the guest.
>
> Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
> Suggested-by: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>

Acked-by: Jason Wang <jasowang@redhat.com>


* Re: [PATCH v3 6/6] virtio-net: don't handle mq request in userspace handler for vhost-vdpa
  2022-05-06  4:54 ` [PATCH v3 6/6] virtio-net: don't handle mq request in userspace handler for vhost-vdpa Si-Wei Liu
@ 2022-05-06  7:35   ` Jason Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-05-06  7:35 UTC (permalink / raw)
  To: Si-Wei Liu; +Cc: qemu-devel, mst, eperezma, Stefano Garzarella, Eli Cohen

On Fri, May 6, 2022 at 12:55 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> In the vhost_net_stop() path, virtio_queue_host_notifier_read() reads
> pending events left behind on the ioeventfd and attempts to handle
> outstanding kicks with the userspace vq handler. However, in the
> ctrl_vq handler, virtio_net_handle_mq() recursively calls into
> virtio_net_set_status(), which may lead to a segmentation fault as
> shown in the stack trace below:
>
> 0  0x000055f800df1780 in qdev_get_parent_bus (dev=0x0) at ../hw/core/qdev.c:376
> 1  0x000055f800c68ad8 in virtio_bus_device_iommu_enabled (vdev=vdev@entry=0x0) at ../hw/virtio/virtio-bus.c:331
> 2  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>) at ../hw/virtio/vhost.c:318
> 3  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>, buffer=0x7fc19bec5240, len=2052, is_write=1, access_len=2052) at ../hw/virtio/vhost.c:336
> 4  0x000055f800d71867 in vhost_virtqueue_stop (dev=dev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590, vq=0x55f8037cceb0, idx=0) at ../hw/virtio/vhost.c:1241
> 5  0x000055f800d7406c in vhost_dev_stop (hdev=hdev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590) at ../hw/virtio/vhost.c:1839
> 6  0x000055f800bf00a7 in vhost_net_stop_one (net=0x55f8037ccc30, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:315
> 7  0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
>    at ../hw/net/vhost_net.c:423
> 8  0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
> 9  0x000055f800d4e628 in virtio_net_set_status (vdev=vdev@entry=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
> 10 0x000055f800d534d8 in virtio_net_handle_ctrl (iov_cnt=<optimized out>, iov=<optimized out>, cmd=0 '\000', n=0x55f8044ec590) at ../hw/net/virtio-net.c:1408
> 11 0x000055f800d534d8 in virtio_net_handle_ctrl (vdev=0x55f8044ec590, vq=0x7fc1a7e888d0) at ../hw/net/virtio-net.c:1452
> 12 0x000055f800d69f37 in virtio_queue_host_notifier_read (vq=0x7fc1a7e888d0) at ../hw/virtio/virtio.c:2331
> 13 0x000055f800d69f37 in virtio_queue_host_notifier_read (n=n@entry=0x7fc1a7e8894c) at ../hw/virtio/virtio.c:3575
> 14 0x000055f800c688e6 in virtio_bus_cleanup_host_notifier (bus=<optimized out>, n=n@entry=14) at ../hw/virtio/virtio-bus.c:312
> 15 0x000055f800d73106 in vhost_dev_disable_notifiers (hdev=hdev@entry=0x55f8035b51b0, vdev=vdev@entry=0x55f8044ec590)
>    at ../../../include/hw/virtio/virtio-bus.h:35
> 16 0x000055f800bf00b2 in vhost_net_stop_one (net=0x55f8035b51b0, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:316
> 17 0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
>    at ../hw/net/vhost_net.c:423
> 18 0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
> 19 0x000055f800d4e628 in virtio_net_set_status (vdev=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
> 20 0x000055f800d6c4b2 in virtio_set_status (vdev=0x55f8044ec590, val=<optimized out>) at ../hw/virtio/virtio.c:1945
> 21 0x000055f800d11d9d in vm_state_notify (running=running@entry=false, state=state@entry=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:333
> 22 0x000055f800d04e7a in do_vm_stop (state=state@entry=RUN_STATE_SHUTDOWN, send_stop=send_stop@entry=false) at ../softmmu/cpus.c:262
> 23 0x000055f800d04e99 in vm_shutdown () at ../softmmu/cpus.c:280
> 24 0x000055f800d126af in qemu_cleanup () at ../softmmu/runstate.c:812
> 25 0x000055f800ad5b13 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:51
>
> For now, temporarily disable handling the MQ request from the ctrl_vq
> userspace handler to avoid the recursive virtio_net_set_status()
> call. Some rework is needed to allow changing the number of queues
> without going through a full virtio_net_set_status cycle,
> particularly for the vhost-vdpa backend.
>
> This patch will need to be reverted as soon as future patches that
> handle the change of the number of queues in userspace are merged.
>
> Fixes: 402378407db ("vhost-vdpa: multiqueue support")
> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>

Acked-by: Jason Wang <jasowang@redhat.com>


* Re: [PATCH v3 5/6] vhost-vdpa: backend feature should set only once
  2022-05-06  4:54 ` [PATCH v3 5/6] vhost-vdpa: backend feature should set only once Si-Wei Liu
@ 2022-05-06 12:22   ` Stefano Garzarella
  0 siblings, 0 replies; 10+ messages in thread
From: Stefano Garzarella @ 2022-05-06 12:22 UTC (permalink / raw)
  To: Si-Wei Liu; +Cc: qemu-devel, jasowang, mst, eperezma, eli

On Thu, May 05, 2022 at 09:54:33PM -0700, Si-Wei Liu wrote:
>The vhost_vdpa_one_time_request() branch in
>vhost_vdpa_set_backend_cap() incorrectly sends down
>ioctls on vhost_dev's with a non-zero index. This may
>end up with multiple VHOST_SET_BACKEND_FEATURES
>ioctl calls sent down on the vhost-vdpa fd that is
>shared between all these vhost_dev's.
>
>To fix it, send down the ioctl only once, via the first
>vhost_dev with index 0. For better code readability,
>vhost_vdpa_one_time_request() is renamed to
>vhost_vdpa_first_dev() with its polarity flipped.

Sorry for being late, you may have already discussed this, but IMHO it 
would be better to split this patch into two:
- fix the problem in vhost_vdpa_set_backend_cap()
- change name and polarity of vhost_vdpa_one_time_request()

Thanks,
Stefano

