* [PATCH v2 0/5] vhost-vdpa multiqueue fixes
@ 2022-04-27  8:30 Si-Wei Liu
From: Si-Wei Liu @ 2022-04-27  8:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, jasowang, eperezma, eli, si-wei.liu, sgarzare

Hi,

This patch series attempts to fix a few issues in vhost-vdpa multiqueue functionality.

Patch #1 and #2 are the formal submission for RFC patch in:
https://lore.kernel.org/qemu-devel/c3e931ee-1a1b-9c2f-2f59-cb4395c230f9@oracle.com/

Patches #3 through #5 are small and straightforward bug fixes. Please find the
description of each in its commit log.

Patches #2 to #4 of v1 have been dropped from this series. They will be
reworked and posted separately later on.
https://lore.kernel.org/qemu-devel/CACGkMEuecMSCWdr0P9f=U-8wYAw3M05mmeRo+y6Zmt2TxCUNBg@mail.gmail.com/


Thanks,
-Siwei

---
v2:
  - split off vhost_dev notifier patch from "align ctrl_vq index for non-mq
    guest for vhost_vdpa"
  - change assert to error message
  - rename vhost_vdpa_one_time_request to vhost_vdpa_first_dev for clarity

Si-Wei Liu (5):
  virtio-net: setup vhost_dev and notifiers for cvq only when feature is
    negotiated
  virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
  vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
  vhost-net: fix improper cleanup in vhost_net_start
  vhost-vdpa: backend feature should set only once

 hw/net/vhost_net.c     |  4 +++-
 hw/net/virtio-net.c    | 25 ++++++++++++++++++++++---
 hw/virtio/vhost-vdpa.c | 23 +++++++++++++++--------
 net/vhost-vdpa.c       |  4 +++-
 4 files changed, 43 insertions(+), 13 deletions(-)

-- 
1.8.3.1




* [PATCH v2 1/5] virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated
From: Si-Wei Liu @ 2022-04-27  8:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, jasowang, eperezma, eli, si-wei.liu, sgarzare

When the control virtqueue feature is absent or not negotiated,
vhost_net_start() still tries to set up vhost_dev and install
vhost notifiers for the control virtqueue, which results in
erroneous ioctl calls with an incorrect queue index being sent
down to the driver. Do that only when needed.

Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
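
Not part of the patch proper, but a standalone sketch of the intended
behaviour: cvq stays 0 unless VIRTIO_NET_F_CTRL_VQ was actually negotiated,
so no vhost_dev or notifier is set up for a control queue the guest never
asked for. has_feature() below is a stand-in for virtio_vdev_has_feature(),
and the max_ncs/max_queue_pairs values are illustrative only.

/* Standalone sketch: how the negotiated CTRL_VQ feature gates cvq. */
#include <stdbool.h>
#include <stdio.h>

#define VIRTIO_NET_F_CTRL_VQ 17   /* feature bit per the virtio spec */

static bool has_feature(unsigned long long negotiated, int bit)
{
    return negotiated & (1ULL << bit);
}

int main(void)
{
    int max_ncs = 9, max_queue_pairs = 8;   /* mq=on device: 8 pairs + cvq */
    unsigned long long negotiated = 0;      /* guest without CTRL_VQ */

    int cvq = has_feature(negotiated, VIRTIO_NET_F_CTRL_VQ) ?
              max_ncs - max_queue_pairs : 0;
    printf("cvq=%d\n", cvq);   /* 0: skip vhost setup for the control vq */

    negotiated |= 1ULL << VIRTIO_NET_F_CTRL_VQ;
    cvq = has_feature(negotiated, VIRTIO_NET_F_CTRL_VQ) ?
          max_ncs - max_queue_pairs : 0;
    printf("cvq=%d\n", cvq);   /* 1: control vq handled by vhost */
    return 0;
}
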
 hw/net/virtio-net.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 1067e72..ffb3475 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -245,7 +245,8 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
     int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
-    int cvq = n->max_ncs - n->max_queue_pairs;
+    int cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
+              n->max_ncs - n->max_queue_pairs : 0;
 
     if (!get_vhost_net(nc->peer)) {
         return;
-- 
1.8.3.1




* [PATCH v2 2/5] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
From: Si-Wei Liu @ 2022-04-27  8:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, jasowang, eperezma, eli, si-wei.liu, sgarzare

With an MQ-enabled vdpa device and a guest that does not support MQ,
e.g. booting vdpa with mq=on over OVMF with a single vqp, the assert
failure below is seen:

../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.

0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
   at ../hw/virtio/virtio-pci.c:974
8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
   at ../hw/net/vhost_net.c:361
10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
   at ../softmmu/memory.c:492
15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
   at ../softmmu/memory.c:1504
17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at /home/opc/qemu-upstream/include/qemu/host-utils.h:165
18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
   at ../softmmu/physmem.c:2914
20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
   attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6

The cause of the assert failure is that the vhost_dev index for the
ctrl vq was not aligned with the actual one in use by the guest.
Upon multiqueue feature negotiation in virtio_net_set_multiqueue(),
if the guest doesn't support multiqueue, the guest vq layout shrinks
to a single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
This results in the ctrl_vq taking a different vhost_dev group index
than the default. We can map the vq to the correct vhost_dev group by
checking whether MQ is supported by the guest and successfully
negotiated. Since the MQ feature is only present along with CTRL_VQ,
we make sure index 2 is only meant for the control vq when MQ is not
supported by the guest.

Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
Suggested-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
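
Not part of the patch itself, but a standalone sketch of the index mismatch
being fixed, assuming vq2q() divides the virtqueue index by two as in
hw/net/virtio-net.c; the max_queue_pairs value is illustrative only.

/* For a non-MQ guest the virtqueue layout is {0: rx, 1: tx, 2: ctrl},
 * but vq2q(2) picks subqueue 1, which on an mq=on device is a data
 * queue pair backed by a different vhost_dev.  The control vq's net
 * client actually lives at index max_queue_pairs.
 */
#include <stdio.h>

static int vq2q(int queue_index)
{
    return queue_index / 2;
}

int main(void)
{
    int max_queue_pairs = 8;   /* mq=on vdpa device */
    int guest_has_mq = 0;      /* VIRTIO_NET_F_MQ not negotiated */
    int idx = 2;               /* guest's control virtqueue index */

    int naive = vq2q(idx);
    int aligned = (!guest_has_mq && idx == 2) ? max_queue_pairs : vq2q(idx);

    printf("naive subqueue %d, aligned subqueue %d\n", naive, aligned);
    return 0;
}
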
 hw/net/virtio-net.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index ffb3475..8ca0b80 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3171,8 +3171,17 @@ static NetClientInfo net_virtio_info = {
 static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
 {
     VirtIONet *n = VIRTIO_NET(vdev);
-    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    NetClientState *nc;
     assert(n->vhost_started);
+    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
+        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
+           error_report("virtio-net: bogus vq index ignored");
+           return false;
+        }
+        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
+    } else {
+        nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    }
     return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
 }
 
@@ -3180,8 +3189,17 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
                                            bool mask)
 {
     VirtIONet *n = VIRTIO_NET(vdev);
-    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    NetClientState *nc;
     assert(n->vhost_started);
+    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
+        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
+           error_report("virtio-net: bogus vq index ignored");
+           return;
+        }
+        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
+    } else {
+        nc = qemu_get_subqueue(n->nic, vq2q(idx));
+    }
     vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
                              vdev, idx, mask);
 }
-- 
1.8.3.1




* [PATCH v2 3/5] vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
From: Si-Wei Liu @ 2022-04-27  8:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, jasowang, eperezma, eli, si-wei.liu, sgarzare

... so that dangling net clients do not leak memory in case of
error.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 1e9fe47..df1e69e 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -306,7 +306,9 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
 
 err:
     if (i) {
-        qemu_del_net_client(ncs[0]);
+        for (i--; i >= 0; i--) {
+            qemu_del_net_client(ncs[i]);
+        }
     }
     qemu_close(vdpa_device_fd);
 
-- 
1.8.3.1




* [PATCH v2 4/5] vhost-net: fix improper cleanup in vhost_net_start
From: Si-Wei Liu @ 2022-04-27  8:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, jasowang, eperezma, eli, si-wei.liu, sgarzare

vhost_net_start() missed a corresponding vhost_net_stop_one() upon error
from vhost_set_vring_enable(). While at it, make the error handling for
err_start more robust. No real issue was found due to this, though.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
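
As a standalone sketch of the err_start indexing (not part of the patch):
net clients for the data queue pairs sit at indexes 0..data_queue_pairs-1,
while the control-vq net client sits at index max_queue_pairs, which the
unwind loop has to account for. The values below are illustrative only.

#include <stdio.h>

int main(void)
{
    int data_queue_pairs = 1;             /* non-MQ guest */
    int max_queue_pairs = 8;              /* mq=on device */
    int nvhosts = data_queue_pairs + 1;   /* data pairs plus cvq */
    int i = nvhosts;                      /* pretend setup failed here */

    /* Mirrors the err_start loop: pick the right peer for each vhost. */
    while (--i >= 0) {
        int peer_index = i < data_queue_pairs ? i : max_queue_pairs;
        printf("stop vhost %d -> ncs[%d]\n", i, peer_index);
    }
    return 0;
}
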
 hw/net/vhost_net.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 30379d2..d6d7c51 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -381,6 +381,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
             r = vhost_set_vring_enable(peer, peer->vring_enable);
 
             if (r < 0) {
+                vhost_net_stop_one(get_vhost_net(peer), dev);
                 goto err_start;
             }
         }
@@ -390,7 +391,8 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
 
 err_start:
     while (--i >= 0) {
-        peer = qemu_get_peer(ncs , i);
+        peer = qemu_get_peer(ncs, i < data_queue_pairs ?
+                                  i : n->max_queue_pairs);
         vhost_net_stop_one(get_vhost_net(peer), dev);
     }
     e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
-- 
1.8.3.1




* [PATCH v2 5/5] vhost-vdpa: backend feature should set only once
From: Si-Wei Liu @ 2022-04-27  8:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, jasowang, eperezma, eli, si-wei.liu, sgarzare

The vhost_vdpa_one_time_request() branch in
vhost_vdpa_set_backend_cap() incorrectly sends down
ioctls on vhost_devs with a non-zero index. This may
end up with multiple VHOST_SET_BACKEND_FEATURES
ioctl calls sent down on the vhost-vdpa fd that is
shared between all of these vhost_devs.

To fix it, send down the ioctl only once, via the
first vhost_dev with index 0. For better code
readability, vhost_vdpa_one_time_request() is renamed
to vhost_vdpa_first_dev() with its polarity flipped.
This helper only applies to requests performed before
the queues are set up, usually at the beginning of
operation. Document that requirement in place.

Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
---
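
Not part of this patch: the new comment mentions that requests which must
run only after all queues are set up would need to check dev->vq_index_end
instead. A rough sketch of what such a check could look like, using a
minimal stand-in for the relevant struct vhost_dev fields (field names
assumed from include/hw/virtio/vhost.h):

#include <stdbool.h>
#include <stdio.h>

/* Minimal stand-in for the fields of QEMU's struct vhost_dev we need. */
struct vhost_dev {
    int vq_index;       /* first vq handled by this vhost_dev */
    int nvqs;           /* number of vqs handled by this vhost_dev */
    int vq_index_end;   /* one past the last vq of the whole device */
};

/* Sketch: true only for the vhost_dev that covers the last virtqueue. */
static bool vhost_vdpa_last_dev(struct vhost_dev *dev)
{
    return dev->vq_index + dev->nvqs == dev->vq_index_end;
}

int main(void)
{
    struct vhost_dev first = { .vq_index = 0,  .nvqs = 2, .vq_index_end = 18 };
    struct vhost_dev last  = { .vq_index = 16, .nvqs = 2, .vq_index_end = 18 };

    printf("first: %d, last: %d\n",
           vhost_vdpa_last_dev(&first), vhost_vdpa_last_dev(&last));
    return 0;
}
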
 hw/virtio/vhost-vdpa.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 8adf7c0..fd1268e 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -366,11 +366,18 @@ static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
                                     v->iova_range.last);
 }
 
-static bool vhost_vdpa_one_time_request(struct vhost_dev *dev)
+/*
+ * The use of this function is for requests that only need to be
+ * applied once. Typically such a request occurs at the beginning
+ * of operation, and before setting up queues. It should not be
+ * used for a request that has to be applied after all queues are
+ * set, which would need to check dev->vq_index_end instead.
+ */
+static bool vhost_vdpa_first_dev(struct vhost_dev *dev)
 {
     struct vhost_vdpa *v = dev->opaque;
 
-    return v->index != 0;
+    return v->index == 0;
 }
 
 static int vhost_vdpa_get_dev_features(struct vhost_dev *dev,
@@ -451,7 +458,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
 
     vhost_vdpa_get_iova_range(v);
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -594,7 +601,7 @@ static int vhost_vdpa_memslots_limit(struct vhost_dev *dev)
 static int vhost_vdpa_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -623,7 +630,7 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
     struct vhost_vdpa *v = dev->opaque;
     int ret;
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -665,7 +672,7 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
 
     features &= f;
 
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (vhost_vdpa_first_dev(dev)) {
         r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
         if (r) {
             return -EFAULT;
@@ -1118,7 +1125,7 @@ static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
                                      struct vhost_log *log)
 {
     struct vhost_vdpa *v = dev->opaque;
-    if (v->shadow_vqs_enabled || vhost_vdpa_one_time_request(dev)) {
+    if (v->shadow_vqs_enabled || !vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
@@ -1240,7 +1247,7 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
 
 static int vhost_vdpa_set_owner(struct vhost_dev *dev)
 {
-    if (vhost_vdpa_one_time_request(dev)) {
+    if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
-- 
1.8.3.1




* Re: [PATCH v2 1/5] virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated
From: Jason Wang @ 2022-04-29  2:14 UTC (permalink / raw)
  To: Si-Wei Liu, qemu-devel; +Cc: eperezma, sgarzare, eli, mst


On 2022/4/27 16:30, Si-Wei Liu wrote:
> When the control virtqueue feature is absent or not negotiated,
> vhost_net_start() still tries to set up vhost_dev and install
> vhost notifiers for the control virtqueue, which results in
> erroneous ioctl calls with incorrect queue index sending down
> to driver. Do that only when needed.
>
> Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   hw/net/virtio-net.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 1067e72..ffb3475 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -245,7 +245,8 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>       VirtIODevice *vdev = VIRTIO_DEVICE(n);
>       NetClientState *nc = qemu_get_queue(n->nic);
>       int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
> -    int cvq = n->max_ncs - n->max_queue_pairs;
> +    int cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
> +              n->max_ncs - n->max_queue_pairs : 0;
>   
>       if (!get_vhost_net(nc->peer)) {
>           return;




* Re: [PATCH v2 2/5] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
From: Jason Wang @ 2022-04-29  2:23 UTC (permalink / raw)
  To: Si-Wei Liu, qemu-devel; +Cc: eperezma, sgarzare, eli, mst


On 2022/4/27 16:30, Si-Wei Liu wrote:
> With MQ enabled vdpa device and non-MQ supporting guest e.g.
> booting vdpa with mq=on over OVMF of single vqp, below assert
> failure is seen:
>
> ../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.
>
> 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
> 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
> 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
> 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
> 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
> 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
> 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
> 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
>     at ../hw/virtio/virtio-pci.c:974
> 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
> 9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
>     at ../hw/net/vhost_net.c:361
> 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
> 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
> 12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
> 13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
> 14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
>     at ../softmmu/memory.c:492
> 15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
> 16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
>     at ../softmmu/memory.c:1504
> 17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at /home/opc/qemu-upstream/include/qemu/host-utils.h:165
> 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
> 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
>     at ../softmmu/physmem.c:2914
> 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
>     attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
> 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
> 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
> 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
> 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
> 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
>
> The cause for the assert failure is due to that the vhost_dev index
> for the ctrl vq was not aligned with actual one in use by the guest.
> Upon multiqueue feature negotiation in virtio_net_set_multiqueue(),
> if guest doesn't support multiqueue, the guest vq layout would shrink
> to a single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
> This results in ctrl_vq taking a different vhost_dev group index than
> the default. We can map vq to the correct vhost_dev group by checking
> if MQ is supported by guest and successfully negotiated. Since the
> MQ feature is only present along with CTRL_VQ, we make sure the index
> 2 is only meant for the control vq while MQ is not supported by guest.
>
> Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
> Suggested-by: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
> ---
>   hw/net/virtio-net.c | 22 ++++++++++++++++++++--
>   1 file changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index ffb3475..8ca0b80 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3171,8 +3171,17 @@ static NetClientInfo net_virtio_info = {
>   static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
>   {
>       VirtIONet *n = VIRTIO_NET(vdev);
> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
> +    NetClientState *nc;
>       assert(n->vhost_started);
> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
> +           error_report("virtio-net: bogus vq index ignored");


This seems trigger-able by guest.

Other looks good.

Thanks


> +           return false;
> +        }
> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
> +    } else {
> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
> +    }
>       return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
>   }
>   
> @@ -3180,8 +3189,17 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
>                                              bool mask)
>   {
>       VirtIONet *n = VIRTIO_NET(vdev);
> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
> +    NetClientState *nc;
>       assert(n->vhost_started);
> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
> +           error_report("virtio-net: bogus vq index ignored");
> +           return;
> +        }
> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
> +    } else {
> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
> +    }
>       vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
>                                vdev, idx, mask);
>   }




* Re: [PATCH v2 2/5] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
From: Jason Wang @ 2022-04-29  2:24 UTC (permalink / raw)
  To: Si-Wei Liu, qemu-devel; +Cc: eperezma, Stefano Garzarella, Eli Cohen, mst

On Fri, Apr 29, 2022 at 10:24 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/4/27 16:30, Si-Wei Liu wrote:
> > With MQ enabled vdpa device and non-MQ supporting guest e.g.
> > booting vdpa with mq=on over OVMF of single vqp, below assert
> > failure is seen:
> >
> > ../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.
> >
> > 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
> > 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
> > 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
> > 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
> > 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
> > 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
> > 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
> > 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
> >     at ../hw/virtio/virtio-pci.c:974
> > 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
> > 9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
> >     at ../hw/net/vhost_net.c:361
> > 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
> > 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
> > 12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
> > 13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
> > 14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
> >     at ../softmmu/memory.c:492
> > 15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
> > 16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
> >     at ../softmmu/memory.c:1504
> > 17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at /home/opc/qemu-upstream/include/qemu/host-utils.h:165
> > 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
> > 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
> >     at ../softmmu/physmem.c:2914
> > 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
> >     attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
> > 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
> > 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
> > 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
> > 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
> > 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
> >
> > The cause for the assert failure is due to that the vhost_dev index
> > for the ctrl vq was not aligned with actual one in use by the guest.
> > Upon multiqueue feature negotiation in virtio_net_set_multiqueue(),
> > if guest doesn't support multiqueue, the guest vq layout would shrink
> > to a single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
> > This results in ctrl_vq taking a different vhost_dev group index than
> > the default. We can map vq to the correct vhost_dev group by checking
> > if MQ is supported by guest and successfully negotiated. Since the
> > MQ feature is only present along with CTRL_VQ, we make sure the index
> > 2 is only meant for the control vq while MQ is not supported by guest.
> >
> > Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
> > Suggested-by: Jason Wang <jasowang@redhat.com>
> > Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
> > ---
> >   hw/net/virtio-net.c | 22 ++++++++++++++++++++--
> >   1 file changed, 20 insertions(+), 2 deletions(-)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index ffb3475..8ca0b80 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -3171,8 +3171,17 @@ static NetClientInfo net_virtio_info = {
> >   static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
> >   {
> >       VirtIONet *n = VIRTIO_NET(vdev);
> > -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
> > +    NetClientState *nc;
> >       assert(n->vhost_started);
> > +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
> > +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
> > +           error_report("virtio-net: bogus vq index ignored");
>
>
> This seems trigger-able by guest.
>
> Other looks good.

Btw, it would be better to add a comment to explain here.

Thanks

>
> Thanks
>
>
> > +           return false;
> > +        }
> > +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
> > +    } else {
> > +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
> > +    }
> >       return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
> >   }
> >
> > @@ -3180,8 +3189,17 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> >                                              bool mask)
> >   {
> >       VirtIONet *n = VIRTIO_NET(vdev);
> > -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
> > +    NetClientState *nc;
> >       assert(n->vhost_started);
> > +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
> > +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
> > +           error_report("virtio-net: bogus vq index ignored");
> > +           return;
> > +        }
> > +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
> > +    } else {
> > +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
> > +    }
> >       vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
> >                                vdev, idx, mask);
> >   }




* Re: [PATCH v2 2/5] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
From: Si-Wei Liu @ 2022-04-30  1:12 UTC (permalink / raw)
  To: Jason Wang, qemu-devel; +Cc: eperezma, sgarzare, eli, mst



On 4/28/2022 7:23 PM, Jason Wang wrote:
>
> On 2022/4/27 16:30, Si-Wei Liu wrote:
>> With MQ enabled vdpa device and non-MQ supporting guest e.g.
>> booting vdpa with mq=on over OVMF of single vqp, below assert
>> failure is seen:
>>
>> ../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion 
>> `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.
>>
>> 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
>> 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
>> 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
>> 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
>> 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized 
>> out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
>> 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized 
>> out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
>> 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, 
>> vdev=0x558f568f91f0, n=2, mask=<optimized out>) at 
>> ../hw/virtio/vhost.c:1557
>> 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier 
>> (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, 
>> with_irqfd=with_irqfd@entry=false)
>>     at ../hw/virtio/virtio-pci.c:974
>> 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers 
>> (d=0x558f568f0f60, nvqs=3, assign=true) at 
>> ../hw/virtio/virtio-pci.c:1019
>> 9  0x0000558f52bf091d in vhost_net_start 
>> (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, 
>> data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
>>     at ../hw/net/vhost_net.c:361
>> 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized 
>> out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
>> 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, 
>> status=15 '\017') at ../hw/net/virtio-net.c:370
>> 12 0x0000558f52d6c4b2 in virtio_set_status 
>> (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at 
>> ../hw/virtio/virtio.c:1945
>> 13 0x0000558f52c69eff in virtio_pci_common_write 
>> (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, 
>> size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
>> 14 0x0000558f52d15d6e in memory_region_write_accessor 
>> (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, 
>> shift=<optimized out>, mask=<optimized out>, attrs=...)
>>     at ../softmmu/memory.c:492
>> 15 0x0000558f52d127de in access_with_adjusted_size 
>> (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, 
>> size=size@entry=1, access_size_min=<optimized out>, 
>> access_size_max=<optimized out>, access_fn=0x558f52d15cf0 
>> <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at 
>> ../softmmu/memory.c:554
>> 16 0x0000558f52d157ef in memory_region_dispatch_write 
>> (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, 
>> op=<optimized out>, attrs=attrs@entry=...)
>>     at ../softmmu/memory.c:1504
>> 17 0x0000558f52d078e7 in flatview_write_continue 
>> (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., 
>> ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, 
>> l=<optimized out>, mr=0x558f568f19d0) at 
>> /home/opc/qemu-upstream/include/qemu/host-utils.h:165
>> 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, 
>> addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at 
>> ../softmmu/physmem.c:2822
>> 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, 
>> addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, 
>> len=<optimized out>)
>>     at ../softmmu/physmem.c:2914
>> 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, 
>> addr=<optimized out>, attrs=...,
>>     attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized 
>> out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
>> 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) 
>> at ../accel/kvm/kvm-all.c:2903
>> 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn 
>> (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
>> 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at 
>> ../util/qemu-thread-posix.c:556
>> 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
>> 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
>>
>> The cause for the assert failure is due to that the vhost_dev index
>> for the ctrl vq was not aligned with actual one in use by the guest.
>> Upon multiqueue feature negotiation in virtio_net_set_multiqueue(),
>> if guest doesn't support multiqueue, the guest vq layout would shrink
>> to a single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
>> This results in ctrl_vq taking a different vhost_dev group index than
>> the default. We can map vq to the correct vhost_dev group by checking
>> if MQ is supported by guest and successfully negotiated. Since the
>> MQ feature is only present along with CTRL_VQ, we make sure the index
>> 2 is only meant for the control vq while MQ is not supported by guest.
>>
>> Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
>> Suggested-by: Jason Wang <jasowang@redhat.com>
>> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
>> ---
>>   hw/net/virtio-net.c | 22 ++++++++++++++++++++--
>>   1 file changed, 20 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index ffb3475..8ca0b80 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -3171,8 +3171,17 @@ static NetClientInfo net_virtio_info = {
>>   static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, 
>> int idx)
>>   {
>>       VirtIONet *n = VIRTIO_NET(vdev);
>> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
>> +    NetClientState *nc;
>>       assert(n->vhost_started);
>> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
>> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
>> +           error_report("virtio-net: bogus vq index ignored");
>
>
> This seems trigger-able by guest.
Yes, this is trigger-able by either a buggy guest or a buggy migration
flow (which could be due to a buggy remote QEMU). I was not sure whether
using LOG_GUEST_ERROR would be too definitive, and it doesn't seem to be
the convention to log guest errors in this file. What's your preference
here: switch to LOG_GUEST_ERROR, or simply drop the error message?

Thanks,
-Siwei

>
> Other looks good.
>
> Thanks
>
>
>> +           return false;
>> +        }
>> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
>> +    } else {
>> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
>> +    }
>>       return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
>>   }
>>   @@ -3180,8 +3189,17 @@ static void 
>> virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
>>                                              bool mask)
>>   {
>>       VirtIONet *n = VIRTIO_NET(vdev);
>> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
>> +    NetClientState *nc;
>>       assert(n->vhost_started);
>> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
>> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
>> +           error_report("virtio-net: bogus vq index ignored");
>> +           return;
>> +        }
>> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
>> +    } else {
>> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
>> +    }
>>       vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
>>                                vdev, idx, mask);
>>   }
>




* Re: [PATCH v2 2/5] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
From: Si-Wei Liu @ 2022-04-30  1:13 UTC (permalink / raw)
  To: Jason Wang, qemu-devel; +Cc: eperezma, Stefano Garzarella, Eli Cohen, mst



On 4/28/2022 7:24 PM, Jason Wang wrote:
> On Fri, Apr 29, 2022 at 10:24 AM Jason Wang <jasowang@redhat.com> wrote:
>>
>> On 2022/4/27 16:30, Si-Wei Liu wrote:
>>> With MQ enabled vdpa device and non-MQ supporting guest e.g.
>>> booting vdpa with mq=on over OVMF of single vqp, below assert
>>> failure is seen:
>>>
>>> ../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.
>>>
>>> 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
>>> 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
>>> 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
>>> 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
>>> 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
>>> 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
>>> 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
>>> 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
>>>      at ../hw/virtio/virtio-pci.c:974
>>> 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
>>> 9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
>>>      at ../hw/net/vhost_net.c:361
>>> 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
>>> 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
>>> 12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
>>> 13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
>>> 14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
>>>      at ../softmmu/memory.c:492
>>> 15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
>>> 16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
>>>      at ../softmmu/memory.c:1504
>>> 17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at /home/opc/qemu-upstream/include/qemu/host-utils.h:165
>>> 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
>>> 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
>>>      at ../softmmu/physmem.c:2914
>>> 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
>>>      attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
>>> 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
>>> 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
>>> 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
>>> 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
>>> 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
>>>
>>> The cause for the assert failure is due to that the vhost_dev index
>>> for the ctrl vq was not aligned with actual one in use by the guest.
>>> Upon multiqueue feature negotiation in virtio_net_set_multiqueue(),
>>> if guest doesn't support multiqueue, the guest vq layout would shrink
>>> to a single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
>>> This results in ctrl_vq taking a different vhost_dev group index than
>>> the default. We can map vq to the correct vhost_dev group by checking
>>> if MQ is supported by guest and successfully negotiated. Since the
>>> MQ feature is only present along with CTRL_VQ, we make sure the index
>>> 2 is only meant for the control vq while MQ is not supported by guest.
>>>
>>> Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
>>> Suggested-by: Jason Wang <jasowang@redhat.com>
>>> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
>>> ---
>>>    hw/net/virtio-net.c | 22 ++++++++++++++++++++--
>>>    1 file changed, 20 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>> index ffb3475..8ca0b80 100644
>>> --- a/hw/net/virtio-net.c
>>> +++ b/hw/net/virtio-net.c
>>> @@ -3171,8 +3171,17 @@ static NetClientInfo net_virtio_info = {
>>>    static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
>>>    {
>>>        VirtIONet *n = VIRTIO_NET(vdev);
>>> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
>>> +    NetClientState *nc;
>>>        assert(n->vhost_started);
>>> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
>>> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
>>> +           error_report("virtio-net: bogus vq index ignored");
>>
>> This seems trigger-able by guest.
>>
>> Other looks good.
> Btw, it would be better to add a comment to explain here.
Yep, will add.

-Siwei

>
> Thanks
>
>> Thanks
>>
>>
>>> +           return false;
>>> +        }
>>> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
>>> +    } else {
>>> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
>>> +    }
>>>        return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
>>>    }
>>>
>>> @@ -3180,8 +3189,17 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
>>>                                               bool mask)
>>>    {
>>>        VirtIONet *n = VIRTIO_NET(vdev);
>>> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
>>> +    NetClientState *nc;
>>>        assert(n->vhost_started);
>>> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
>>> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
>>> +           error_report("virtio-net: bogus vq index ignored");
>>> +           return;
>>> +        }
>>> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
>>> +    } else {
>>> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
>>> +    }
>>>        vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
>>>                                 vdev, idx, mask);
>>>    }




* Re: [PATCH v2 2/5] virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
From: Jason Wang @ 2022-05-05  8:25 UTC (permalink / raw)
  To: Si-Wei Liu; +Cc: qemu-devel, mst, eperezma, Stefano Garzarella, Eli Cohen

On Sat, Apr 30, 2022 at 9:13 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
>
>
> On 4/28/2022 7:23 PM, Jason Wang wrote:
> >
> > On 2022/4/27 16:30, Si-Wei Liu wrote:
> >> With MQ enabled vdpa device and non-MQ supporting guest e.g.
> >> booting vdpa with mq=on over OVMF of single vqp, below assert
> >> failure is seen:
> >>
> >> ../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion
> >> `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.
> >>
> >> 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
> >> 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
> >> 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
> >> 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
> >> 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized
> >> out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
> >> 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized
> >> out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
> >> 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800,
> >> vdev=0x558f568f91f0, n=2, mask=<optimized out>) at
> >> ../hw/virtio/vhost.c:1557
> >> 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier
> >> (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true,
> >> with_irqfd=with_irqfd@entry=false)
> >>     at ../hw/virtio/virtio-pci.c:974
> >> 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers
> >> (d=0x558f568f0f60, nvqs=3, assign=true) at
> >> ../hw/virtio/virtio-pci.c:1019
> >> 9  0x0000558f52bf091d in vhost_net_start
> >> (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0,
> >> data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
> >>     at ../hw/net/vhost_net.c:361
> >> 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized
> >> out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
> >> 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0,
> >> status=15 '\017') at ../hw/net/virtio-net.c:370
> >> 12 0x0000558f52d6c4b2 in virtio_set_status
> >> (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at
> >> ../hw/virtio/virtio.c:1945
> >> 13 0x0000558f52c69eff in virtio_pci_common_write
> >> (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>,
> >> size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
> >> 14 0x0000558f52d15d6e in memory_region_write_accessor
> >> (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1,
> >> shift=<optimized out>, mask=<optimized out>, attrs=...)
> >>     at ../softmmu/memory.c:492
> >> 15 0x0000558f52d127de in access_with_adjusted_size
> >> (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748,
> >> size=size@entry=1, access_size_min=<optimized out>,
> >> access_size_max=<optimized out>, access_fn=0x558f52d15cf0
> >> <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at
> >> ../softmmu/memory.c:554
> >> 16 0x0000558f52d157ef in memory_region_dispatch_write
> >> (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>,
> >> op=<optimized out>, attrs=attrs@entry=...)
> >>     at ../softmmu/memory.c:1504
> >> 17 0x0000558f52d078e7 in flatview_write_continue
> >> (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=...,
> >> ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>,
> >> l=<optimized out>, mr=0x558f568f19d0) at
> >> /home/opc/qemu-upstream/include/qemu/host-utils.h:165
> >> 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90,
> >> addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at
> >> ../softmmu/physmem.c:2822
> >> 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>,
> >> addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028,
> >> len=<optimized out>)
> >>     at ../softmmu/physmem.c:2914
> >> 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>,
> >> addr=<optimized out>, attrs=...,
> >>     attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized
> >> out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
> >> 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60)
> >> at ../accel/kvm/kvm-all.c:2903
> >> 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn
> >> (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
> >> 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at
> >> ../util/qemu-thread-posix.c:556
> >> 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
> >> 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
> >>
> >> The cause for the assert failure is due to that the vhost_dev index
> >> for the ctrl vq was not aligned with actual one in use by the guest.
> >> Upon multiqueue feature negotiation in virtio_net_set_multiqueue(),
> >> if guest doesn't support multiqueue, the guest vq layout would shrink
> >> to a single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
> >> This results in ctrl_vq taking a different vhost_dev group index than
> >> the default. We can map vq to the correct vhost_dev group by checking
> >> if MQ is supported by guest and successfully negotiated. Since the
> >> MQ feature is only present along with CTRL_VQ, we make sure the index
> >> 2 is only meant for the control vq while MQ is not supported by guest.
> >>
> >> Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
> >> Suggested-by: Jason Wang <jasowang@redhat.com>
> >> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
> >> ---
> >>   hw/net/virtio-net.c | 22 ++++++++++++++++++++--
> >>   1 file changed, 20 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >> index ffb3475..8ca0b80 100644
> >> --- a/hw/net/virtio-net.c
> >> +++ b/hw/net/virtio-net.c
> >> @@ -3171,8 +3171,17 @@ static NetClientInfo net_virtio_info = {
> >>   static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev,
> >> int idx)
> >>   {
> >>       VirtIONet *n = VIRTIO_NET(vdev);
> >> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
> >> +    NetClientState *nc;
> >>       assert(n->vhost_started);
> >> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
> >> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
> >> +           error_report("virtio-net: bogus vq index ignored");
> >
> >
> > This seems trigger-able by guest.
> Yes, this is trigger-able by either buggy guest or buggy migration flow
> (could be due to remote buggy QEMU). I was not sure if it'll be too
> determined to use LOG_GUEST_ERROR, and doesn't seem it's the convention
> to log guest error in the same file. What's your preference here, switch
> to LOG_GUEST_ERROR, or simply drop the error message?

I think it's better to use LOG_GUEST_ERROR here.
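
A sketch of what that could look like (qemu_log_mask() and LOG_GUEST_ERROR
come from "qemu/log.h"; the message text is only illustrative, not part of
the posted patch):

/* Sketch only: replace error_report() with a guest-error log. */
qemu_log_mask(LOG_GUEST_ERROR,
              "%s: bogus vq index %d for a non-multiqueue guest\n",
              __func__, idx);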

Thanks

>
> Thanks,
> -Siwei
>
> >
> > Other looks good.
> >
> > Thanks
> >
> >
> >> +           return false;
> >> +        }
> >> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
> >> +    } else {
> >> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
> >> +    }
> >>       return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
> >>   }
> >>   @@ -3180,8 +3189,17 @@ static void
> >> virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> >>                                              bool mask)
> >>   {
> >>       VirtIONet *n = VIRTIO_NET(vdev);
> >> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
> >> +    NetClientState *nc;
> >>       assert(n->vhost_started);
> >> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
> >> +        if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
> >> +           error_report("virtio-net: bogus vq index ignored");
> >> +           return;
> >> +        }
> >> +        nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
> >> +    } else {
> >> +        nc = qemu_get_subqueue(n->nic, vq2q(idx));
> >> +    }
> >>       vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
> >>                                vdev, idx, mask);
> >>   }
> >
>



