From: Jason Wang <jasowang@redhat.com>
To: qemu-devel@nongnu.org, peter.maydell@linaro.org
Cc: "Eugenio Pérez" <eperezma@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>
Subject: [PULL V2 07/25] vhost: Check for queue full at vhost_svq_add
Date: Wed, 20 Jul 2022 17:02:55 +0800	[thread overview]
Message-ID: <20220720090313.55169-8-jasowang@redhat.com> (raw)
In-Reply-To: <20220720090313.55169-1-jasowang@redhat.com>

From: Eugenio Pérez <eperezma@redhat.com>

The series needs to expose vhost_svq_add with its full functionality,
including the check for a full queue.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 59 +++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 26 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e272c33..11302ea 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -233,21 +233,29 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
  * Add an element to a SVQ.
  *
  * The caller must check that there is enough slots for the new element. It
- * takes ownership of the element: In case of failure, it is free and the SVQ
- * is considered broken.
+ * takes ownership of the element: on failures other than -ENOSPC it is freed.
+ *
+ * Return 0 on success, -EINVAL if the element is invalid, -ENOSPC if full
  */
-static bool vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
+static int vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
 {
     unsigned qemu_head;
-    bool ok = vhost_svq_add_split(svq, elem, &qemu_head);
+    unsigned ndescs = elem->in_num + elem->out_num;
+    bool ok;
+
+    if (unlikely(ndescs > vhost_svq_available_slots(svq))) {
+        return -ENOSPC;
+    }
+
+    ok = vhost_svq_add_split(svq, elem, &qemu_head);
     if (unlikely(!ok)) {
         g_free(elem);
-        return false;
+        return -EINVAL;
     }
 
     svq->ring_id_maps[qemu_head] = elem;
     vhost_svq_kick(svq);
-    return true;
+    return 0;
 }
 
 /**
@@ -274,7 +282,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
 
         while (true) {
             VirtQueueElement *elem;
-            bool ok;
+            int r;
 
             if (svq->next_guest_avail_elem) {
                 elem = g_steal_pointer(&svq->next_guest_avail_elem);
@@ -286,25 +294,24 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
                 break;
             }
 
-            if (elem->out_num + elem->in_num > vhost_svq_available_slots(svq)) {
-                /*
-                 * This condition is possible since a contiguous buffer in GPA
-                 * does not imply a contiguous buffer in qemu's VA
-                 * scatter-gather segments. If that happens, the buffer exposed
-                 * to the device needs to be a chain of descriptors at this
-                 * moment.
-                 *
-                 * SVQ cannot hold more available buffers if we are here:
-                 * queue the current guest descriptor and ignore further kicks
-                 * until some elements are used.
-                 */
-                svq->next_guest_avail_elem = elem;
-                return;
-            }
-
-            ok = vhost_svq_add(svq, elem);
-            if (unlikely(!ok)) {
-                /* VQ is broken, just return and ignore any other kicks */
+            r = vhost_svq_add(svq, elem);
+            if (unlikely(r != 0)) {
+                if (r == -ENOSPC) {
+                    /*
+                     * This condition is possible since a contiguous buffer in
+                     * GPA does not imply a contiguous buffer in qemu's VA
+                     * scatter-gather segments. If that happens, the buffer
+                     * exposed to the device needs to be a chain of descriptors
+                     * at this moment.
+                     *
+                     * SVQ cannot hold more available buffers if we are here:
+                     * queue the current guest descriptor and ignore kicks
+                     * until some elements are used.
+                     */
+                    svq->next_guest_avail_elem = elem;
+                }
+
+                /* VQ is full or broken, just return and ignore kicks */
                 return;
             }
         }
-- 
2.7.4



Thread overview: 33+ messages
2022-07-20  9:02 [PULL V2 00/25] Net patches Jason Wang
2022-07-20  9:02 ` [PULL V2 01/25] vhost: move descriptor translation to vhost_svq_vring_write_descs Jason Wang
2022-07-20  9:02 ` [PULL V2 02/25] virtio-net: Expose MAC_TABLE_ENTRIES Jason Wang
2022-07-20  9:02 ` [PULL V2 03/25] virtio-net: Expose ctrl virtqueue logic Jason Wang
2022-07-20  9:02 ` [PULL V2 04/25] vdpa: Avoid compiler to squash reads to used idx Jason Wang
2022-07-20  9:02 ` [PULL V2 05/25] vhost: Reorder vhost_svq_kick Jason Wang
2022-07-20  9:02 ` [PULL V2 06/25] vhost: Move vhost_svq_kick call to vhost_svq_add Jason Wang
2022-07-20  9:02 ` Jason Wang [this message]
2022-07-20  9:02 ` [PULL V2 08/25] vhost: Decouple vhost_svq_add from VirtQueueElement Jason Wang
2022-07-20  9:02 ` [PULL V2 09/25] vhost: Add SVQDescState Jason Wang
2022-07-20  9:02 ` [PULL V2 10/25] vhost: Track number of descs in SVQDescState Jason Wang
2022-07-20  9:02 ` [PULL V2 11/25] vhost: add vhost_svq_push_elem Jason Wang
2022-07-20  9:03 ` [PULL V2 12/25] vhost: Expose vhost_svq_add Jason Wang
2022-07-20  9:03 ` [PULL V2 13/25] vhost: add vhost_svq_poll Jason Wang
2022-07-20  9:03 ` [PULL V2 14/25] vhost: Add svq avail_handler callback Jason Wang
2022-07-20  9:03 ` [PULL V2 15/25] vdpa: Export vhost_vdpa_dma_map and unmap calls Jason Wang
2022-07-20  9:03 ` [PULL V2 16/25] vhost-net-vdpa: add stubs for when no virtio-net device is present Jason Wang
2022-07-20  9:03 ` [PULL V2 17/25] vdpa: manual forward CVQ buffers Jason Wang
2022-07-20  9:03 ` [PULL V2 18/25] vdpa: Buffer CVQ support on shadow virtqueue Jason Wang
2022-07-20  9:03 ` [PULL V2 19/25] vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs Jason Wang
2022-07-29 14:08   ` Peter Maydell
2022-08-01  3:28     ` Jason Wang
2022-08-01 13:16       ` Eugenio Perez Martin
2022-07-20  9:03 ` [PULL V2 20/25] vdpa: Add device migration blocker Jason Wang
2022-07-20  9:03 ` [PULL V2 21/25] vdpa: Add x-svq to NetdevVhostVDPAOptions Jason Wang
2022-07-20  9:03 ` [PULL V2 22/25] softmmu/runstate.c: add RunStateTransition support form COLO to PRELAUNCH Jason Wang
2022-07-20  9:03 ` [PULL V2 23/25] net/colo: Fix a "double free" crash to clear the conn_list Jason Wang
2022-07-20  9:03 ` [PULL V2 24/25] net/colo.c: No need to track conn_list for filter-rewriter Jason Wang
2022-07-20  9:03 ` [PULL V2 25/25] net/colo.c: fix segmentation fault when packet is not parsed correctly Jason Wang
2022-07-29 13:58   ` Peter Maydell
2022-08-01  4:17     ` Jason Wang
2022-08-01  5:32       ` Zhang, Chen
2022-07-20 21:32 ` [PULL V2 00/25] Net patches Peter Maydell
