From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Gautam Dawar <gdawar@xilinx.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	Zhu Lingshan <lingshan.zhu@intel.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Parav Pandit <parav@mellanox.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Liuxiangdong <liuxiangdong5@huawei.com>,
	Eli Cohen <eli@mellanox.com>, Cindy Lu <lulu@redhat.com>,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Jason Wang <jasowang@redhat.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>
Subject: [PATCH v2 06/19] vhost: Check for queue full at vhost_svq_add
Date: Thu, 14 Jul 2022 18:31:37 +0200
Message-ID: <20220714163150.2536327-7-eperezma@redhat.com>
In-Reply-To: <20220714163150.2536327-1-eperezma@redhat.com>

The series needs to expose vhost_svq_add with full functionality,
including the check for a full queue. Make it return an errno so callers
can tell a transient full queue (-ENOSPC) apart from an invalid element
(-EINVAL).

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 59 +++++++++++++++++-------------
 1 file changed, 33 insertions(+), 26 deletions(-)
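
A usage sketch of the new error-code contract (illustrative only: every
name except vhost_svq_add(), VirtQueueElement and unlikely() is
hypothetical, and this helper is not part of the patch):

    static bool example_try_add(VhostShadowVirtqueue *svq,
                                VirtQueueElement *elem)
    {
        int r = vhost_svq_add(svq, elem);

        if (r == -ENOSPC) {
            /*
             * Device queue full: elem still belongs to the caller, who
             * can retry once the device has used some buffers.
             */
            return false;
        }
        if (unlikely(r != 0)) {
            /*
             * -EINVAL: vhost_svq_add() has already freed elem and the
             * SVQ must be considered broken.
             */
            return false;
        }
        /* Success: the SVQ owns elem until the device marks it used. */
        return true;
    }

The guest kick handler changed below follows the same pattern: it
stashes the element on -ENOSPC so processing can resume later, and stops
on any error.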

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e5a4a62daa..aee9891a67 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -233,21 +233,29 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
  * Add an element to a SVQ.
  *
  * The caller must check that there are enough slots for the new element. It
- * takes ownership of the element: In case of failure, it is free and the SVQ
- * is considered broken.
+ * takes ownership of the element: on failure other than ENOSPC it is freed.
+ *
+ * Return -EINVAL if the element is invalid, -ENOSPC if the dev queue is full
  */
-static bool vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
+static int vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
 {
     unsigned qemu_head;
-    bool ok = vhost_svq_add_split(svq, elem, &qemu_head);
+    unsigned ndescs = elem->in_num + elem->out_num;
+    bool ok;
+
+    if (unlikely(ndescs > vhost_svq_available_slots(svq))) {
+        return -ENOSPC;
+    }
+
+    ok = vhost_svq_add_split(svq, elem, &qemu_head);
     if (unlikely(!ok)) {
         g_free(elem);
-        return false;
+        return -EINVAL;
     }
 
     svq->ring_id_maps[qemu_head] = elem;
     vhost_svq_kick(svq);
-    return true;
+    return 0;
 }
 
 /**
@@ -274,7 +282,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
 
         while (true) {
             VirtQueueElement *elem;
-            bool ok;
+            int r;
 
             if (svq->next_guest_avail_elem) {
                 elem = g_steal_pointer(&svq->next_guest_avail_elem);
@@ -286,25 +294,24 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
                 break;
             }
 
-            if (elem->out_num + elem->in_num > vhost_svq_available_slots(svq)) {
-                /*
-                 * This condition is possible since a contiguous buffer in GPA
-                 * does not imply a contiguous buffer in qemu's VA
-                 * scatter-gather segments. If that happens, the buffer exposed
-                 * to the device needs to be a chain of descriptors at this
-                 * moment.
-                 *
-                 * SVQ cannot hold more available buffers if we are here:
-                 * queue the current guest descriptor and ignore further kicks
-                 * until some elements are used.
-                 */
-                svq->next_guest_avail_elem = elem;
-                return;
-            }
-
-            ok = vhost_svq_add(svq, elem);
-            if (unlikely(!ok)) {
-                /* VQ is broken, just return and ignore any other kicks */
+            r = vhost_svq_add(svq, elem);
+            if (unlikely(r != 0)) {
+                if (r == -ENOSPC) {
+                    /*
+                     * This condition is possible since a contiguous buffer in
+                     * GPA does not imply a contiguous buffer in qemu's VA
+                     * scatter-gather segments. If that happens, the buffer
+                     * exposed to the device needs to be a chain of descriptors
+                     * at this moment.
+                     *
+                     * SVQ cannot hold more available buffers if we are here:
+                     * queue the current guest descriptor and ignore kicks
+                     * until some elements are used.
+                     */
+                    svq->next_guest_avail_elem = elem;
+                }
+
+                /* VQ is full or broken, just return and ignore kicks */
                 return;
             }
         }
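
The -ENOSPC case above is reachable even when the guest's avail ring is
not full: a buffer that is contiguous in GPA may translate to several
discontiguous ranges in qemu's VA, and each range needs its own SVQ
descriptor. For example (illustrative numbers), a single guest
descriptor whose buffer splits into three VA ranges needs three SVQ
slots; if only two are free, vhost_svq_add() returns -ENOSPC and the
element is stashed in next_guest_avail_elem instead of overrunning the
ring.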
-- 
2.31.1


