From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
	"Johannes Berg" <johannes.berg@intel.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"Xie Yongji" <xieyongji@baidu.com>
Subject: [PULL 27/32] libvhost-user: implement in-band notifications
Date: Tue, 25 Feb 2020 10:14:40 -0500
Message-ID: <20200225151210.647797-28-mst@redhat.com>
In-Reply-To: <20200225151210.647797-1-mst@redhat.com>

From: Johannes Berg <johannes.berg@intel.com>

Add support for VHOST_USER_PROTOCOL_F_IN_BAND_NOTIFICATIONS, but
as it's not desired by default, don't enable it unless the device
implementation opts in by returning it from its protocol features
callback.
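
As an illustration of the opt-in path, here is a minimal sketch of a
device implementation returning the feature bit from the existing
get_protocol_features callback of VuDevIface (the my_device_* names
are hypothetical):

    static uint64_t
    my_device_get_protocol_features(VuDev *dev)
    {
        /*
         * Opt in to in-band notifications; libvhost-user ORs this into
         * the protocol features it reports to the master.
         */
        return 1ULL << VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS;
    }

    static const VuDevIface my_device_iface = {
        .get_protocol_features = my_device_get_protocol_features,
        /* ... other callbacks ... */
    };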

Note that I updated vu_set_vring_err_exec(), but didn't add any
sending of the VHOST_USER_SLAVE_VRING_ERR message as there's no
write to the err_fd today either.
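
For reference only, a hedged sketch of what sending that message might
look like, mirroring the VRING_CALL path added below; vu_send_vring_err()
is hypothetical and not part of this patch:

    /* Hypothetical helper inside libvhost-user.c; not added by this patch. */
    static void
    vu_send_vring_err(VuDev *dev, VuVirtq *vq)
    {
        VhostUserMsg vmsg = {
            .request = VHOST_USER_SLAVE_VRING_ERR,
            .flags = VHOST_USER_VERSION,
            .size = sizeof(vmsg.payload.state),
            .payload.state = {
                .index = vq - dev->vq,
            },
        };

        /* Sent on the slave channel, like VHOST_USER_SLAVE_VRING_CALL. */
        vu_message_write(dev, dev->slave_fd, &vmsg);
    }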

This also adds vu_queue_notify_sync() which can be used to force
a synchronous notification if in-band notifications are supported.
Previously, I had left out the slave->master direction handling
of F_REPLY_ACK; this now adds some code to support it as well.
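
As a hypothetical usage example (handler name and request processing are
placeholders), a device queue handler might request a synchronous
notification like this, assuming F_IN_BAND_NOTIFICATIONS, F_SLAVE_REQ and
F_REPLY_ACK were all negotiated and the master supplied no callfd:

    static void
    my_queue_handler(VuDev *dev, int qidx)
    {
        VuVirtq *vq = vu_get_queue(dev, qidx);
        VuVirtqElement *elem;

        while ((elem = vu_queue_pop(dev, vq, sizeof(*elem)))) {
            /* ... process the request described by elem ... */
            vu_queue_push(dev, vq, elem, 0);
            free(elem);
        }

        /*
         * Sends VHOST_USER_SLAVE_VRING_CALL and waits for the master's
         * reply instead of returning immediately.
         */
        vu_queue_notify_sync(dev, vq);
    }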

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Message-Id: <20200123081708.7817-7-johannes@sipsolutions.net>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 contrib/libvhost-user/libvhost-user.h |  14 ++++
 contrib/libvhost-user/libvhost-user.c | 103 +++++++++++++++++++++++++-
 2 files changed, 114 insertions(+), 3 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index 5cb7708559..6fc8000e99 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -54,6 +54,7 @@ enum VhostUserProtocolFeature {
     VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD = 10,
     VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11,
     VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12,
+    VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS = 14,
 
     VHOST_USER_PROTOCOL_F_MAX
 };
@@ -95,6 +96,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_GET_INFLIGHT_FD = 31,
     VHOST_USER_SET_INFLIGHT_FD = 32,
     VHOST_USER_GPU_SET_SOCKET = 33,
+    VHOST_USER_VRING_KICK = 35,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -103,6 +105,8 @@ typedef enum VhostUserSlaveRequest {
     VHOST_USER_SLAVE_IOTLB_MSG = 1,
     VHOST_USER_SLAVE_CONFIG_CHANGE_MSG = 2,
     VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG = 3,
+    VHOST_USER_SLAVE_VRING_CALL = 4,
+    VHOST_USER_SLAVE_VRING_ERR = 5,
     VHOST_USER_SLAVE_MAX
 }  VhostUserSlaveRequest;
 
@@ -528,6 +532,16 @@ bool vu_queue_empty(VuDev *dev, VuVirtq *vq);
  */
 void vu_queue_notify(VuDev *dev, VuVirtq *vq);
 
+/**
+ * vu_queue_notify_sync:
+ * @dev: a VuDev context
+ * @vq: a VuVirtq queue
+ *
+ * Request to notify the queue via callfd (skipped if unnecessary)
+ * or sync message if possible.
+ */
+void vu_queue_notify_sync(VuDev *dev, VuVirtq *vq);
+
 /**
  * vu_queue_pop:
  * @dev: a VuDev context
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 3abc9689e5..3bca996c62 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -136,6 +136,7 @@ vu_request_to_string(unsigned int req)
         REQ(VHOST_USER_GET_INFLIGHT_FD),
         REQ(VHOST_USER_SET_INFLIGHT_FD),
         REQ(VHOST_USER_GPU_SET_SOCKET),
+        REQ(VHOST_USER_VRING_KICK),
         REQ(VHOST_USER_MAX),
     };
 #undef REQ
@@ -163,7 +164,10 @@ vu_panic(VuDev *dev, const char *msg, ...)
     dev->panic(dev, buf);
     free(buf);
 
-    /* FIXME: find a way to call virtio_error? */
+    /*
+     * FIXME:
+     * find a way to call virtio_error, or perhaps close the connection?
+     */
 }
 
 /* Translate guest physical address to our virtual address.  */
@@ -1203,6 +1207,14 @@ vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
 static bool
 vu_get_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
+    /*
+     * Note that we support, but intentionally do not set,
+     * VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS. This means that
+     * a device implementation can return it in its callback
+     * (get_protocol_features) if it wants to use this for
+     * simulation, but it is otherwise not desirable (if even
+     * implemented by the master.)
+     */
     uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_MQ |
                         1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD |
                         1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ |
@@ -1235,6 +1247,25 @@ vu_set_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
 
     dev->protocol_features = vmsg->payload.u64;
 
+    if (vu_has_protocol_feature(dev,
+                                VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS) &&
+        (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SLAVE_REQ) ||
+         !vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK))) {
+        /*
+         * The use case for using messages for kick/call is simulation, to make
+         * the kick and call synchronous. To actually get that behaviour, both
+         * of the other features are required.
+         * Theoretically, one could use only kick messages, or send them
+         * without F_REPLY_ACK, but too many (possibly pending) messages
+         * on the socket will eventually cause the master to hang. To
+         * avoid that in scenarios where it is not desired, enforce a
+         * feature combination that actually enables the simulation case.
+         */
+        vu_panic(dev,
+                 "F_IN_BAND_NOTIFICATIONS requires F_SLAVE_REQ && F_REPLY_ACK");
+        return false;
+    }
+
     if (dev->iface->set_protocol_features) {
         dev->iface->set_protocol_features(dev, features);
     }
@@ -1495,6 +1526,34 @@ vu_set_inflight_fd(VuDev *dev, VhostUserMsg *vmsg)
     return false;
 }
 
+static bool
+vu_handle_vring_kick(VuDev *dev, VhostUserMsg *vmsg)
+{
+    unsigned int index = vmsg->payload.state.index;
+
+    if (index >= dev->max_queues) {
+        vu_panic(dev, "Invalid queue index: %u", index);
+        return false;
+    }
+
+    DPRINT("Got kick message: handler:%p idx:%d\n",
+           dev->vq[index].handler, index);
+
+    if (!dev->vq[index].started) {
+        dev->vq[index].started = true;
+
+        if (dev->iface->queue_set_started) {
+            dev->iface->queue_set_started(dev, index, true);
+        }
+    }
+
+    if (dev->vq[index].handler) {
+        dev->vq[index].handler(dev, index);
+    }
+
+    return false;
+}
+
 static bool
 vu_process_message(VuDev *dev, VhostUserMsg *vmsg)
 {
@@ -1577,6 +1636,8 @@ vu_process_message(VuDev *dev, VhostUserMsg *vmsg)
         return vu_get_inflight_fd(dev, vmsg);
     case VHOST_USER_SET_INFLIGHT_FD:
         return vu_set_inflight_fd(dev, vmsg);
+    case VHOST_USER_VRING_KICK:
+        return vu_handle_vring_kick(dev, vmsg);
     default:
         vmsg_close_fds(vmsg);
         vu_panic(dev, "Unhandled request: %d", vmsg->request);
@@ -2038,8 +2099,7 @@ vring_notify(VuDev *dev, VuVirtq *vq)
     return !v || vring_need_event(vring_get_used_event(vq), new, old);
 }
 
-void
-vu_queue_notify(VuDev *dev, VuVirtq *vq)
+static void _vu_queue_notify(VuDev *dev, VuVirtq *vq, bool sync)
 {
     if (unlikely(dev->broken) ||
         unlikely(!vq->vring.avail)) {
@@ -2051,11 +2111,48 @@ vu_queue_notify(VuDev *dev, VuVirtq *vq)
         return;
     }
 
+    if (vq->call_fd < 0 &&
+        vu_has_protocol_feature(dev,
+                                VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS) &&
+        vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SLAVE_REQ)) {
+        VhostUserMsg vmsg = {
+            .request = VHOST_USER_SLAVE_VRING_CALL,
+            .flags = VHOST_USER_VERSION,
+            .size = sizeof(vmsg.payload.state),
+            .payload.state = {
+                .index = vq - dev->vq,
+            },
+        };
+        bool ack = sync &&
+                   vu_has_protocol_feature(dev,
+                                           VHOST_USER_PROTOCOL_F_REPLY_ACK);
+
+        if (ack) {
+            vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
+        }
+
+        vu_message_write(dev, dev->slave_fd, &vmsg);
+        if (ack) {
+            vu_message_read(dev, dev->slave_fd, &vmsg);
+        }
+        return;
+    }
+
     if (eventfd_write(vq->call_fd, 1) < 0) {
         vu_panic(dev, "Error writing eventfd: %s", strerror(errno));
     }
 }
 
+void vu_queue_notify(VuDev *dev, VuVirtq *vq)
+{
+    _vu_queue_notify(dev, vq, false);
+}
+
+void vu_queue_notify_sync(VuDev *dev, VuVirtq *vq)
+{
+    _vu_queue_notify(dev, vq, true);
+}
+
 static inline void
 vring_used_flags_set_bit(VuVirtq *vq, int mask)
 {
-- 
MST



Thread overview: 37+ messages
2020-02-25 15:12 [PULL 00/32] virtio, pc: fixes, features Michael S. Tsirkin
2020-02-25 15:12 ` [PULL 01/32] bios-tables-test: tell people how to update Michael S. Tsirkin
2020-02-25 15:12 ` [PULL 02/32] bios-tables-test: fix up DIFF generation Michael S. Tsirkin
2020-02-25 15:12 ` [PULL 03/32] bios-tables-test: default diff command Michael S. Tsirkin
2020-02-25 15:12 ` [PULL 04/32] rebuild-expected-aml.sh: remind about the process Michael S. Tsirkin
2020-02-25 15:12 ` [PULL 05/32] vhost-user-fs: do delete virtio_queues in unrealize Michael S. Tsirkin
2020-02-25 15:12 ` [PULL 06/32] vhost-user-fs: convert to the new virtio_delete_queue function Michael S. Tsirkin
2020-02-25 15:12 ` [PULL 07/32] virtio-pmem: do delete rq_vq in virtio_pmem_unrealize Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 08/32] virtio-crypto: do delete ctrl_vq in virtio_crypto_device_unrealize Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 09/32] vhost-user-blk: delete virtioqueues in unrealize to fix memleaks Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 10/32] vhost-user-blk: convert to new virtio_delete_queue Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 11/32] virtio: gracefully handle invalid region caches Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 12/32] virtio-iommu: Add skeleton Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 13/32] virtio-iommu: Decode the command payload Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 14/32] virtio-iommu: Implement attach/detach command Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 15/32] virtio-iommu: Implement map/unmap Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 16/32] virtio-iommu: Implement translate Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 17/32] virtio-iommu: Implement fault reporting Michael S. Tsirkin
2020-02-25 15:13 ` [PULL 18/32] virtio-iommu: Support migration Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 19/32] virtio-iommu-pci: Add virtio iommu pci support Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 20/32] hw/arm/virt: Add the virtio-iommu device tree mappings Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 21/32] MAINTAINERS: add virtio-iommu related files Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 22/32] libvhost-user: implement VHOST_USER_PROTOCOL_F_REPLY_ACK Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 23/32] libvhost-user-glib: fix VugDev main fd cleanup Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 24/32] libvhost-user-glib: use g_main_context_get_thread_default() Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 25/32] libvhost-user: handle NOFD flag in call/kick/err better Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 26/32] docs: vhost-user: add in-band kick/call messages Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 27/32] libvhost-user: implement in-band notifications Michael S. Tsirkin [this message]
2020-02-25 15:14 ` [PULL 28/32] acpi: cpuhp: document CPHP_GET_CPU_ID_CMD command Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 29/32] vhost-user: only set slave channel for first vq Michael S. Tsirkin
2020-02-25 15:14 ` [PULL 30/32] tests/vhost-user-bridge: move to contrib/ Michael S. Tsirkin
2020-02-25 15:15 ` [PULL 31/32] virtiofsd: add it to the tools list Michael S. Tsirkin
2020-02-25 15:15 ` [PULL 32/32] Fixed assert in vhost_user_set_mem_table_postcopy Michael S. Tsirkin
2020-02-25 16:47 ` [PULL 00/32] virtio, pc: fixes, features Peter Maydell
2020-02-25 18:39   ` Michael S. Tsirkin
2020-02-25 21:57   ` Michael S. Tsirkin
2020-02-26  4:48 ` no-reply
