From: Jason Wang <jasowang@redhat.com>
To: "Eugenio Pérez" <eperezma@redhat.com>, qemu-devel@nongnu.org
Cc: Parav Pandit <parav@mellanox.com>, "Michael S. Tsirkin" <mst@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	virtualization@lists.linux-foundation.org,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Xiao W Wang <xiao.w.wang@intel.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Eli Cohen <eli@mellanox.com>,
	Eric Blake <eblake@redhat.com>, Michael Lilja <ml@napatech.com>
Subject: Re: [RFC PATCH v4 11/20] vhost: Route host->guest notification through shadow virtqueue
Date: Wed, 13 Oct 2021 11:47:44 +0800
Message-ID: <ab9a7771-5f9b-6413-3e38-bd3dc7373256@redhat.com>
In-Reply-To: <20211001070603.307037-12-eperezma@redhat.com>

On 2021/10/1 3:05 PM, Eugenio Pérez wrote:
> This will make qemu aware of the device used buffers, allowing it to
> write the guest memory with its contents if needed.
>
> Since the use of vhost_virtqueue_start can unmask and discard call
> events, vhost_virtqueue_start should be modified in one of these ways:
> * Split in two: One of them uses all logic to start a queue with no
>   side effects for the guest, and another one that actually assumes that
>   the guest has just started the device. Vdpa should use just the
>   former.
> * Actually store and check if the guest notifier is masked, and do it
>   conditionally.
> * Leave it as it is, and duplicate all the logic in vhost-vdpa.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  hw/virtio/vhost-shadow-virtqueue.c | 19 +++++++++++++++
>  hw/virtio/vhost-vdpa.c             | 38 +++++++++++++++++++++++++++++-
>  2 files changed, 56 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 21dc99ab5d..3fe129cf63 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -53,6 +53,22 @@ static void vhost_handle_guest_kick(EventNotifier *n)
>      event_notifier_set(&svq->kick_notifier);
>  }
>
> +/* Forward vhost notifications */
> +static void vhost_svq_handle_call_no_test(EventNotifier *n)
> +{
> +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> +                                             call_notifier);
> +
> +    event_notifier_set(&svq->guest_call_notifier);
> +}
> +
> +static void vhost_svq_handle_call(EventNotifier *n)
> +{
> +    if (likely(event_notifier_test_and_clear(n))) {
> +        vhost_svq_handle_call_no_test(n);
> +    }
> +}
> +
>  /*
>   * Obtain the SVQ call notifier, where vhost device notifies SVQ that there
>   * exists pending used buffers.
> @@ -180,6 +196,8 @@ VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
>      }
>
>      svq->vq = virtio_get_queue(dev->vdev, vq_idx);
> +    event_notifier_set_handler(&svq->call_notifier,
> +                               vhost_svq_handle_call);
>      return g_steal_pointer(&svq);
>
>  err_init_call_notifier:
> @@ -195,6 +213,7 @@ err_init_kick_notifier:
>  void vhost_svq_free(VhostShadowVirtqueue *vq)
>  {
>      event_notifier_cleanup(&vq->kick_notifier);
> +    event_notifier_set_handler(&vq->call_notifier, NULL);
>      event_notifier_cleanup(&vq->call_notifier);
>      g_free(vq);
>  }
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index bc34de2439..6c5f4c98b8 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -712,13 +712,40 @@ static bool vhost_vdpa_svq_start_vq(struct vhost_dev *dev, unsigned idx)
>  {
>      struct vhost_vdpa *v = dev->opaque;
>      VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, idx);
> -    return vhost_svq_start(dev, idx, svq);
> +    EventNotifier *vhost_call_notifier = vhost_svq_get_svq_call_notifier(svq);
> +    struct vhost_vring_file vhost_call_file = {
> +        .index = idx + dev->vq_index,
> +        .fd = event_notifier_get_fd(vhost_call_notifier),
> +    };
> +    int r;
> +    bool b;
> +
> +    /* Set shadow vq -> guest notifier */
> +    assert(v->call_fd[idx]);

We need to avoid the assert() here. In which case can we hit it?

> +    vhost_svq_set_guest_call_notifier(svq, v->call_fd[idx]);
> +
> +    b = vhost_svq_start(dev, idx, svq);
> +    if (unlikely(!b)) {
> +        return false;
> +    }
> +
> +    /* Set device -> SVQ notifier */
> +    r = vhost_vdpa_set_vring_dev_call(dev, &vhost_call_file);
> +    if (unlikely(r)) {
> +        error_report("vhost_vdpa_set_vring_call for shadow vq failed");
> +        return false;
> +    }

Similar to kick, do we need to call set_vring_call() before vhost_svq_start()?

> +
> +    /* Check for pending calls */
> +    event_notifier_set(vhost_call_notifier);

Interesting, can this result in a spurious interrupt?
> +    return true;
>  }
>
>  static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
>  {
>      struct vhost_dev *hdev = v->dev;
>      unsigned n;
> +    int r;
>
>      if (enable == v->shadow_vqs_enabled) {
>          return hdev->nvqs;
> @@ -752,9 +779,18 @@ static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
>      if (!enable) {
>          /* Disable all queues or clean up failed start */
>          for (n = 0; n < v->shadow_vqs->len; ++n) {
> +            struct vhost_vring_file file = {
> +                .index = vhost_vdpa_get_vq_index(hdev, n),
> +                .fd = v->call_fd[n],
> +            };
> +
> +            r = vhost_vdpa_set_vring_call(hdev, &file);
> +            assert(r == 0);
> +
>              unsigned vq_idx = vhost_vdpa_get_vq_index(hdev, n);
>              VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, n);
>              vhost_svq_stop(hdev, n, svq);
> +            /* TODO: This can unmask or override call fd! */

I don't get this comment. Does it mean the current code can't work with masked notifiers? If so, this is something we need to fix.

Thanks

>              vhost_virtqueue_start(hdev, hdev->vdev, &hdev->vqs[n], vq_idx);
>          }
>

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
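[Editorial note] The commit message's second option, "store and check if the guest notifier is masked, and do it conditionally", can be sketched in isolation. The sketch below is a hypothetical illustration, not QEMU's actual API: the `EventNotifier` stub, the `ShadowVq` struct, and the `svq_handle_call` / `svq_set_masked` names are all invented here. It forwards device calls to the guest only while unmasked, latches a call that arrives while masked, and delivers it on unmask, which is the behavior the `/* TODO: This can unmask or override call fd! */` discussion is circling around. It also shows the `event_notifier_test_and_clear()` guard against spurious wakeups that the patch's `vhost_svq_handle_call()` uses.

```c
#include <stdbool.h>

/* Stand-in for QEMU's EventNotifier; illustration only. */
typedef struct EventNotifier {
    bool pending;
} EventNotifier;

void event_notifier_set(EventNotifier *n)
{
    n->pending = true;
}

bool event_notifier_test_and_clear(EventNotifier *n)
{
    bool was_pending = n->pending;
    n->pending = false;
    return was_pending;
}

/*
 * Hypothetical SVQ state for the "store and check masked" option:
 * forward device calls only while unmasked, latch them otherwise.
 */
typedef struct ShadowVq {
    EventNotifier call_notifier;       /* device -> SVQ */
    EventNotifier guest_call_notifier; /* SVQ -> guest */
    bool masked;
    bool pending_while_masked;
} ShadowVq;

void svq_handle_call(ShadowVq *svq)
{
    if (!event_notifier_test_and_clear(&svq->call_notifier)) {
        return; /* spurious wakeup: nothing was actually signalled */
    }
    if (svq->masked) {
        svq->pending_while_masked = true; /* remember, don't forward */
    } else {
        event_notifier_set(&svq->guest_call_notifier);
    }
}

void svq_set_masked(ShadowVq *svq, bool masked)
{
    svq->masked = masked;
    if (!masked && svq->pending_while_masked) {
        /* Deliver the call that arrived while we were masked. */
        svq->pending_while_masked = false;
        event_notifier_set(&svq->guest_call_notifier);
    }
}
```

In real QEMU this conditional forwarding would have to be wired into the `vhost_virtqueue_mask()` path rather than kept as local booleans, but the latch-and-replay shape is the same.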