* [PATCH 1/1] vhost: Protect the virtqueue from being cleared whilst still in use
@ 2022-03-02  7:54 ` Lee Jones
  0 siblings, 0 replies; 94+ messages in thread
From: Lee Jones @ 2022-03-02  7:54 UTC (permalink / raw)
  To: lee.jones, mst, jasowang
  Cc: linux-kernel, kvm, virtualization, netdev, stable,
	syzbot+adc3cb32385586bec859

vhost_vsock_handle_tx_kick() already holds the mutex during its call
to vhost_get_vq_desc().  All we have to do is take the same lock
during virtqueue clean-up to mitigate the reported issues.
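
For reference, the handler-side pattern this relies on looks roughly
like the following (a simplified sketch of vhost_vsock_handle_tx_kick()
from drivers/vhost/vsock.c; the descriptor-processing loop, backend
checks and error handling are elided):

static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
{
        struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
                                                  poll.work);
        unsigned int out, in;
        int head;

        mutex_lock(&vq->mutex);

        /* Descriptors are only fetched while vq->mutex is held. */
        head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
                                 &out, &in, NULL, NULL);

        /* ...process the descriptor, as the real handler does in a loop... */

        mutex_unlock(&vq->mutex);
}

Taking the same vq->mutex around the reset in vhost_dev_cleanup()
therefore serializes clean-up against any handler still inside this
critical section.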

Link: https://syzkaller.appspot.com/bug?extid=279432d30d825e63ba00

Cc: <stable@vger.kernel.org>
Reported-by: syzbot+adc3cb32385586bec859@syzkaller.appspotmail.com
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/vhost/vhost.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 59edb5a1ffe28..bbaff6a5e21b8 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -693,6 +693,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->nvqs; ++i) {
+		mutex_lock(&dev->vqs[i]->mutex);
 		if (dev->vqs[i]->error_ctx)
 			eventfd_ctx_put(dev->vqs[i]->error_ctx);
 		if (dev->vqs[i]->kick)
@@ -700,6 +701,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 		if (dev->vqs[i]->call_ctx.ctx)
 			eventfd_ctx_put(dev->vqs[i]->call_ctx.ctx);
 		vhost_vq_reset(dev, dev->vqs[i]);
+		mutex_unlock(&dev->vqs[i]->mutex);
 	}
 	vhost_dev_free_iovecs(dev);
 	if (dev->log_ctx)
-- 
2.35.1.574.g5d30c73bfb-goog


* [PATCH 1/1] vhost: Protect the virtqueue from being cleared whilst still in use
@ 2022-03-07 19:17 ` Lee Jones
  0 siblings, 0 replies; 94+ messages in thread
From: Lee Jones @ 2022-03-07 19:17 UTC (permalink / raw)
  To: lee.jones, mst, jasowang
  Cc: linux-kernel, kvm, virtualization, netdev, stable,
	syzbot+adc3cb32385586bec859

vhost_vsock_handle_tx_kick() already holds the mutex during its call
to vhost_get_vq_desc().  All we have to do here is take the same lock
during virtqueue clean-up to mitigate the reported issues.

Also add a WARN() as a precautionary measure, to capture possible
future race conditions which may surface over time.
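
The WARN() is there to catch a worker that is somehow still active when
clean-up begins.  For context, the tear-down ordering that is supposed
to prevent that looks roughly like this (a heavily condensed sketch of
vhost_vsock_dev_release() from drivers/vhost/vsock.c; unrelated
tear-down steps are omitted and some argument details are simplified):

static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
{
        struct vhost_vsock *vsock = file->private_data;

        /* Condensed: only the ordering relevant to the race is shown. */
        vhost_vsock_stop(vsock);        /* stop servicing the vqs        */
        vhost_vsock_flush(vsock);       /* wait for outstanding work     */
        vhost_dev_stop(&vsock->dev);
        /* ...free queued packets, etc... */
        vhost_dev_cleanup(&vsock->dev); /* resets the vqs (patched here) */
        /* ...free the device... */
        return 0;
}

If any work slips past the flush, the new locking keeps the reset from
racing with it, and the WARN() makes the situation visible.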

Link: https://syzkaller.appspot.com/bug?extid=279432d30d825e63ba00

Cc: <stable@vger.kernel.org>
Reported-by: syzbot+adc3cb32385586bec859@syzkaller.appspotmail.com
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/vhost/vhost.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 59edb5a1ffe28..ef7e371e3e649 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -693,6 +693,15 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->nvqs; ++i) {
+		/* No workers should run here by design. However, races have
+		 * previously occurred where drivers have been unable to flush
+		 * all work properly prior to clean-up.  Without a successful
+		 * flush the guest will malfunction, but avoiding host memory
+		 * corruption in those cases does seem preferable.
+		 */
+		WARN_ON(mutex_is_locked(&dev->vqs[i]->mutex));
+
+		mutex_lock(&dev->vqs[i]->mutex);
 		if (dev->vqs[i]->error_ctx)
 			eventfd_ctx_put(dev->vqs[i]->error_ctx);
 		if (dev->vqs[i]->kick)
@@ -700,6 +709,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 		if (dev->vqs[i]->call_ctx.ctx)
 			eventfd_ctx_put(dev->vqs[i]->call_ctx.ctx);
 		vhost_vq_reset(dev, dev->vqs[i]);
+		mutex_unlock(&dev->vqs[i]->mutex);
 	}
 	vhost_dev_free_iovecs(dev);
 	if (dev->log_ctx)
-- 
2.35.1.616.g0bdcbb4464-goog


* [PATCH 1/1] vhost: Protect the virtqueue from being cleared whilst still in use
@ 2022-03-14  8:43 ` Lee Jones
  0 siblings, 0 replies; 94+ messages in thread
From: Lee Jones @ 2022-03-14  8:43 UTC (permalink / raw)
  To: lee.jones, mst, jasowang
  Cc: linux-kernel, kvm, virtualization, netdev, stable

vhost_vsock_handle_tx_kick() already holds the mutex during its call
to vhost_get_vq_desc().  All we have to do here is take the same lock
during virtqueue clean-up to mitigate the reported issues.

Cc: <stable@vger.kernel.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/vhost/vhost.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 59edb5a1ffe28..bbaff6a5e21b8 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -693,6 +693,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->nvqs; ++i) {
+		mutex_lock(&dev->vqs[i]->mutex);
 		if (dev->vqs[i]->error_ctx)
 			eventfd_ctx_put(dev->vqs[i]->error_ctx);
 		if (dev->vqs[i]->kick)
@@ -700,6 +701,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 		if (dev->vqs[i]->call_ctx.ctx)
 			eventfd_ctx_put(dev->vqs[i]->call_ctx.ctx);
 		vhost_vq_reset(dev, dev->vqs[i]);
+		mutex_unlock(&dev->vqs[i]->mutex);
 	}
 	vhost_dev_free_iovecs(dev);
 	if (dev->log_ctx)
-- 
2.35.1.723.g4982287a31-goog



Thread overview: 94+ messages
2022-03-02  7:54 [PATCH 1/1] vhost: Protect the virtqueue from being cleared whilst still in use Lee Jones
2022-03-02  9:34 ` Stefano Garzarella
2022-03-02 10:07   ` Lee Jones
2022-03-02 13:35   ` Michael S. Tsirkin
2022-03-02 14:11     ` Stefano Garzarella
2022-03-02 14:50       ` Michael S. Tsirkin
2022-03-02 15:36         ` Stefano Garzarella
2022-03-04 16:46           ` Michael S. Tsirkin
2022-03-02 13:30 ` Michael S. Tsirkin
2022-03-02 13:56   ` Lee Jones
2022-03-02 14:51     ` Michael S. Tsirkin
2022-03-02 14:57       ` Lee Jones
2022-03-02 16:28         ` Stefano Garzarella
2022-03-02 16:30           ` Michael S. Tsirkin
2022-03-02 16:49             ` Lee Jones
2022-03-02 17:10               ` Stefano Garzarella
2022-03-03 14:17                 ` Lee Jones
2022-03-04  5:00 ` Michael S. Tsirkin
2022-03-04 15:22   ` Lee Jones
2022-03-04 16:48 ` Michael S. Tsirkin
2022-03-04 16:56   ` Lee Jones
2022-03-07 19:17 Lee Jones
2022-03-07 19:33 ` Greg KH
2022-03-07 22:39   ` Michael S. Tsirkin
2022-03-08  8:10   ` Lee Jones
2022-03-08  8:11     ` Lee Jones
2022-03-08  8:57     ` Greg KH
2022-03-08  9:15       ` Lee Jones
2022-03-08  9:57         ` Greg KH
2022-03-08 10:08           ` Lee Jones
2022-03-08 10:55           ` Michael S. Tsirkin
2022-03-08 11:45             ` Greg KH
2022-03-08 12:27               ` Michael S. Tsirkin
2022-03-08 13:17                 ` Lee Jones
2022-03-08 17:17                   ` Michael S. Tsirkin
2022-03-08 11:05       ` Michael S. Tsirkin
2022-03-09 18:52       ` Leon Romanovsky
2022-03-07 22:37 ` Michael S. Tsirkin
2022-03-08  8:01   ` Lee Jones
2022-03-08 11:07     ` Michael S. Tsirkin
2022-03-08  6:15 ` Jason Wang
2022-03-08  8:08   ` Lee Jones
2022-03-08 11:06     ` Michael S. Tsirkin
2022-03-14  8:43 Lee Jones
2022-03-14  8:56 ` Greg KH
2022-03-14 11:49 ` Michael S. Tsirkin
2022-03-14 12:47   ` Lee Jones
