KVM Archive on lore.kernel.org
* [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove()
@ 2019-05-28 10:56 Stefano Garzarella
  2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-28 10:56 UTC (permalink / raw)
  To: netdev
  Cc: linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin

During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
before registering the driver", Stefan pointed out some possible issues
in the .probe() and .remove() callbacks of the virtio-vsock driver.

This series tries to solve these issues:
- Patch 1 postpones the 'the_virtio_vsock' assignment to the end of
  .probe(), so that sockets cannot queue works before the initialization
  is finished.
- Patches 2 and 3 stop the workers before calling vdev->config->reset(vdev),
  to be sure that no one is accessing the device, and add another flush at
  the end of .remove() to avoid a use after free.
- Patch 4 also frees the used buffers left in the virtqueues during
  .remove().

Stefano Garzarella (4):
  vsock/virtio: fix locking around 'the_virtio_vsock'
  vsock/virtio: stop workers during the .remove()
  vsock/virtio: fix flush of works during the .remove()
  vsock/virtio: free used buffers during the .remove()

 net/vmw_vsock/virtio_transport.c | 105 ++++++++++++++++++++++++++-----
 1 file changed, 90 insertions(+), 15 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'
  2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
@ 2019-05-28 10:56 ` Stefano Garzarella
  2019-05-30  4:28   ` David Miller
  2019-05-28 10:56 ` [PATCH 2/4] vsock/virtio: stop workers during the .remove() Stefano Garzarella
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-28 10:56 UTC (permalink / raw)
  To: netdev
  Cc: linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin

This patch protects the reading of 'the_virtio_vsock' by taking the
same mutex used when it is set.
We also move the 'the_virtio_vsock' assignment to the end of the
.probe(), after all the initialization is finished, and clear it at
the beginning of .remove(), before releasing resources, holding the
lock until the end of the function.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/virtio_transport.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 96ab344f17bb..d3ba7747aa73 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -68,7 +68,13 @@ struct virtio_vsock {
 
 static struct virtio_vsock *virtio_vsock_get(void)
 {
-	return the_virtio_vsock;
+	struct virtio_vsock *vsock;
+
+	mutex_lock(&the_virtio_vsock_mutex);
+	vsock = the_virtio_vsock;
+	mutex_unlock(&the_virtio_vsock_mutex);
+
+	return vsock;
 }
 
 static u32 virtio_transport_get_local_cid(void)
@@ -592,7 +598,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	atomic_set(&vsock->queued_replies, 0);
 
 	vdev->priv = vsock;
-	the_virtio_vsock = vsock;
 	mutex_init(&vsock->tx_lock);
 	mutex_init(&vsock->rx_lock);
 	mutex_init(&vsock->event_lock);
@@ -614,6 +619,8 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	virtio_vsock_event_fill(vsock);
 	mutex_unlock(&vsock->event_lock);
 
+	the_virtio_vsock = vsock;
+
 	mutex_unlock(&the_virtio_vsock_mutex);
 	return 0;
 
@@ -628,6 +635,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	struct virtio_vsock *vsock = vdev->priv;
 	struct virtio_vsock_pkt *pkt;
 
+	mutex_lock(&the_virtio_vsock_mutex);
+	the_virtio_vsock = NULL;
+
 	flush_work(&vsock->loopback_work);
 	flush_work(&vsock->rx_work);
 	flush_work(&vsock->tx_work);
@@ -667,13 +677,10 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	}
 	spin_unlock_bh(&vsock->loopback_list_lock);
 
-	mutex_lock(&the_virtio_vsock_mutex);
-	the_virtio_vsock = NULL;
-	mutex_unlock(&the_virtio_vsock_mutex);
-
 	vdev->config->del_vqs(vdev);
 
 	kfree(vsock);
+	mutex_unlock(&the_virtio_vsock_mutex);
 }
 
 static struct virtio_device_id id_table[] = {
-- 
2.20.1



* [PATCH 2/4] vsock/virtio: stop workers during the .remove()
  2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
  2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
@ 2019-05-28 10:56 ` Stefano Garzarella
  2019-05-28 10:56 ` [PATCH 3/4] vsock/virtio: fix flush of works " Stefano Garzarella
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-28 10:56 UTC (permalink / raw)
  To: netdev
  Cc: linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin

Before calling vdev->config->reset(vdev) we need to be sure that
no one is accessing the device. For this reason, we add new variables
to struct virtio_vsock to stop the workers during the .remove().

This patch also adds a few comments before vdev->config->reset(vdev)
and vdev->config->del_vqs(vdev).

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/virtio_transport.c | 49 +++++++++++++++++++++++++++++++-
 1 file changed, 48 insertions(+), 1 deletion(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index d3ba7747aa73..e694df10ab61 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -39,6 +39,7 @@ struct virtio_vsock {
 	 * must be accessed with tx_lock held.
 	 */
 	struct mutex tx_lock;
+	bool tx_run;
 
 	struct work_struct send_pkt_work;
 	spinlock_t send_pkt_list_lock;
@@ -54,6 +55,7 @@ struct virtio_vsock {
 	 * must be accessed with rx_lock held.
 	 */
 	struct mutex rx_lock;
+	bool rx_run;
 	int rx_buf_nr;
 	int rx_buf_max_nr;
 
@@ -61,6 +63,7 @@ struct virtio_vsock {
 	 * vqs[VSOCK_VQ_EVENT] must be accessed with event_lock held.
 	 */
 	struct mutex event_lock;
+	bool event_run;
 	struct virtio_vsock_event event_list[8];
 
 	u32 guest_cid;
@@ -98,6 +101,10 @@ static void virtio_transport_loopback_work(struct work_struct *work)
 	spin_unlock_bh(&vsock->loopback_list_lock);
 
 	mutex_lock(&vsock->rx_lock);
+
+	if (!vsock->rx_run)
+		goto out;
+
 	while (!list_empty(&pkts)) {
 		struct virtio_vsock_pkt *pkt;
 
@@ -106,6 +113,7 @@ static void virtio_transport_loopback_work(struct work_struct *work)
 
 		virtio_transport_recv_pkt(pkt);
 	}
+out:
 	mutex_unlock(&vsock->rx_lock);
 }
 
@@ -134,6 +142,9 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 
 	mutex_lock(&vsock->tx_lock);
 
+	if (!vsock->tx_run)
+		goto out;
+
 	vq = vsock->vqs[VSOCK_VQ_TX];
 
 	for (;;) {
@@ -192,6 +203,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 	if (added)
 		virtqueue_kick(vq);
 
+out:
 	mutex_unlock(&vsock->tx_lock);
 
 	if (restart_rx)
@@ -314,6 +326,10 @@ static void virtio_transport_tx_work(struct work_struct *work)
 
 	vq = vsock->vqs[VSOCK_VQ_TX];
 	mutex_lock(&vsock->tx_lock);
+
+	if (!vsock->tx_run)
+		goto out;
+
 	do {
 		struct virtio_vsock_pkt *pkt;
 		unsigned int len;
@@ -324,6 +340,8 @@ static void virtio_transport_tx_work(struct work_struct *work)
 			added = true;
 		}
 	} while (!virtqueue_enable_cb(vq));
+
+out:
 	mutex_unlock(&vsock->tx_lock);
 
 	if (added)
@@ -352,6 +370,9 @@ static void virtio_transport_rx_work(struct work_struct *work)
 
 	mutex_lock(&vsock->rx_lock);
 
+	if (!vsock->rx_run)
+		goto out;
+
 	do {
 		virtqueue_disable_cb(vq);
 		for (;;) {
@@ -461,6 +482,9 @@ static void virtio_transport_event_work(struct work_struct *work)
 
 	mutex_lock(&vsock->event_lock);
 
+	if (!vsock->event_run)
+		goto out;
+
 	do {
 		struct virtio_vsock_event *event;
 		unsigned int len;
@@ -475,7 +499,7 @@ static void virtio_transport_event_work(struct work_struct *work)
 	} while (!virtqueue_enable_cb(vq));
 
 	virtqueue_kick(vsock->vqs[VSOCK_VQ_EVENT]);
-
+out:
 	mutex_unlock(&vsock->event_lock);
 }
 
@@ -611,12 +635,18 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
 	INIT_WORK(&vsock->loopback_work, virtio_transport_loopback_work);
 
+	mutex_lock(&vsock->tx_lock);
+	vsock->tx_run = true;
+	mutex_unlock(&vsock->tx_lock);
+
 	mutex_lock(&vsock->rx_lock);
 	virtio_vsock_rx_fill(vsock);
+	vsock->rx_run = true;
 	mutex_unlock(&vsock->rx_lock);
 
 	mutex_lock(&vsock->event_lock);
 	virtio_vsock_event_fill(vsock);
+	vsock->event_run = true;
 	mutex_unlock(&vsock->event_lock);
 
 	the_virtio_vsock = vsock;
@@ -647,6 +677,22 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	/* Reset all connected sockets when the device disappear */
 	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
 
+	/* Stop all work handlers to make sure no one is accessing the device */
+	mutex_lock(&vsock->rx_lock);
+	vsock->rx_run = false;
+	mutex_unlock(&vsock->rx_lock);
+
+	mutex_lock(&vsock->tx_lock);
+	vsock->tx_run = false;
+	mutex_unlock(&vsock->tx_lock);
+
+	mutex_lock(&vsock->event_lock);
+	vsock->event_run = false;
+	mutex_unlock(&vsock->event_lock);
+
+	/* Flush all device writes and interrupts, device will not use any
+	 * more buffers.
+	 */
 	vdev->config->reset(vdev);
 
 	mutex_lock(&vsock->rx_lock);
@@ -677,6 +723,7 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	}
 	spin_unlock_bh(&vsock->loopback_list_lock);
 
+	/* Delete virtqueues and flush outstanding callbacks if any */
 	vdev->config->del_vqs(vdev);
 
 	kfree(vsock);
-- 
2.20.1



* [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
  2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
  2019-05-28 10:56 ` [PATCH 2/4] vsock/virtio: stop workers during the .remove() Stefano Garzarella
@ 2019-05-28 10:56 ` " Stefano Garzarella
  2019-05-29  3:22   ` Jason Wang
  2019-05-28 10:56 ` [PATCH 4/4] vsock/virtio: free used buffers " Stefano Garzarella
  2019-06-10 13:09 ` [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefan Hajnoczi
  4 siblings, 1 reply; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-28 10:56 UTC (permalink / raw)
  To: netdev
  Cc: linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin

We flush all pending works before calling vdev->config->reset(vdev),
but other works can still be queued before vdev->config->del_vqs(vdev),
so we add another flush after it, to avoid a use after free.

Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index e694df10ab61..ad093ce96693 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	return ret;
 }
 
+static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
+{
+	flush_work(&vsock->loopback_work);
+	flush_work(&vsock->rx_work);
+	flush_work(&vsock->tx_work);
+	flush_work(&vsock->event_work);
+	flush_work(&vsock->send_pkt_work);
+}
+
 static void virtio_vsock_remove(struct virtio_device *vdev)
 {
 	struct virtio_vsock *vsock = vdev->priv;
@@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	mutex_lock(&the_virtio_vsock_mutex);
 	the_virtio_vsock = NULL;
 
-	flush_work(&vsock->loopback_work);
-	flush_work(&vsock->rx_work);
-	flush_work(&vsock->tx_work);
-	flush_work(&vsock->event_work);
-	flush_work(&vsock->send_pkt_work);
-
 	/* Reset all connected sockets when the device disappear */
 	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
 
@@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	vsock->event_run = false;
 	mutex_unlock(&vsock->event_lock);
 
+	/* Flush all pending works */
+	virtio_vsock_flush_works(vsock);
+
 	/* Flush all device writes and interrupts, device will not use any
 	 * more buffers.
 	 */
@@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	/* Delete virtqueues and flush outstanding callbacks if any */
 	vdev->config->del_vqs(vdev);
 
+	/* Other works can be queued before 'config->del_vqs()', so we flush
+	 * all works before to free the vsock object to avoid use after free.
+	 */
+	virtio_vsock_flush_works(vsock);
+
 	kfree(vsock);
 	mutex_unlock(&the_virtio_vsock_mutex);
 }
-- 
2.20.1



* [PATCH 4/4] vsock/virtio: free used buffers during the .remove()
  2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
                   ` (2 preceding siblings ...)
  2019-05-28 10:56 ` [PATCH 3/4] vsock/virtio: fix flush of works " Stefano Garzarella
@ 2019-05-28 10:56 ` " Stefano Garzarella
  2019-06-10 13:09 ` [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefan Hajnoczi
  4 siblings, 0 replies; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-28 10:56 UTC (permalink / raw)
  To: netdev
  Cc: linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin

Before this patch, we freed only the unused buffers, but there may
still be used buffers waiting to be freed.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/virtio_transport.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index ad093ce96693..6a2afb989562 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -669,6 +669,18 @@ static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
 	flush_work(&vsock->send_pkt_work);
 }
 
+static void virtio_vsock_free_buf(struct virtqueue *vq)
+{
+	struct virtio_vsock_pkt *pkt;
+	unsigned int len;
+
+	while ((pkt = virtqueue_detach_unused_buf(vq)))
+		virtio_transport_free_pkt(pkt);
+
+	while ((pkt = virtqueue_get_buf(vq, &len)))
+		virtio_transport_free_pkt(pkt);
+}
+
 static void virtio_vsock_remove(struct virtio_device *vdev)
 {
 	struct virtio_vsock *vsock = vdev->priv;
@@ -702,13 +714,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	vdev->config->reset(vdev);
 
 	mutex_lock(&vsock->rx_lock);
-	while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
-		virtio_transport_free_pkt(pkt);
+	virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_RX]);
 	mutex_unlock(&vsock->rx_lock);
 
 	mutex_lock(&vsock->tx_lock);
-	while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_TX])))
-		virtio_transport_free_pkt(pkt);
+	virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_TX]);
 	mutex_unlock(&vsock->tx_lock);
 
 	spin_lock_bh(&vsock->send_pkt_list_lock);
-- 
2.20.1



* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-28 10:56 ` [PATCH 3/4] vsock/virtio: fix flush of works " Stefano Garzarella
@ 2019-05-29  3:22   ` Jason Wang
  2019-05-29 10:58     ` Stefano Garzarella
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Wang @ 2019-05-29  3:22 UTC (permalink / raw)
  To: Stefano Garzarella, netdev
  Cc: linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin


On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> We flush all pending works before to call vdev->config->reset(vdev),
> but other works can be queued before the vdev->config->del_vqs(vdev),
> so we add another flush after it, to avoid use after free.
>
> Suggested-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> ---
>   net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
>   1 file changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index e694df10ab61..ad093ce96693 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
>   	return ret;
>   }
>   
> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> +{
> +	flush_work(&vsock->loopback_work);
> +	flush_work(&vsock->rx_work);
> +	flush_work(&vsock->tx_work);
> +	flush_work(&vsock->event_work);
> +	flush_work(&vsock->send_pkt_work);
> +}
> +
>   static void virtio_vsock_remove(struct virtio_device *vdev)
>   {
>   	struct virtio_vsock *vsock = vdev->priv;
> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>   	mutex_lock(&the_virtio_vsock_mutex);
>   	the_virtio_vsock = NULL;
>   
> -	flush_work(&vsock->loopback_work);
> -	flush_work(&vsock->rx_work);
> -	flush_work(&vsock->tx_work);
> -	flush_work(&vsock->event_work);
> -	flush_work(&vsock->send_pkt_work);
> -
>   	/* Reset all connected sockets when the device disappear */
>   	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>   
> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>   	vsock->event_run = false;
>   	mutex_unlock(&vsock->event_lock);
>   
> +	/* Flush all pending works */
> +	virtio_vsock_flush_works(vsock);
> +
>   	/* Flush all device writes and interrupts, device will not use any
>   	 * more buffers.
>   	 */
> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>   	/* Delete virtqueues and flush outstanding callbacks if any */
>   	vdev->config->del_vqs(vdev);
>   
> +	/* Other works can be queued before 'config->del_vqs()', so we flush
> +	 * all works before to free the vsock object to avoid use after free.
> +	 */
> +	virtio_vsock_flush_works(vsock);


Some questions after a quick glance:

1) It looks to me that the work could be queued from the path of
vsock_transport_cancel_pkt(). Is that synchronized here?

2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run
still needed? It looks to me we're already done, except that we need to
flush rx_work at the end since send_pkt_work can requeue rx_work.

Thanks


> +
>   	kfree(vsock);
>   	mutex_unlock(&the_virtio_vsock_mutex);
>   }


* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-29  3:22   ` Jason Wang
@ 2019-05-29 10:58     ` Stefano Garzarella
  2019-05-30  9:46       ` Jason Wang
  0 siblings, 1 reply; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-29 10:58 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin

On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> 
> On 2019/5/28 下午6:56, Stefano Garzarella wrote:
> > We flush all pending works before to call vdev->config->reset(vdev),
> > but other works can be queued before the vdev->config->del_vqs(vdev),
> > so we add another flush after it, to avoid use after free.
> > 
> > Suggested-by: Michael S. Tsirkin <mst@redhat.com>
> > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > ---
> >   net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> >   1 file changed, 17 insertions(+), 6 deletions(-)
> > 
> > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > index e694df10ab61..ad093ce96693 100644
> > --- a/net/vmw_vsock/virtio_transport.c
> > +++ b/net/vmw_vsock/virtio_transport.c
> > @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> >   	return ret;
> >   }
> > +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> > +{
> > +	flush_work(&vsock->loopback_work);
> > +	flush_work(&vsock->rx_work);
> > +	flush_work(&vsock->tx_work);
> > +	flush_work(&vsock->event_work);
> > +	flush_work(&vsock->send_pkt_work);
> > +}
> > +
> >   static void virtio_vsock_remove(struct virtio_device *vdev)
> >   {
> >   	struct virtio_vsock *vsock = vdev->priv;
> > @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> >   	mutex_lock(&the_virtio_vsock_mutex);
> >   	the_virtio_vsock = NULL;
> > -	flush_work(&vsock->loopback_work);
> > -	flush_work(&vsock->rx_work);
> > -	flush_work(&vsock->tx_work);
> > -	flush_work(&vsock->event_work);
> > -	flush_work(&vsock->send_pkt_work);
> > -
> >   	/* Reset all connected sockets when the device disappear */
> >   	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> >   	vsock->event_run = false;
> >   	mutex_unlock(&vsock->event_lock);
> > +	/* Flush all pending works */
> > +	virtio_vsock_flush_works(vsock);
> > +
> >   	/* Flush all device writes and interrupts, device will not use any
> >   	 * more buffers.
> >   	 */
> > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> >   	/* Delete virtqueues and flush outstanding callbacks if any */
> >   	vdev->config->del_vqs(vdev);
> > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > +	 * all works before to free the vsock object to avoid use after free.
> > +	 */
> > +	virtio_vsock_flush_works(vsock);
> 
> 
> Some questions after a quick glance:
> 
> 1) It looks to me that the work could be queued from the path of
> vsock_transport_cancel_pkt() . Is that synchronized here?
>

Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
queue work from the upper layer (socket).

Setting the_virtio_vsock to NULL should synchronize them, but after a
more careful look a rare issue could happen:
we set the_virtio_vsock to NULL at the start of .remove() and we
free the object it points to at the end of .remove(), so
virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
running, accessing the object that we have freed.

Should I use something like RCU to prevent this issue?

    virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
    {
        rcu_read_lock();
        vsock = rcu_dereference(the_virtio_vsock);
        ...
        rcu_read_unlock();
    }

    virtio_vsock_remove()
    {
        rcu_assign_pointer(the_virtio_vsock, NULL);
        synchronize_rcu();

        ...

        free(vsock);
    }

Could there be a better approach?


> 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
> needed? It looks to me we've already done except that we need flush rx_work
> in the end since send_pkt_work can requeue rx_work.

The main reason for tx_run/rx_run/event_run is to prevent a worker
function from running while we are calling config->reset().

E.g. if an interrupt comes between virtio_vsock_flush_works() and
config->reset(), it can queue new works that access the device while
we are in config->reset().

IMHO they are still needed.

What do you think?


Thanks for your questions,
Stefano
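The RCU-style scheme sketched above can be modeled in userspace with C11
atomics standing in for rcu_assign_pointer()/rcu_dereference(). This is only
an illustrative sketch under stated assumptions: the names are invented, and
a single-threaded model cannot show the synchronize_rcu() grace period that
makes the final free safe in the real driver.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Userspace model of the publish/unpublish pattern proposed above.
 * C11 acquire/release atomics stand in for the kernel's
 * rcu_dereference()/rcu_assign_pointer(); real code would also call
 * synchronize_rcu() before freeing. All names are illustrative.
 */
struct vsock {
	int guest_cid;
};

static _Atomic(struct vsock *) the_vsock; /* plays the_virtio_vsock */

/* Reader side, as in virtio_transport_send_pkt() or
 * vsock_transport_cancel_pkt(): load the pointer, which is either
 * NULL or a fully published object.
 */
static struct vsock *vsock_get(void)
{
	/* acquire pairs with the release in vsock_publish() */
	return atomic_load_explicit(&the_vsock, memory_order_acquire);
}

/* .probe() side: publish only after the object is fully built */
static void vsock_publish(struct vsock *v)
{
	atomic_store_explicit(&the_vsock, v, memory_order_release);
}

/* .remove() side: unpublish first; the caller may free the returned
 * object only after readers are known to be done (the grace period).
 */
static struct vsock *vsock_unpublish(void)
{
	return atomic_exchange_explicit(&the_vsock, NULL,
					memory_order_acq_rel);
}
```

After unpublishing, new readers see NULL immediately; the grace period only
has to cover readers that loaded the old pointer before the exchange.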


* Re: [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'
  2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
@ 2019-05-30  4:28   ` David Miller
  2019-05-30 10:27     ` Stefano Garzarella
  0 siblings, 1 reply; 19+ messages in thread
From: David Miller @ 2019-05-30  4:28 UTC (permalink / raw)
  To: sgarzare; +Cc: netdev, linux-kernel, virtualization, kvm, stefanha, mst

From: Stefano Garzarella <sgarzare@redhat.com>
Date: Tue, 28 May 2019 12:56:20 +0200

> @@ -68,7 +68,13 @@ struct virtio_vsock {
>  
>  static struct virtio_vsock *virtio_vsock_get(void)
>  {
> -	return the_virtio_vsock;
> +	struct virtio_vsock *vsock;
> +
> +	mutex_lock(&the_virtio_vsock_mutex);
> +	vsock = the_virtio_vsock;
> +	mutex_unlock(&the_virtio_vsock_mutex);
> +
> +	return vsock;

This doesn't do anything as far as I can tell.

No matter what, you will either get the value before it's changed or
after it's changed.

Since you should never publish the pointer by assigning it until the
object is fully initialized, this can never be a problem even without
the mutex being there.

Even if you sampled the the_virtio_vsock value right before it's being
set to NULL by the remove function, that can still happen with the
mutex held too.

This function is also terribly named, btw; it implies that a reference
count is being taken. But that's not what this function does: it
just returns the pointer value as-is.
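David's point is that if the pointer is published only after the object is
fully initialized, no lock is needed just to read it. A minimal userspace
analogue (not the kernel code; the names vsock_dev, dev_probe() and
dev_get() are invented for illustration, and a real SMP kernel would also
want an smp_store_release()/smp_load_acquire() pair):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of publish-after-init: readers can only ever see NULL or a
 * fully built object, so the getter needs no mutex for correctness.
 */
struct vsock_dev {
	int initialized; /* stands in for vqs, locks, work structs, ... */
};

static struct vsock_dev *the_dev; /* plays the role of the_virtio_vsock */

static struct vsock_dev *dev_get(void)
{
	/* No lock: the pointer is either NULL or points to a ready
	 * object, because dev_probe() publishes it last. */
	return the_dev;
}

static void dev_probe(struct vsock_dev *d)
{
	d->initialized = 1; /* finish ALL initialization first... */
	the_dev = d;        /* ...and only then publish the pointer */
}
```

This mirrors what patch 1 does by moving the assignment to the end of
.probe(); the remaining problem David describes (the object being freed
right after the getter returns) is the lifetime issue, which the mutex in
the getter cannot solve either.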


* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-29 10:58     ` Stefano Garzarella
@ 2019-05-30  9:46       ` Jason Wang
  2019-05-30 10:10         ` Stefano Garzarella
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Wang @ 2019-05-30  9:46 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: netdev, linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller, Michael S . Tsirkin


On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>> On 2019/5/28 下午6:56, Stefano Garzarella wrote:
>>> We flush all pending works before to call vdev->config->reset(vdev),
>>> but other works can be queued before the vdev->config->del_vqs(vdev),
>>> so we add another flush after it, to avoid use after free.
>>>
>>> Suggested-by: Michael S. Tsirkin <mst@redhat.com>
>>> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
>>> ---
>>>    net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
>>>    1 file changed, 17 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>> index e694df10ab61..ad093ce96693 100644
>>> --- a/net/vmw_vsock/virtio_transport.c
>>> +++ b/net/vmw_vsock/virtio_transport.c
>>> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
>>>    	return ret;
>>>    }
>>> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
>>> +{
>>> +	flush_work(&vsock->loopback_work);
>>> +	flush_work(&vsock->rx_work);
>>> +	flush_work(&vsock->tx_work);
>>> +	flush_work(&vsock->event_work);
>>> +	flush_work(&vsock->send_pkt_work);
>>> +}
>>> +
>>>    static void virtio_vsock_remove(struct virtio_device *vdev)
>>>    {
>>>    	struct virtio_vsock *vsock = vdev->priv;
>>> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>    	mutex_lock(&the_virtio_vsock_mutex);
>>>    	the_virtio_vsock = NULL;
>>> -	flush_work(&vsock->loopback_work);
>>> -	flush_work(&vsock->rx_work);
>>> -	flush_work(&vsock->tx_work);
>>> -	flush_work(&vsock->event_work);
>>> -	flush_work(&vsock->send_pkt_work);
>>> -
>>>    	/* Reset all connected sockets when the device disappear */
>>>    	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>    	vsock->event_run = false;
>>>    	mutex_unlock(&vsock->event_lock);
>>> +	/* Flush all pending works */
>>> +	virtio_vsock_flush_works(vsock);
>>> +
>>>    	/* Flush all device writes and interrupts, device will not use any
>>>    	 * more buffers.
>>>    	 */
>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>    	/* Delete virtqueues and flush outstanding callbacks if any */
>>>    	vdev->config->del_vqs(vdev);
>>> +	/* Other works can be queued before 'config->del_vqs()', so we flush
>>> +	 * all works before to free the vsock object to avoid use after free.
>>> +	 */
>>> +	virtio_vsock_flush_works(vsock);
>>
>> Some questions after a quick glance:
>>
>> 1) It looks to me that the work could be queued from the path of
>> vsock_transport_cancel_pkt() . Is that synchronized here?
>>
> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> queue work from the upper layer (socket).
>
> Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> a rare issue could happen:
> we are setting the_virtio_vsock to NULL at the start of .remove() and we
> are freeing the object pointed by it at the end of .remove(), so
> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> running, accessing the object that we are freed.


Yes, that's my point.


>
> Should I use something like RCU to prevent this issue?
>
>      virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>      {
>          rcu_read_lock();
>          vsock = rcu_dereference(the_virtio_vsock_mutex);


RCU is probably the way to go (like what vhost_transport_send_pkt() did).


>          ...
>          rcu_read_unlock();
>      }
>
>      virtio_vsock_remove()
>      {
>          rcu_assign_pointer(the_virtio_vsock_mutex, NULL);
>          synchronize_rcu();
>
>          ...
>
>          free(vsock);
>      }
>
> Could there be a better approach?
>
>
>> 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
>> needed? It looks to me we've already done except that we need flush rx_work
>> in the end since send_pkt_work can requeue rx_work.
> The main reason of tx_run/rx_run/event_run is to prevent that a worker
> function is running while we are calling config->reset().
>
> E.g. if an interrupt comes between virtio_vsock_flush_works() and
> config->reset(), it can queue new works that can access the device while
> we are in config->reset().
>
> IMHO they are still needed.
>
> What do you think?


I mean, could we simply do the flush once, after reset, and without
the tx_run/rx_run tricks?

reset();

virtio_vsock_flush_works();

virtio_vsock_free_buf();


Thanks


>
>
> Thanks for your questions,
> Stefano


* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-30  9:46       ` Jason Wang
@ 2019-05-30 10:10         ` Stefano Garzarella
  2019-05-30 11:59           ` Jason Wang
  0 siblings, 1 reply; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-30 10:10 UTC (permalink / raw)
  To: Jason Wang, Michael S . Tsirkin
  Cc: netdev, linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller

On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> 
> On 2019/5/29 下午6:58, Stefano Garzarella wrote:
> > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > On 2019/5/28 下午6:56, Stefano Garzarella wrote:
> > > > We flush all pending works before to call vdev->config->reset(vdev),
> > > > but other works can be queued before the vdev->config->del_vqs(vdev),
> > > > so we add another flush after it, to avoid use after free.
> > > > 
> > > > Suggested-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > > > ---
> > > >    net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> > > >    1 file changed, 17 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > > > index e694df10ab61..ad093ce96693 100644
> > > > --- a/net/vmw_vsock/virtio_transport.c
> > > > +++ b/net/vmw_vsock/virtio_transport.c
> > > > @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> > > >    	return ret;
> > > >    }
> > > > +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> > > > +{
> > > > +	flush_work(&vsock->loopback_work);
> > > > +	flush_work(&vsock->rx_work);
> > > > +	flush_work(&vsock->tx_work);
> > > > +	flush_work(&vsock->event_work);
> > > > +	flush_work(&vsock->send_pkt_work);
> > > > +}
> > > > +
> > > >    static void virtio_vsock_remove(struct virtio_device *vdev)
> > > >    {
> > > >    	struct virtio_vsock *vsock = vdev->priv;
> > > > @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > >    	mutex_lock(&the_virtio_vsock_mutex);
> > > >    	the_virtio_vsock = NULL;
> > > > -	flush_work(&vsock->loopback_work);
> > > > -	flush_work(&vsock->rx_work);
> > > > -	flush_work(&vsock->tx_work);
> > > > -	flush_work(&vsock->event_work);
> > > > -	flush_work(&vsock->send_pkt_work);
> > > > -
> > > >    	/* Reset all connected sockets when the device disappear */
> > > >    	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > >    	vsock->event_run = false;
> > > >    	mutex_unlock(&vsock->event_lock);
> > > > +	/* Flush all pending works */
> > > > +	virtio_vsock_flush_works(vsock);
> > > > +
> > > >    	/* Flush all device writes and interrupts, device will not use any
> > > >    	 * more buffers.
> > > >    	 */
> > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > >    	/* Delete virtqueues and flush outstanding callbacks if any */
> > > >    	vdev->config->del_vqs(vdev);
> > > > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > > > +	 * all works before to free the vsock object to avoid use after free.
> > > > +	 */
> > > > +	virtio_vsock_flush_works(vsock);
> > > 
> > > Some questions after a quick glance:
> > > 
> > > 1) It looks to me that the work could be queued from the path of
> > > vsock_transport_cancel_pkt() . Is that synchronized here?
> > > 
> > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > queue work from the upper layer (socket).
> > 
> > Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> > a rare issue could happen:
> > we are setting the_virtio_vsock to NULL at the start of .remove() and we
> > are freeing the object pointed by it at the end of .remove(), so
> > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > running, accessing the object that we are freed.
> 
> 
> Yes, that's my point.
> 
> 
> > 
> > Should I use something like RCU to prevent this issue?
> > 
> >      virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> >      {
> >          rcu_read_lock();
> >          vsock = rcu_dereference(the_virtio_vsock);
> 
> 
> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
> 

Okay, I'm going this way.

> 
> >          ...
> >          rcu_read_unlock();
> >      }
> > 
> >      virtio_vsock_remove()
> >      {
> >          rcu_assign_pointer(the_virtio_vsock, NULL);
> >          synchronize_rcu();
> > 
> >          ...
> > 
> >          free(vsock);
> >      }
> > 
> > Could there be a better approach?
> > 
> > 
> > > 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
> > > needed? It looks to me we've already done except that we need flush rx_work
> > > in the end since send_pkt_work can requeue rx_work.
> > The main reason of tx_run/rx_run/event_run is to prevent that a worker
> > function is running while we are calling config->reset().
> > 
> > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > config->reset(), it can queue new works that can access the device while
> > we are in config->reset().
> > 
> > IMHO they are still needed.
> > 
> > What do you think?
> 
> 
> I mean, could we simply flush once after reset, without the tx_run/rx_run
> tricks?
> 
> reset();
> 
> virtio_vsock_flush_works();
> 
> virtio_vsock_free_buf();

My only doubt is: is it safe to call config->reset() while a worker
function could still be accessing the device?

I had this doubt after reading Michael's advice [1] and looking at
virtnet_remove(), where these lines appear before the config->reset():

	/* Make sure no work handler is accessing the device. */
	flush_work(&vi->config_work);

Thanks,
Stefano

[1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
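[ Editor's note: for reference, the RCU scheme being discussed could look
roughly like this in the transport code. This is a non-buildable sketch of
the proposed approach, not the final patch; error paths and the actual
packet handling are elided, and the mutex is assumed to keep serializing
probe/remove as Stefan notes later in the thread. ]

```c
/*
 * Sketch of the RCU-based lifetime scheme discussed above (assumed
 * shape, not the final patch): the global pointer is dereferenced only
 * inside an RCU read-side critical section, so .remove() can wait for
 * all in-flight readers before freeing the object.
 */
static struct virtio_vsock __rcu *the_virtio_vsock;
static DEFINE_MUTEX(the_virtio_vsock_mutex); /* still serializes probe/remove */

static int virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
{
	struct virtio_vsock *vsock;
	int len = pkt->len;

	rcu_read_lock();
	vsock = rcu_dereference(the_virtio_vsock);
	if (!vsock) {
		rcu_read_unlock();
		virtio_transport_free_pkt(pkt);
		return -ENODEV;
	}

	/* ... queue pkt and schedule send_pkt_work ... */

	rcu_read_unlock();
	return len;
}

static void virtio_vsock_remove(struct virtio_device *vdev)
{
	struct virtio_vsock *vsock = vdev->priv;

	mutex_lock(&the_virtio_vsock_mutex);

	/* Unpublish, then wait for readers that saw the old pointer. */
	rcu_assign_pointer(the_virtio_vsock, NULL);
	synchronize_rcu();

	/* ... stop workers, config->reset(), flush works, del_vqs() ... */

	mutex_unlock(&the_virtio_vsock_mutex);
	kfree(vsock);
}
```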


* Re: [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'
  2019-05-30  4:28   ` David Miller
@ 2019-05-30 10:27     ` Stefano Garzarella
  0 siblings, 0 replies; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-30 10:27 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, linux-kernel, virtualization, kvm, stefanha, mst

On Wed, May 29, 2019 at 09:28:52PM -0700, David Miller wrote:
> From: Stefano Garzarella <sgarzare@redhat.com>
> Date: Tue, 28 May 2019 12:56:20 +0200
> 
> > @@ -68,7 +68,13 @@ struct virtio_vsock {
> >  
> >  static struct virtio_vsock *virtio_vsock_get(void)
> >  {
> > -	return the_virtio_vsock;
> > +	struct virtio_vsock *vsock;
> > +
> > +	mutex_lock(&the_virtio_vsock_mutex);
> > +	vsock = the_virtio_vsock;
> > +	mutex_unlock(&the_virtio_vsock_mutex);
> > +
> > +	return vsock;
> 
> This doesn't do anything as far as I can tell.
> 
> No matter what, you will either get the value before it's changed or
> after it's changed.
> 
> Since you should never publish the pointer by assigning it until the
> object is fully initialized, this can never be a problem even without
> the mutex being there.
> 
> Even if you sampled the the_virtio_sock value right before it's being
> set to NULL by the remove function, that still can happen with the
> mutex held too.

Yes, I realized this while answering Jason's question [1]. :(

I proposed using RCU to solve this issue; do you agree?
Let me know if there is a better way.

> 
> This function is also terribly named btw, it implies that a reference
> count is being taken.  But that's not what this function does, it
> just returns the pointer value as-is.

What do you think about removing the function and using the_virtio_vsock
directly? (That should be easier with the RCU API.)

Thanks,
Stefano

[1] https://lore.kernel.org/netdev/20190529105832.oz3sagbne5teq3nt@steredhat


* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-30 10:10         ` Stefano Garzarella
@ 2019-05-30 11:59           ` Jason Wang
  2019-05-31  8:18             ` Stefano Garzarella
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Wang @ 2019-05-30 11:59 UTC (permalink / raw)
  To: Stefano Garzarella, Michael S . Tsirkin
  Cc: netdev, linux-kernel, virtualization, kvm, Stefan Hajnoczi,
	David S. Miller


On 2019/5/30 下午6:10, Stefano Garzarella wrote:
> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>> On 2019/5/29 下午6:58, Stefano Garzarella wrote:
>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>> On 2019/5/28 下午6:56, Stefano Garzarella wrote:
>>>>> We flush all pending works before to call vdev->config->reset(vdev),
>>>>> but other works can be queued before the vdev->config->del_vqs(vdev),
>>>>> so we add another flush after it, to avoid use after free.
>>>>>
>>>>> Suggested-by: Michael S. Tsirkin <mst@redhat.com>
>>>>> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
>>>>> ---
>>>>>     net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
>>>>>     1 file changed, 17 insertions(+), 6 deletions(-)
>>>>>
>>>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>>>> index e694df10ab61..ad093ce96693 100644
>>>>> --- a/net/vmw_vsock/virtio_transport.c
>>>>> +++ b/net/vmw_vsock/virtio_transport.c
>>>>> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
>>>>>     	return ret;
>>>>>     }
>>>>> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
>>>>> +{
>>>>> +	flush_work(&vsock->loopback_work);
>>>>> +	flush_work(&vsock->rx_work);
>>>>> +	flush_work(&vsock->tx_work);
>>>>> +	flush_work(&vsock->event_work);
>>>>> +	flush_work(&vsock->send_pkt_work);
>>>>> +}
>>>>> +
>>>>>     static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     {
>>>>>     	struct virtio_vsock *vsock = vdev->priv;
>>>>> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     	mutex_lock(&the_virtio_vsock_mutex);
>>>>>     	the_virtio_vsock = NULL;
>>>>> -	flush_work(&vsock->loopback_work);
>>>>> -	flush_work(&vsock->rx_work);
>>>>> -	flush_work(&vsock->tx_work);
>>>>> -	flush_work(&vsock->event_work);
>>>>> -	flush_work(&vsock->send_pkt_work);
>>>>> -
>>>>>     	/* Reset all connected sockets when the device disappear */
>>>>>     	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     	vsock->event_run = false;
>>>>>     	mutex_unlock(&vsock->event_lock);
>>>>> +	/* Flush all pending works */
>>>>> +	virtio_vsock_flush_works(vsock);
>>>>> +
>>>>>     	/* Flush all device writes and interrupts, device will not use any
>>>>>     	 * more buffers.
>>>>>     	 */
>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     	/* Delete virtqueues and flush outstanding callbacks if any */
>>>>>     	vdev->config->del_vqs(vdev);
>>>>> +	/* Other works can be queued before 'config->del_vqs()', so we flush
>>>>> +	 * all works before to free the vsock object to avoid use after free.
>>>>> +	 */
>>>>> +	virtio_vsock_flush_works(vsock);
>>>> Some questions after a quick glance:
>>>>
>>>> 1) It looks to me that the work could be queued from the path of
>>>> vsock_transport_cancel_pkt() . Is that synchronized here?
>>>>
>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>> queue work from the upper layer (socket).
>>>
>>> Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
>>> a rare issue could happen:
>>> we are setting the_virtio_vsock to NULL at the start of .remove() and we
>>> are freeing the object pointed by it at the end of .remove(), so
>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>> running, accessing the object that we are freed.
>>
>> Yes, that's my point.
>>
>>
>>> Should I use something like RCU to prevent this issue?
>>>
>>>       virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>>       {
>>>           rcu_read_lock();
>>>           vsock = rcu_dereference(the_virtio_vsock);
>>
>> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>>
> Okay, I'm going this way.
>
>>>           ...
>>>           rcu_read_unlock();
>>>       }
>>>
>>>       virtio_vsock_remove()
>>>       {
>>>           rcu_assign_pointer(the_virtio_vsock, NULL);
>>>           synchronize_rcu();
>>>
>>>           ...
>>>
>>>           free(vsock);
>>>       }
>>>
>>> Could there be a better approach?
>>>
>>>
>>>> 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
>>>> needed? It looks to me we've already done except that we need flush rx_work
>>>> in the end since send_pkt_work can requeue rx_work.
>>> The main reason of tx_run/rx_run/event_run is to prevent that a worker
>>> function is running while we are calling config->reset().
>>>
>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>> config->reset(), it can queue new works that can access the device while
>>> we are in config->reset().
>>>
>>> IMHO they are still needed.
>>>
>>> What do you think?
>>
>> I mean, could we simply flush once after reset, without the tx_run/rx_run
>> tricks?
>>
>> reset();
>>
>> virtio_vsock_flush_works();
>>
>> virtio_vsock_free_buf();
> My only doubt is:
> is it safe to call config->reset() while a worker function could access
> the device?
>
> I had this doubt reading the Michael's advice[1] and looking at
> virtnet_remove() where there are these lines before the config->reset():
>
> 	/* Make sure no work handler is accessing the device. */
> 	flush_work(&vi->config_work);
>
> Thanks,
> Stefano
>
> [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org


Good point. Then I agree with you. But if we could use RCU to detect the
detach of the device from the socket for these cases, it would be even better.

Thanks




* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-30 11:59           ` Jason Wang
@ 2019-05-31  8:18             ` Stefano Garzarella
  2019-05-31  9:56               ` Jason Wang
  0 siblings, 1 reply; 19+ messages in thread
From: Stefano Garzarella @ 2019-05-31  8:18 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S . Tsirkin, netdev, linux-kernel, virtualization, kvm,
	Stefan Hajnoczi, David S. Miller

On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> 
> On 2019/5/30 下午6:10, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > On 2019/5/29 下午6:58, Stefano Garzarella wrote:
> > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > On 2019/5/28 下午6:56, Stefano Garzarella wrote:
> > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > >     	vsock->event_run = false;
> > > > > >     	mutex_unlock(&vsock->event_lock);
> > > > > > +	/* Flush all pending works */
> > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > +
> > > > > >     	/* Flush all device writes and interrupts, device will not use any
> > > > > >     	 * more buffers.
> > > > > >     	 */
> > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > >     	/* Delete virtqueues and flush outstanding callbacks if any */
> > > > > >     	vdev->config->del_vqs(vdev);
> > > > > > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > +	 * all works before to free the vsock object to avoid use after free.
> > > > > > +	 */
> > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > Some questions after a quick glance:
> > > > > 
> > > > > 1) It looks to me that the work could be queued from the path of
> > > > > vsock_transport_cancel_pkt() . Is that synchronized here?
> > > > > 
> > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > queue work from the upper layer (socket).
> > > > 
> > > > Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> > > > a rare issue could happen:
> > > > we are setting the_virtio_vsock to NULL at the start of .remove() and we
> > > > are freeing the object pointed by it at the end of .remove(), so
> > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > running, accessing the object that we are freed.
> > > 
> > > Yes, that's my point.
> > > 
> > > 
> > > > Should I use something like RCU to prevent this issue?
> > > > 
> > > >       virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > >       {
> > > >           rcu_read_lock();
> > > >           vsock = rcu_dereference(the_virtio_vsock);
> > > 
> > > RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
> > > 
> > Okay, I'm going this way.
> > 
> > > >           ...
> > > >           rcu_read_unlock();
> > > >       }
> > > > 
> > > >       virtio_vsock_remove()
> > > >       {
> > > >           rcu_assign_pointer(the_virtio_vsock, NULL);
> > > >           synchronize_rcu();
> > > > 
> > > >           ...
> > > > 
> > > >           free(vsock);
> > > >       }
> > > > 
> > > > Could there be a better approach?
> > > > 
> > > > 
> > > > > 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
> > > > > needed? It looks to me we've already done except that we need flush rx_work
> > > > > in the end since send_pkt_work can requeue rx_work.
> > > > The main reason of tx_run/rx_run/event_run is to prevent that a worker
> > > > function is running while we are calling config->reset().
> > > > 
> > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > config->reset(), it can queue new works that can access the device while
> > > > we are in config->reset().
> > > > 
> > > > IMHO they are still needed.
> > > > 
> > > > What do you think?
> > > 
> > > I mean, could we simply flush once after reset, without the tx_run/rx_run
> > > tricks?
> > > 
> > > reset();
> > > 
> > > virtio_vsock_flush_works();
> > > 
> > > virtio_vsock_free_buf();
> > My only doubt is:
> > is it safe to call config->reset() while a worker function could access
> > the device?
> > 
> > I had this doubt reading the Michael's advice[1] and looking at
> > virtnet_remove() where there are these lines before the config->reset():
> > 
> > 	/* Make sure no work handler is accessing the device. */
> > 	flush_work(&vi->config_work);
> > 
> > Thanks,
> > Stefano
> > 
> > [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
> 
> 
> Good point. Then I agree with you. But if we can use the RCU to detect the
> detach of device from socket for these, it would be even better.
> 

What about checking 'the_virtio_vsock' in the worker functions inside an RCU
critical section? That way, I could remove rx_run/tx_run/event_run.

Do you think it's cleaner?

Thank you very much,
Stefano


* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-31  8:18             ` Stefano Garzarella
@ 2019-05-31  9:56               ` Jason Wang
  2019-06-06  8:11                 ` Stefano Garzarella
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Wang @ 2019-05-31  9:56 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Michael S . Tsirkin, netdev, linux-kernel, virtualization, kvm,
	Stefan Hajnoczi, David S. Miller


On 2019/5/31 下午4:18, Stefano Garzarella wrote:
> On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
>> On 2019/5/30 下午6:10, Stefano Garzarella wrote:
>>> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>>>> On 2019/5/29 下午6:58, Stefano Garzarella wrote:
>>>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>>>> On 2019/5/28 下午6:56, Stefano Garzarella wrote:
>>>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>>      	vsock->event_run = false;
>>>>>>>      	mutex_unlock(&vsock->event_lock);
>>>>>>> +	/* Flush all pending works */
>>>>>>> +	virtio_vsock_flush_works(vsock);
>>>>>>> +
>>>>>>>      	/* Flush all device writes and interrupts, device will not use any
>>>>>>>      	 * more buffers.
>>>>>>>      	 */
>>>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>>      	/* Delete virtqueues and flush outstanding callbacks if any */
>>>>>>>      	vdev->config->del_vqs(vdev);
>>>>>>> +	/* Other works can be queued before 'config->del_vqs()', so we flush
>>>>>>> +	 * all works before to free the vsock object to avoid use after free.
>>>>>>> +	 */
>>>>>>> +	virtio_vsock_flush_works(vsock);
>>>>>> Some questions after a quick glance:
>>>>>>
>>>>>> 1) It looks to me that the work could be queued from the path of
>>>>>> vsock_transport_cancel_pkt() . Is that synchronized here?
>>>>>>
>>>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>>>> queue work from the upper layer (socket).
>>>>>
>>>>> Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
>>>>> a rare issue could happen:
>>>>> we are setting the_virtio_vsock to NULL at the start of .remove() and we
>>>>> are freeing the object pointed by it at the end of .remove(), so
>>>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>>>> running, accessing the object that we are freed.
>>>> Yes, that's my point.
>>>>
>>>>
>>>>> Should I use something like RCU to prevent this issue?
>>>>>
>>>>>        virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>>>>        {
>>>>>            rcu_read_lock();
>>>>>            vsock = rcu_dereference(the_virtio_vsock);
>>>> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>>>>
>>> Okay, I'm going this way.
>>>
>>>>>            ...
>>>>>            rcu_read_unlock();
>>>>>        }
>>>>>
>>>>>        virtio_vsock_remove()
>>>>>        {
>>>>>            rcu_assign_pointer(the_virtio_vsock, NULL);
>>>>>            synchronize_rcu();
>>>>>
>>>>>            ...
>>>>>
>>>>>            free(vsock);
>>>>>        }
>>>>>
>>>>> Could there be a better approach?
>>>>>
>>>>>
>>>>>> 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
>>>>>> needed? It looks to me we've already done except that we need flush rx_work
>>>>>> in the end since send_pkt_work can requeue rx_work.
>>>>> The main reason of tx_run/rx_run/event_run is to prevent that a worker
>>>>> function is running while we are calling config->reset().
>>>>>
>>>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>>>> config->reset(), it can queue new works that can access the device while
>>>>> we are in config->reset().
>>>>>
>>>>> IMHO they are still needed.
>>>>>
>>>>> What do you think?
>>>> I mean, could we simply flush once after reset, without the tx_run/rx_run
>>>> tricks?
>>>>
>>>> reset();
>>>>
>>>> virtio_vsock_flush_works();
>>>>
>>>> virtio_vsock_free_buf();
>>> My only doubt is:
>>> is it safe to call config->reset() while a worker function could access
>>> the device?
>>>
>>> I had this doubt reading the Michael's advice[1] and looking at
>>> virtnet_remove() where there are these lines before the config->reset():
>>>
>>> 	/* Make sure no work handler is accessing the device. */
>>> 	flush_work(&vi->config_work);
>>>
>>> Thanks,
>>> Stefano
>>>
>>> [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
>>
>> Good point. Then I agree with you. But if we can use the RCU to detect the
>> detach of device from socket for these, it would be even better.
>>
> What about checking 'the_virtio_vsock' in the worker functions in a RCU
> critical section?
> In this way, I can remove the rx_run/tx_run/event_run.
>
> Do you think it's cleaner?


Yes, I think so.

Thanks


>
> Thank you very much,
> Stefano


* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-05-31  9:56               ` Jason Wang
@ 2019-06-06  8:11                 ` Stefano Garzarella
  2019-06-13  8:57                   ` Jason Wang
  0 siblings, 1 reply; 19+ messages in thread
From: Stefano Garzarella @ 2019-06-06  8:11 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S . Tsirkin, netdev, linux-kernel, virtualization, kvm,
	Stefan Hajnoczi, David S. Miller

On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> On 2019/5/31 下午4:18, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> > > On 2019/5/30 下午6:10, Stefano Garzarella wrote:
> > > > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > > > On 2019/5/29 下午6:58, Stefano Garzarella wrote:
> > > > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > > > On 2019/5/28 下午6:56, Stefano Garzarella wrote:
> > > > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > >      	vsock->event_run = false;
> > > > > > > >      	mutex_unlock(&vsock->event_lock);
> > > > > > > > +	/* Flush all pending works */
> > > > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > > > +
> > > > > > > >      	/* Flush all device writes and interrupts, device will not use any
> > > > > > > >      	 * more buffers.
> > > > > > > >      	 */
> > > > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > >      	/* Delete virtqueues and flush outstanding callbacks if any */
> > > > > > > >      	vdev->config->del_vqs(vdev);
> > > > > > > > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > > > +	 * all works before to free the vsock object to avoid use after free.
> > > > > > > > +	 */
> > > > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > > Some questions after a quick glance:
> > > > > > > 
> > > > > > > 1) It looks to me that the work could be queued from the path of
> > > > > > > vsock_transport_cancel_pkt() . Is that synchronized here?
> > > > > > > 
> > > > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > > > queue work from the upper layer (socket).
> > > > > > 
> > > > > > Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> > > > > > a rare issue could happen:
> > > > > > we are setting the_virtio_vsock to NULL at the start of .remove() and we
> > > > > > are freeing the object pointed by it at the end of .remove(), so
> > > > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > > > running, accessing the object that we are freed.
> > > > > Yes, that's my point.
> > > > > 
> > > > > 
> > > > > > Should I use something like RCU to prevent this issue?
> > > > > > 
> > > > > >        virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > > > >        {
> > > > > >            rcu_read_lock();
> > > > > >            vsock = rcu_dereference(the_virtio_vsock);
> > > > > RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
> > > > > 
> > > > Okay, I'm going this way.
> > > > 
> > > > > >            ...
> > > > > >            rcu_read_unlock();
> > > > > >        }
> > > > > > 
> > > > > >        virtio_vsock_remove()
> > > > > >        {
> > > > > >            rcu_assign_pointer(the_virtio_vsock, NULL);
> > > > > >            synchronize_rcu();
> > > > > > 
> > > > > >            ...
> > > > > > 
> > > > > >            free(vsock);
> > > > > >        }
> > > > > > 
> > > > > > Could there be a better approach?
> > > > > > 
> > > > > > 
> > > > > > > 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
> > > > > > > needed? It looks to me we've already done except that we need flush rx_work
> > > > > > > in the end since send_pkt_work can requeue rx_work.
> > > > > > The main reason of tx_run/rx_run/event_run is to prevent that a worker
> > > > > > function is running while we are calling config->reset().
> > > > > > 
> > > > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > > > config->reset(), it can queue new works that can access the device while
> > > > > > we are in config->reset().
> > > > > > 
> > > > > > IMHO they are still needed.
> > > > > > 
> > > > > > What do you think?
> > > > > I mean, could we simply flush once after reset, without the tx_run/rx_run
> > > > > tricks?
> > > > > 
> > > > > reset();
> > > > > 
> > > > > virtio_vsock_flush_works();
> > > > > 
> > > > > virtio_vsock_free_buf();
> > > > My only doubt is:
> > > > is it safe to call config->reset() while a worker function could access
> > > > the device?
> > > > 
> > > > I had this doubt reading the Michael's advice[1] and looking at
> > > > virtnet_remove() where there are these lines before the config->reset():
> > > > 
> > > > 	/* Make sure no work handler is accessing the device. */
> > > > 	flush_work(&vi->config_work);
> > > > 
> > > > Thanks,
> > > > Stefano
> > > > 
> > > > [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
> > > 
> > > Good point. Then I agree with you. But if we can use the RCU to detect the
> > > detach of device from socket for these, it would be even better.
> > > 
> > What about checking 'the_virtio_vsock' in the worker functions in a RCU
> > critical section?
> > In this way, I can remove the rx_run/tx_run/event_run.
> > 
> > Do you think it's cleaner?
> 
> 
> Yes, I think so.
> 

Hi Jason,
while trying to use RCU for the workers as well, I discovered that it cannot
be used where we might sleep (the workers take mutexes, allocate memory, etc.).
There is SRCU, but I think the rx_run/tx_run/event_run flags are cleaner.

So, if you agree, I'd send a v2 using RCU only for
virtio_transport_send_pkt() and vsock_transport_cancel_pkt(), and leave
this patch as is to ensure that no one is accessing the device while we
call config->reset().

Thanks,
Stefano
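[ Editor's note: for context, the run-flag guard this patch keeps makes the
workers re-check the flag under the corresponding queue mutex, which is why
they are free to sleep and why RCU read-side protection would not fit there.
A sketch in the driver's existing style (assumed shape, body elided): ]

```c
static void virtio_transport_tx_work(struct work_struct *work)
{
	struct virtio_vsock *vsock =
		container_of(work, struct virtio_vsock, tx_work);

	mutex_lock(&vsock->tx_lock);
	if (!vsock->tx_run)
		goto out;	/* .remove() already cleared the flag */

	/* ... virtqueue_get_buf() loop freeing sent packets; may sleep ... */

out:
	mutex_unlock(&vsock->tx_lock);
}
```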


* Re: [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove()
  2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
                   ` (3 preceding siblings ...)
  2019-05-28 10:56 ` [PATCH 4/4] vsock/virtio: free used buffers " Stefano Garzarella
@ 2019-06-10 13:09 ` Stefan Hajnoczi
  2019-06-27 10:05   ` Stefano Garzarella
  4 siblings, 1 reply; 19+ messages in thread
From: Stefan Hajnoczi @ 2019-06-10 13:09 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: netdev, kvm, Michael S . Tsirkin, linux-kernel, virtualization,
	Stefan Hajnoczi, David S. Miller


On Tue, May 28, 2019 at 12:56:19PM +0200, Stefano Garzarella wrote:
> During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> before registering the driver", Stefan pointed out some possible issues
> in the .probe() and .remove() callbacks of the virtio-vsock driver.
> 
> This series tries to solve these issues:
> - Patch 1 postpones the 'the_virtio_vsock' assignment at the end of the
>   .probe() to avoid that some sockets queue works when the initialization
>   is not finished.
> - Patches 2 and 3 stop workers before to call vdev->config->reset(vdev) to
>   be sure that no one is accessing the device, and adds another flush at the
>   end of the .remove() to avoid use after free.
> - Patch 4 free also used buffers in the virtqueues during the .remove().
> 
> Stefano Garzarella (4):
>   vsock/virtio: fix locking around 'the_virtio_vsock'
>   vsock/virtio: stop workers during the .remove()
>   vsock/virtio: fix flush of works during the .remove()
>   vsock/virtio: free used buffers during the .remove()
> 
>  net/vmw_vsock/virtio_transport.c | 105 ++++++++++++++++++++++++++-----
>  1 file changed, 90 insertions(+), 15 deletions(-)

Looking forward to v2.  I took a look at the discussion and I'll review
v2 from scratch.  Just keep in mind that the mutex is used more for
mutual exclusion of the init/exit code than to protect the_virtio_vsock,
so we'll still need protection of init/exit code even with RCU.

Stefan
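[ Editor's note: concretely, keeping the mutex for init/exit exclusion while
publishing the pointer only at the very end of .probe() could look like the
sketch below, combining patch 1 with the RCU proposal. This is an assumed
shape, not the final patch; virtqueue setup and error unwinding are elided. ]

```c
static int virtio_vsock_probe(struct virtio_device *vdev)
{
	struct virtio_vsock *vsock;
	int ret;

	/* The mutex still gives mutual exclusion between probe and remove. */
	ret = mutex_lock_interruptible(&the_virtio_vsock_mutex);
	if (ret)
		return ret;

	/* Only one virtio-vsock device per guest is supported. */
	if (rcu_dereference_protected(the_virtio_vsock,
				lockdep_is_held(&the_virtio_vsock_mutex))) {
		ret = -EBUSY;
		goto out;
	}

	vsock = kzalloc(sizeof(*vsock), GFP_KERNEL);
	if (!vsock) {
		ret = -ENOMEM;
		goto out;
	}

	/* ... find_vqs(), init work structs, fill rx/event virtqueues ... */

	vdev->priv = vsock;
	/* Publish only once initialization is complete. */
	rcu_assign_pointer(the_virtio_vsock, vsock);
out:
	mutex_unlock(&the_virtio_vsock_mutex);
	return ret;
}
```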



* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-06-06  8:11                 ` Stefano Garzarella
@ 2019-06-13  8:57                   ` Jason Wang
  2019-06-27 10:26                     ` Stefano Garzarella
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Wang @ 2019-06-13  8:57 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Michael S . Tsirkin, netdev, linux-kernel, virtualization, kvm,
	Stefan Hajnoczi, David S. Miller


On 2019/6/6 下午4:11, Stefano Garzarella wrote:
> On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
>> On 2019/5/31 下午4:18, Stefano Garzarella wrote:
>>> On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
>>>> On 2019/5/30 下午6:10, Stefano Garzarella wrote:
>>>>> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>>>>>> On 2019/5/29 下午6:58, Stefano Garzarella wrote:
>>>>>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>>>>>> On 2019/5/28 下午6:56, Stefano Garzarella wrote:
>>>>>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>>>>       	vsock->event_run = false;
>>>>>>>>>       	mutex_unlock(&vsock->event_lock);
>>>>>>>>> +	/* Flush all pending works */
>>>>>>>>> +	virtio_vsock_flush_works(vsock);
>>>>>>>>> +
>>>>>>>>>       	/* Flush all device writes and interrupts, device will not use any
>>>>>>>>>       	 * more buffers.
>>>>>>>>>       	 */
>>>>>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>>>>       	/* Delete virtqueues and flush outstanding callbacks if any */
>>>>>>>>>       	vdev->config->del_vqs(vdev);
>>>>>>>>> +	/* Other works can be queued before 'config->del_vqs()', so we flush
>>>>>>>>> +	 * all works before to free the vsock object to avoid use after free.
>>>>>>>>> +	 */
>>>>>>>>> +	virtio_vsock_flush_works(vsock);
>>>>>>>> Some questions after a quick glance:
>>>>>>>>
>>>>>>>> 1) It looks to me that the work could be queued from the path of
>>>>>>>> vsock_transport_cancel_pkt(). Is that synchronized here?
>>>>>>>>
>>>>>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>>>>>> queue work from the upper layer (socket).
>>>>>>>
>>>>>>> Setting the_virtio_vsock to NULL should synchronize, but after a careful look
>>>>>>> a rare issue could happen:
>>>>>>> we set the_virtio_vsock to NULL at the start of .remove() and we
>>>>>>> free the object it points to at the end of .remove(), so
>>>>>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>>>>>> running, accessing the object that we have freed.
>>>>>> Yes, that's my point.
>>>>>>
>>>>>>
>>>>>>> Should I use something like RCU to prevent this issue?
>>>>>>>
>>>>>>>         virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>>>>>>         {
>>>>>>>             rcu_read_lock();
>>>>>>>             vsock = rcu_dereference(the_virtio_vsock);
>>>>>> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>>>>>>
>>>>> Okay, I'm going this way.
>>>>>
>>>>>>>             ...
>>>>>>>             rcu_read_unlock();
>>>>>>>         }
>>>>>>>
>>>>>>>         virtio_vsock_remove()
>>>>>>>         {
>>>>>>>             rcu_assign_pointer(the_virtio_vsock, NULL);
>>>>>>>             synchronize_rcu();
>>>>>>>
>>>>>>>             ...
>>>>>>>
>>>>>>>             free(vsock);
>>>>>>>         }
>>>>>>>
>>>>>>> Could there be a better approach?
>>>>>>>
>>>>>>>
>>>>>>>> 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
>>>>>>>> needed? It looks to me we're already done, except that we need to flush rx_work
>>>>>>>> at the end since send_pkt_work can requeue rx_work.
>>>>>>> The main reason for tx_run/rx_run/event_run is to prevent a worker
>>>>>>> function from running while we are calling config->reset().
>>>>>>>
>>>>>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>>>>>> config->reset(), it can queue new works that can access the device while
>>>>>>> we are in config->reset().
>>>>>>>
>>>>>>> IMHO they are still needed.
>>>>>>>
>>>>>>> What do you think?
>>>>>> I mean, could we simply flush once after reset, without the tx_run/rx_run
>>>>>> tricks?
>>>>>>
>>>>>> reset();
>>>>>>
>>>>>> virtio_vsock_flush_work();
>>>>>>
>>>>>> virtio_vsock_free_buf();
>>>>> My only doubt is:
>>>>> is it safe to call config->reset() while a worker function could access
>>>>> the device?
>>>>>
>>>>> I had this doubt after reading Michael's advice[1] and looking at
>>>>> virtnet_remove() where there are these lines before the config->reset():
>>>>>
>>>>> 	/* Make sure no work handler is accessing the device. */
>>>>> 	flush_work(&vi->config_work);
>>>>>
>>>>> Thanks,
>>>>> Stefano
>>>>>
>>>>> [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
>>>> Good point. Then I agree with you. But if we can use RCU to detect the
>>>> detach of the device from the socket for these, it would be even better.
>>>>
>>> What about checking 'the_virtio_vsock' in the worker functions in an RCU
>>> critical section?
>>> In this way, I can remove the rx_run/tx_run/event_run.
>>>
>>> Do you think it's cleaner?
>>
>> Yes, I think so.
>>
> Hi Jason,
> while I was trying to use RCU for the workers as well, I discovered that it
> cannot be used if we can sleep (workers take mutexes, allocate memory, etc.).
> There is SRCU, but I think the rx_run/tx_run/event_run approach is cleaner.
>
> So, if you agree, I'd send a v2 using RCU only for
> virtio_transport_send_pkt() and vsock_transport_cancel_pkt(), and leave
> this patch as is to be sure that no one is accessing the device while we
> call config->reset().
>
> Thanks,
> Stefano


If it works, I don't object to using it, considering it was suggested by
Michael. You can go this way and let's see.

Personally I would prefer something cleaner, e.g. RCU + some kind of
reference count (kref?).

Thanks
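
Jason's "RCU + reference count" idea can be sketched, again as a hedged userspace analogue: a plain atomic counter stands in for kref, rcu_read_lock()/synchronize_rcu() are elided, and the function names mimic the driver while the bodies are purely illustrative:

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative analogue of RCU + refcount: .remove() unpublishes the
 * global pointer, but the object is only freed when the last sender
 * still holding a reference drops it. */

struct virtio_vsock {
	atomic_int refcount;
};

static _Atomic(struct virtio_vsock *) the_virtio_vsock;

static void virtio_vsock_put(struct virtio_vsock *vsock)
{
	if (atomic_fetch_sub(&vsock->refcount, 1) == 1)
		free(vsock);		/* last reference: release the object */
}

static struct virtio_vsock *virtio_vsock_get(void)
{
	/* In the kernel this load would sit inside rcu_read_lock(), so
	 * the object cannot be freed between the load and the bump. */
	struct virtio_vsock *vsock = atomic_load(&the_virtio_vsock);

	if (vsock)
		atomic_fetch_add(&vsock->refcount, 1);
	return vsock;
}

static int virtio_transport_send_pkt(void)
{
	struct virtio_vsock *vsock = virtio_vsock_get();

	if (!vsock)
		return -1;		/* device already gone */
	/* ... queue send_pkt_work on vsock here ... */
	virtio_vsock_put(vsock);
	return 0;
}
```

The design point is that unpublishing and freeing become two separate events, so .remove() no longer has to prove that every sender has finished before free().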



* Re: [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove()
  2019-06-10 13:09 ` [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefan Hajnoczi
@ 2019-06-27 10:05   ` Stefano Garzarella
  0 siblings, 0 replies; 19+ messages in thread
From: Stefano Garzarella @ 2019-06-27 10:05 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: netdev, kvm, Michael S . Tsirkin, linux-kernel, virtualization,
	Stefan Hajnoczi, David S. Miller

On Mon, Jun 10, 2019 at 02:09:45PM +0100, Stefan Hajnoczi wrote:
> On Tue, May 28, 2019 at 12:56:19PM +0200, Stefano Garzarella wrote:
> > During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> > before registering the driver", Stefan pointed out some possible issues
> > in the .probe() and .remove() callbacks of the virtio-vsock driver.
> > 
> > This series tries to solve these issues:
> > - Patch 1 postpones the 'the_virtio_vsock' assignment to the end of
> >   .probe() so that no socket can queue works before the initialization
> >   is finished.
> > - Patches 2 and 3 stop workers before calling vdev->config->reset(vdev) to
> >   be sure that no one is accessing the device, and add another flush at the
> >   end of .remove() to avoid use after free.
> > - Patch 4 also frees used buffers in the virtqueues during .remove().
> > 
> > Stefano Garzarella (4):
> >   vsock/virtio: fix locking around 'the_virtio_vsock'
> >   vsock/virtio: stop workers during the .remove()
> >   vsock/virtio: fix flush of works during the .remove()
> >   vsock/virtio: free used buffers during the .remove()
> > 
> >  net/vmw_vsock/virtio_transport.c | 105 ++++++++++++++++++++++++++-----
> >  1 file changed, 90 insertions(+), 15 deletions(-)
> 
> Looking forward to v2.  I took a look at the discussion and I'll review
> v2 from scratch.  Just keep in mind that the mutex is used more for
> mutual exclusion of the init/exit code than to protect the_virtio_vsock,
> so we'll still need protection of init/exit code even with RCU.

Thanks for the advice! I'll send the v2 ASAP.

Thanks,
Stefano


* Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
  2019-06-13  8:57                   ` Jason Wang
@ 2019-06-27 10:26                     ` Stefano Garzarella
  0 siblings, 0 replies; 19+ messages in thread
From: Stefano Garzarella @ 2019-06-27 10:26 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S . Tsirkin, netdev, linux-kernel, virtualization, kvm,
	Stefan Hajnoczi, David S. Miller

On Thu, Jun 13, 2019 at 04:57:15PM +0800, Jason Wang wrote:
> 
> On 2019/6/6 4:11 PM, Stefano Garzarella wrote:
> > On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> > > On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> > > > On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> > > > > On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > > > > > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > > > > > On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > > > > > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > > > > > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > > > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > > > >       	vsock->event_run = false;
> > > > > > > > > >       	mutex_unlock(&vsock->event_lock);
> > > > > > > > > > +	/* Flush all pending works */
> > > > > > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > > > > > +
> > > > > > > > > >       	/* Flush all device writes and interrupts, device will not use any
> > > > > > > > > >       	 * more buffers.
> > > > > > > > > >       	 */
> > > > > > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > > > >       	/* Delete virtqueues and flush outstanding callbacks if any */
> > > > > > > > > >       	vdev->config->del_vqs(vdev);
> > > > > > > > > > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > > > > > +	 * all works before to free the vsock object to avoid use after free.
> > > > > > > > > > +	 */
> > > > > > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > > > > Some questions after a quick glance:
> > > > > > > > > 
> > > > > > > > > 1) It looks to me that the work could be queued from the path of
> > > > > > > > > vsock_transport_cancel_pkt(). Is that synchronized here?
> > > > > > > > > 
> > > > > > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > > > > > queue work from the upper layer (socket).
> > > > > > > > 
> > > > > > > > Setting the_virtio_vsock to NULL should synchronize, but after a careful look
> > > > > > > > a rare issue could happen:
> > > > > > > > we set the_virtio_vsock to NULL at the start of .remove() and we
> > > > > > > > free the object it points to at the end of .remove(), so
> > > > > > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > > > > > running, accessing the object that we have freed.
> > > > > > > Yes, that's my point.
> > > > > > > 
> > > > > > > 
> > > > > > > > Should I use something like RCU to prevent this issue?
> > > > > > > > 
> > > > > > > >         virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > > > > > >         {
> > > > > > > >             rcu_read_lock();
> > > > > > > >             vsock = rcu_dereference(the_virtio_vsock);
> > > > > > > RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
> > > > > > > 
> > > > > > Okay, I'm going this way.
> > > > > > 
> > > > > > > >             ...
> > > > > > > >             rcu_read_unlock();
> > > > > > > >         }
> > > > > > > > 
> > > > > > > >         virtio_vsock_remove()
> > > > > > > >         {
> > > > > > > >             rcu_assign_pointer(the_virtio_vsock, NULL);
> > > > > > > >             synchronize_rcu();
> > > > > > > > 
> > > > > > > >             ...
> > > > > > > > 
> > > > > > > >             free(vsock);
> > > > > > > >         }
> > > > > > > > 
> > > > > > > > Could there be a better approach?
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
> > > > > > > > > needed? It looks to me we're already done, except that we need to flush rx_work
> > > > > > > > > at the end since send_pkt_work can requeue rx_work.
> > > > > > > > The main reason for tx_run/rx_run/event_run is to prevent a worker
> > > > > > > > function from running while we are calling config->reset().
> > > > > > > > 
> > > > > > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > > > > > config->reset(), it can queue new works that can access the device while
> > > > > > > > we are in config->reset().
> > > > > > > > 
> > > > > > > > IMHO they are still needed.
> > > > > > > > 
> > > > > > > > What do you think?
> > > > > > > I mean, could we simply flush once after reset, without the tx_run/rx_run
> > > > > > > tricks?
> > > > > > > 
> > > > > > > reset();
> > > > > > > 
> > > > > > > virtio_vsock_flush_work();
> > > > > > > 
> > > > > > > virtio_vsock_free_buf();
> > > > > > My only doubt is:
> > > > > > is it safe to call config->reset() while a worker function could access
> > > > > > the device?
> > > > > > 
> > > > > > I had this doubt after reading Michael's advice[1] and looking at
> > > > > > virtnet_remove() where there are these lines before the config->reset():
> > > > > > 
> > > > > > 	/* Make sure no work handler is accessing the device. */
> > > > > > 	flush_work(&vi->config_work);
> > > > > > 
> > > > > > Thanks,
> > > > > > Stefano
> > > > > > 
> > > > > > [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
> > > > > Good point. Then I agree with you. But if we can use RCU to detect the
> > > > > detach of the device from the socket for these, it would be even better.
> > > > > 
> > > > What about checking 'the_virtio_vsock' in the worker functions in an RCU
> > > > critical section?
> > > > In this way, I can remove the rx_run/tx_run/event_run.
> > > > 
> > > > Do you think it's cleaner?
> > > 
> > > Yes, I think so.
> > > 
> > Hi Jason,
> > while I was trying to use RCU for the workers as well, I discovered that it
> > cannot be used if we can sleep (workers take mutexes, allocate memory, etc.).
> > There is SRCU, but I think the rx_run/tx_run/event_run approach is cleaner.
> > 
> > So, if you agree, I'd send a v2 using RCU only for
> > virtio_transport_send_pkt() and vsock_transport_cancel_pkt(), and leave
> > this patch as is to be sure that no one is accessing the device while we
> > call config->reset().
> > 
> > Thanks,
> > Stefano
> 
> 
> If it works, I don't object to using it, considering it was suggested by Michael.
> You can go this way and let's see.

Okay, I'll try it and see if it works.

> 
> Personally I would prefer something cleaner, e.g. RCU + some kind of
> reference count (kref?).

I'll check whether kref can help provide a cleaner solution in this
case.

Thanks for your comments,
Stefano
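
The tx_run/rx_run/event_run scheme this subthread settles on can be sketched as a userspace analogue as well. One queue is shown; a pthread mutex stands in for the kernel mutex, the virtqueue work is reduced to a counter, and all names are illustrative:

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative analogue of the tx_run/rx_run/event_run trick from
 * patches 2 and 3: .remove() clears a per-queue "run" flag under the
 * same mutex the worker takes, so a worker that fires after the
 * flush sees the flag and bails out before touching the device. */

struct virtio_vsock_queue {
	pthread_mutex_t lock;
	bool run;
	int processed;
};

static void virtio_transport_rx_work(struct virtio_vsock_queue *q)
{
	pthread_mutex_lock(&q->lock);
	if (!q->run)		/* device being removed: do nothing */
		goto out;
	q->processed++;		/* ... drain the virtqueue here ... */
out:
	pthread_mutex_unlock(&q->lock);
}

static void virtio_vsock_remove_queue(struct virtio_vsock_queue *q)
{
	pthread_mutex_lock(&q->lock);
	q->run = false;		/* no worker may touch the device anymore */
	pthread_mutex_unlock(&q->lock);
	/* ... flush works, config->reset(), del_vqs(), flush again ... */
}
```

A late interrupt can still queue the work after the flush, but the worker body then runs to the flag check and returns without accessing the device, which is exactly why the flags survive into v2 alongside RCU.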


Thread overview: 19+ messages
2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
2019-05-30  4:28   ` David Miller
2019-05-30 10:27     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 2/4] vsock/virtio: stop workers during the .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 3/4] vsock/virtio: fix flush of works " Stefano Garzarella
2019-05-29  3:22   ` Jason Wang
2019-05-29 10:58     ` Stefano Garzarella
2019-05-30  9:46       ` Jason Wang
2019-05-30 10:10         ` Stefano Garzarella
2019-05-30 11:59           ` Jason Wang
2019-05-31  8:18             ` Stefano Garzarella
2019-05-31  9:56               ` Jason Wang
2019-06-06  8:11                 ` Stefano Garzarella
2019-06-13  8:57                   ` Jason Wang
2019-06-27 10:26                     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 4/4] vsock/virtio: free used buffers " Stefano Garzarella
2019-06-10 13:09 ` [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefan Hajnoczi
2019-06-27 10:05   ` Stefano Garzarella
