From: Jason Wang <jasowang@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
Date: Thu, 30 May 2019 19:59:14 +0800
Message-ID: <4c881585-8fee-0a53-865c-05d41ffb8ed1@redhat.com>
In-Reply-To: <20190530101036.wnjphmajrz6nz6zc@steredhat.homenet.telecomitalia.it>


On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>> On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
>>>>> We flush all pending works before calling vdev->config->reset(vdev),
>>>>> but other works can be queued before vdev->config->del_vqs(vdev),
>>>>> so we add another flush after it to avoid a use-after-free.
>>>>>
>>>>> Suggested-by: Michael S. Tsirkin <mst@redhat.com>
>>>>> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
>>>>> ---
>>>>>     net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
>>>>>     1 file changed, 17 insertions(+), 6 deletions(-)
>>>>>
>>>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>>>> index e694df10ab61..ad093ce96693 100644
>>>>> --- a/net/vmw_vsock/virtio_transport.c
>>>>> +++ b/net/vmw_vsock/virtio_transport.c
>>>>> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
>>>>>     	return ret;
>>>>>     }
>>>>> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
>>>>> +{
>>>>> +	flush_work(&vsock->loopback_work);
>>>>> +	flush_work(&vsock->rx_work);
>>>>> +	flush_work(&vsock->tx_work);
>>>>> +	flush_work(&vsock->event_work);
>>>>> +	flush_work(&vsock->send_pkt_work);
>>>>> +}
>>>>> +
>>>>>     static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     {
>>>>>     	struct virtio_vsock *vsock = vdev->priv;
>>>>> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     	mutex_lock(&the_virtio_vsock_mutex);
>>>>>     	the_virtio_vsock = NULL;
>>>>> -	flush_work(&vsock->loopback_work);
>>>>> -	flush_work(&vsock->rx_work);
>>>>> -	flush_work(&vsock->tx_work);
>>>>> -	flush_work(&vsock->event_work);
>>>>> -	flush_work(&vsock->send_pkt_work);
>>>>> -
>>>>>     	/* Reset all connected sockets when the device disappear */
>>>>>     	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     	vsock->event_run = false;
>>>>>     	mutex_unlock(&vsock->event_lock);
>>>>> +	/* Flush all pending works */
>>>>> +	virtio_vsock_flush_works(vsock);
>>>>> +
>>>>>     	/* Flush all device writes and interrupts, device will not use any
>>>>>     	 * more buffers.
>>>>>     	 */
>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>     	/* Delete virtqueues and flush outstanding callbacks if any */
>>>>>     	vdev->config->del_vqs(vdev);
>>>>> +	/* Other works can be queued before 'config->del_vqs()', so we flush
>>>>> +	 * all works before freeing the vsock object to avoid a use-after-free.
>>>>> +	 */
>>>>> +	virtio_vsock_flush_works(vsock);
>>>> Some questions after a quick glance:
>>>>
>>>> 1) It looks to me that the work could be queued from the path of
>>>> vsock_transport_cancel_pkt(). Is that synchronized here?
>>>>
>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>> queue work from the upper layer (socket).
>>>
>>> Setting the_virtio_vsock to NULL should synchronize, but after a careful look
>>> a rare issue could happen:
>>> we set the_virtio_vsock to NULL at the start of .remove() and we
>>> free the object it points to at the end of .remove(), so
>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>> running, accessing the object that we have freed.
>>
>> Yes, that's my point.
>>
>>
>>> Should I use something like RCU to prevent this issue?
>>>
>>>       virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>>       {
>>>           rcu_read_lock();
>>>           vsock = rcu_dereference(the_virtio_vsock);
>>
>> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>>
> Okay, I'll go this way.
>
>>>           ...
>>>           rcu_read_unlock();
>>>       }
>>>
>>>       virtio_vsock_remove()
>>>       {
>>>           rcu_assign_pointer(the_virtio_vsock, NULL);
>>>           synchronize_rcu();
>>>
>>>           ...
>>>
>>>           free(vsock);
>>>       }
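>>>
>>> To make it concrete, a minimal self-contained sketch of that pattern
>>> (just a sketch: the __rcu annotation and the -ENODEV fallback are my
>>> assumptions, not code from this series):
>>>
>>>       static struct virtio_vsock __rcu *the_virtio_vsock;
>>>
>>>       static int virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
>>>       {
>>>           struct virtio_vsock *vsock;
>>>           int ret = 0;
>>>
>>>           rcu_read_lock();
>>>           vsock = rcu_dereference(the_virtio_vsock);
>>>           if (!vsock) {
>>>               /* device already removed: reject the packet */
>>>               ret = -ENODEV;
>>>               goto out;
>>>           }
>>>           /* ... queue pkt and kick send_pkt_work ... */
>>>       out:
>>>           rcu_read_unlock();
>>>           return ret;
>>>       }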
>>>
>>> Could there be a better approach?
>>>
>>>
>>>> 2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run still
>>>> needed? It looks to me we've already done that, except that we need to flush
>>>> rx_work at the end, since send_pkt_work can requeue rx_work.
>>> The main reason for tx_run/rx_run/event_run is to prevent a worker
>>> function from running while we are calling config->reset().
>>>
>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>> config->reset(), it can queue new works that can access the device while
>>> we are in config->reset().
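>>>
>>> (To illustrate, the guard each worker takes would look roughly like
>>> this; a simplified sketch of the rx path with the body elided:)
>>>
>>>       static void virtio_transport_rx_work(struct work_struct *work)
>>>       {
>>>           struct virtio_vsock *vsock =
>>>               container_of(work, struct virtio_vsock, rx_work);
>>>
>>>           mutex_lock(&vsock->rx_lock);
>>>           if (!vsock->rx_run)
>>>               goto out; /* .remove() already stopped this worker */
>>>           /* ... process used buffers from the rx virtqueue ... */
>>>       out:
>>>           mutex_unlock(&vsock->rx_lock);
>>>       }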
>>>
>>> IMHO they are still needed.
>>>
>>> What do you think?
>>
>> I mean, could we simply do the flush once after reset, without the
>> tx_run/rx_run tricks?
>>
>> reset();
>>
>> virtio_vsock_flush_works();
>>
>> virtio_vsock_free_buf();
> My only doubt is:
> is it safe to call config->reset() while a worker function could still be
> accessing the device?
>
> I had this doubt after reading Michael's advice[1] and looking at
> virtnet_remove(), where these lines appear before the config->reset():
>
> 	/* Make sure no work handler is accessing the device. */
> 	flush_work(&vi->config_work);
>
> Thanks,
> Stefano
>
> [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org


Good point. Then I agree with you. But if we can use RCU to detect the
detachment of the device from the socket for these paths, it would be even
better.

Thanks



Thread overview: 37+ messages
2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
2019-05-30  4:28   ` David Miller
2019-05-30 10:27     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 2/4] vsock/virtio: stop workers during the .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 3/4] vsock/virtio: fix flush of works " Stefano Garzarella
2019-05-29  3:22   ` Jason Wang
2019-05-29 10:58     ` Stefano Garzarella
2019-05-30  9:46       ` Jason Wang
2019-05-30 10:10         ` Stefano Garzarella
2019-05-30 11:59           ` Jason Wang [this message]
2019-05-31  8:18             ` Stefano Garzarella
2019-05-31  9:56               ` Jason Wang
2019-06-06  8:11                 ` Stefano Garzarella
2019-06-13  8:57                   ` Jason Wang
2019-06-27 10:26                     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 4/4] vsock/virtio: free used buffers " Stefano Garzarella
2019-06-10 13:09 ` [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefan Hajnoczi
2019-06-27 10:05   ` Stefano Garzarella
