* Question about IRQs during the .remove() of virtio-vsock driver
@ 2019-05-21  9:44 Stefano Garzarella
  2019-05-21 10:05 ` Michael S. Tsirkin
  0 siblings, 1 reply; 7+ messages in thread
From: Stefano Garzarella @ 2019-05-21  9:44 UTC (permalink / raw)
  To: Michael Tsirkin, Jason Wang; +Cc: netdev, Stefan Hajnoczi

Hi Michael, Jason,
as suggested by Stefan, I'm checking whether we have some races in the
virtio-vsock driver. We found some races in the .probe() and .remove()
with the upper layer (socket) and I'll fix them.

Now my attention is on the bottom layer (virtio device) and my question is:
during the .remove() of the virtio-vsock driver (virtio_vsock_remove), could it
happen that an IRQ arrives and one of our callbacks (e.g. virtio_vsock_rx_done())
is executed, queueing new work?

I tried to follow the code in both cases (device unplugged or module removed)
and maybe it can't happen because we remove the device from the bus's knowledge,
but I'm not sure and your advice would be very helpful.

Thanks in advance,
Stefano


* Re: Question about IRQs during the .remove() of virtio-vsock driver
  2019-05-21  9:44 Question about IRQs during the .remove() of virtio-vsock driver Stefano Garzarella
@ 2019-05-21 10:05 ` Michael S. Tsirkin
  2019-05-21 13:49   ` Stefano Garzarella
  0 siblings, 1 reply; 7+ messages in thread
From: Michael S. Tsirkin @ 2019-05-21 10:05 UTC (permalink / raw)
  To: Stefano Garzarella; +Cc: Jason Wang, netdev, Stefan Hajnoczi

On Tue, May 21, 2019 at 11:44:07AM +0200, Stefano Garzarella wrote:
> Hi Michael, Jason,
> as suggested by Stefan, I'm checking whether we have some races in the
> virtio-vsock driver. We found some races in the .probe() and .remove()
> with the upper layer (socket) and I'll fix them.
> 
> Now my attention is on the bottom layer (virtio device) and my question is:
> during the .remove() of the virtio-vsock driver (virtio_vsock_remove), could it
> happen that an IRQ arrives and one of our callbacks (e.g. virtio_vsock_rx_done())
> is executed, queueing new work?
> 
> I tried to follow the code in both cases (device unplugged or module removed)
> and maybe it can't happen because we remove the device from the bus's knowledge,
> but I'm not sure and your advice would be very helpful.
> 
> Thanks in advance,
> Stefano


Great question! This should be better documented: patches welcome!

Here's my understanding:


A typical removal flow works like this:

- prevent linux from sending new kick requests to device
  and flush such outstanding requests if any
  (device can still send notifications to linux)

- call
          vi->vdev->config->reset(vi->vdev);
  this will flush all device writes and interrupts.
  device will not use any more buffers.
  previously outstanding callbacks might still be active.

- Then call
          vdev->config->del_vqs(vdev);
  to flush outstanding callbacks if any.
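
Putting the steps together, a driver's .remove() might look roughly like this
(illustrative pseudo code with made-up names, not taken from a real driver):

    static void my_virtio_remove(struct virtio_device *vdev)
    {
        struct my_priv *priv = vdev->priv;

        /* step 1: stop submitting new buffers/kicks and flush any
         * driver-side work that could still submit them */
        stop_and_flush_tx(priv);

        /* step 2: reset the device; after this the device will not
         * write buffers or raise new interrupts, but a callback that
         * already fired may still be running */
        vdev->config->reset(vdev);

        /* step 3: delete the vqs; this also flushes any callbacks
         * that were still outstanding */
        vdev->config->del_vqs(vdev);
    }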

-- 
MST


* Re: Question about IRQs during the .remove() of virtio-vsock driver
  2019-05-21 10:05 ` Michael S. Tsirkin
@ 2019-05-21 13:49   ` Stefano Garzarella
  2019-05-21 13:56     ` Michael S. Tsirkin
  0 siblings, 1 reply; 7+ messages in thread
From: Stefano Garzarella @ 2019-05-21 13:49 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Jason Wang, netdev, Stefan Hajnoczi

On Tue, May 21, 2019 at 06:05:31AM -0400, Michael S. Tsirkin wrote:
> On Tue, May 21, 2019 at 11:44:07AM +0200, Stefano Garzarella wrote:
> > Hi Michael, Jason,
> > as suggested by Stefan, I'm checking whether we have some races in the
> > virtio-vsock driver. We found some races in the .probe() and .remove()
> > with the upper layer (socket) and I'll fix them.
> > 
> > Now my attention is on the bottom layer (virtio device) and my question is:
> > during the .remove() of the virtio-vsock driver (virtio_vsock_remove), could it
> > happen that an IRQ arrives and one of our callbacks (e.g. virtio_vsock_rx_done())
> > is executed, queueing new work?
> > 
> > I tried to follow the code in both cases (device unplugged or module removed)
> > and maybe it can't happen because we remove the device from the bus's knowledge,
> > but I'm not sure and your advice would be very helpful.
> > 
> > Thanks in advance,
> > Stefano
> 
> 
> Great question! This should be better documented: patches welcome!

Once it's clear to me, I'll be happy to document this.

> 
> Here's my understanding:
> 
> 
> A typical removal flow works like this:
> 
> - prevent linux from sending new kick requests to device
>   and flush such outstanding requests if any
>   (device can still send notifications to linux)
> 
> - call
>           vi->vdev->config->reset(vi->vdev);
>   this will flush all device writes and interrupts.
>   device will not use any more buffers.
>   previously outstanding callbacks might still be active.
> 
> - Then call
>           vdev->config->del_vqs(vdev);
>   to flush outstanding callbacks if any.

Thanks for sharing this useful information.

So, IIUC, between step 1 (e.g. in virtio-vsock we flush all the work queues) and
step 2, new IRQs could arrive, and in the virtio-vsock driver new work
would be queued.

To handle this case, I'm thinking of adding a new variable
'work_enabled' to struct virtio_vsock, setting it to false at the start
of .remove(), then calling synchronize_rcu() before flushing all the work
queues, and using a helper function virtio_transport_queue_work() to queue
new work, where the check of work_enabled and the queue_work() call are in an
RCU read-side critical section.

Here is some pseudo code to better explain the idea:

virtio_vsock_remove() {
    vsock->work_enabled = false;

    /* Wait for other CPUs to finish to queue works */
    synchronize_rcu();

    flush_works();

    vdev->config->reset(vdev);

    ...

    vdev->config->del_vqs(vdev);
}

virtio_vsock_queue_work(vsock, work) {
    rcu_read_lock();

    if (!vsock->work_enabled) {
        goto out;
    }

    queue_work(virtio_vsock_workqueue, work);

out:
    rcu_read_unlock();
}


Do you think this can work?
Please tell me if there is a better way to handle this case.

Thanks,
Stefano


* Re: Question about IRQs during the .remove() of virtio-vsock driver
  2019-05-21 13:49   ` Stefano Garzarella
@ 2019-05-21 13:56     ` Michael S. Tsirkin
  2019-05-21 14:12       ` Stefano Garzarella
  2019-05-22  3:44       ` Jason Wang
  0 siblings, 2 replies; 7+ messages in thread
From: Michael S. Tsirkin @ 2019-05-21 13:56 UTC (permalink / raw)
  To: Stefano Garzarella; +Cc: Jason Wang, netdev, Stefan Hajnoczi

On Tue, May 21, 2019 at 03:49:20PM +0200, Stefano Garzarella wrote:
> On Tue, May 21, 2019 at 06:05:31AM -0400, Michael S. Tsirkin wrote:
> > On Tue, May 21, 2019 at 11:44:07AM +0200, Stefano Garzarella wrote:
> > > Hi Michael, Jason,
> > > as suggested by Stefan, I'm checking whether we have some races in the
> > > virtio-vsock driver. We found some races in the .probe() and .remove()
> > > with the upper layer (socket) and I'll fix them.
> > > 
> > > Now my attention is on the bottom layer (virtio device) and my question is:
> > > during the .remove() of the virtio-vsock driver (virtio_vsock_remove), could it
> > > happen that an IRQ arrives and one of our callbacks (e.g. virtio_vsock_rx_done())
> > > is executed, queueing new work?
> > > 
> > > I tried to follow the code in both cases (device unplugged or module removed)
> > > and maybe it can't happen because we remove the device from the bus's knowledge,
> > > but I'm not sure and your advice would be very helpful.
> > > 
> > > Thanks in advance,
> > > Stefano
> > 
> > 
> > Great question! This should be better documented: patches welcome!
> 
> Once it's clear to me, I'll be happy to document this.
> 
> > 
> > Here's my understanding:
> > 
> > 
> > A typical removal flow works like this:
> > 
> > - prevent linux from sending new kick requests to device
> >   and flush such outstanding requests if any
> >   (device can still send notifications to linux)
> > 
> > - call
> >           vi->vdev->config->reset(vi->vdev);
> >   this will flush all device writes and interrupts.
> >   device will not use any more buffers.
> >   previously outstanding callbacks might still be active.
> > 
> > - Then call
> >           vdev->config->del_vqs(vdev);
> >   to flush outstanding callbacks if any.
> 
> Thanks for sharing this useful information.
> 
> So, IIUC, between step 1 (e.g. in virtio-vsock we flush all the work queues) and
> step 2, new IRQs could arrive, and in the virtio-vsock driver new work
> would be queued.
> 
> To handle this case, I'm thinking of adding a new variable
> 'work_enabled' to struct virtio_vsock, setting it to false at the start
> of .remove(), then calling synchronize_rcu() before flushing all the work
> queues, and using a helper function virtio_transport_queue_work() to queue
> new work, where the check of work_enabled and the queue_work() call are in an
> RCU read-side critical section.
> 
> Here is some pseudo code to better explain the idea:
> 
> virtio_vsock_remove() {
>     vsock->work_enabled = false;
> 
>     /* Wait for other CPUs to finish to queue works */
>     synchronize_rcu();
> 
>     flush_works();
> 
>     vdev->config->reset(vdev);
> 
>     ...
> 
>     vdev->config->del_vqs(vdev);
> }
> 
> virtio_vsock_queue_work(vsock, work) {
>     rcu_read_lock();
> 
>     if (!vsock->work_enabled) {
>         goto out;
>     }
> 
>     queue_work(virtio_vsock_workqueue, work);
> 
> out:
>     rcu_read_unlock();
> }
> 
> 
> Do you think this can work?
> Please tell me if there is a better way to handle this case.
> 
> Thanks,
> Stefano


Instead of RCU tricks I would just have rx_run and tx_run flags and check them
within the queued work - presumably under the tx or rx lock.

Then queueing an extra work item becomes harmless,
and you flush it after del_vqs, which flushes everything for you.
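
A sketch of that idea on the worker side (field names are illustrative, not
actual virtio-vsock code):

    static void virtio_vsock_rx_work(struct work_struct *work)
    {
        struct virtio_vsock *vsock =
            container_of(work, struct virtio_vsock, rx_work);

        mutex_lock(&vsock->rx_lock);
        if (!vsock->rx_run)
            goto out; /* .remove() has already started */

        /* ... process the rx virtqueue ... */
    out:
        mutex_unlock(&vsock->rx_lock);
    }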




* Re: Question about IRQs during the .remove() of virtio-vsock driver
  2019-05-21 13:56     ` Michael S. Tsirkin
@ 2019-05-21 14:12       ` Stefano Garzarella
  2019-05-22  3:44       ` Jason Wang
  1 sibling, 0 replies; 7+ messages in thread
From: Stefano Garzarella @ 2019-05-21 14:12 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Jason Wang, netdev, Stefan Hajnoczi

On Tue, May 21, 2019 at 09:56:42AM -0400, Michael S. Tsirkin wrote:
> On Tue, May 21, 2019 at 03:49:20PM +0200, Stefano Garzarella wrote:
> > On Tue, May 21, 2019 at 06:05:31AM -0400, Michael S. Tsirkin wrote:
> > > On Tue, May 21, 2019 at 11:44:07AM +0200, Stefano Garzarella wrote:
> > > > Hi Michael, Jason,
> > > > as suggested by Stefan, I'm checking whether we have some races in the
> > > > virtio-vsock driver. We found some races in the .probe() and .remove()
> > > > with the upper layer (socket) and I'll fix them.
> > > > 
> > > > Now my attention is on the bottom layer (virtio device) and my question is:
> > > > during the .remove() of the virtio-vsock driver (virtio_vsock_remove), could it
> > > > happen that an IRQ arrives and one of our callbacks (e.g. virtio_vsock_rx_done())
> > > > is executed, queueing new work?
> > > > 
> > > > I tried to follow the code in both cases (device unplugged or module removed)
> > > > and maybe it can't happen because we remove the device from the bus's knowledge,
> > > > but I'm not sure and your advice would be very helpful.
> > > > 
> > > > Thanks in advance,
> > > > Stefano
> > > 
> > > 
> > > Great question! This should be better documented: patches welcome!
> > 
> > Once it's clear to me, I'll be happy to document this.
> > 
> > > 
> > > Here's my understanding:
> > > 
> > > 
> > > A typical removal flow works like this:
> > > 
> > > - prevent linux from sending new kick requests to device
> > >   and flush such outstanding requests if any
> > >   (device can still send notifications to linux)
> > > 
> > > - call
> > >           vi->vdev->config->reset(vi->vdev);
> > >   this will flush all device writes and interrupts.
> > >   device will not use any more buffers.
> > >   previously outstanding callbacks might still be active.
> > > 
> > > - Then call
> > >           vdev->config->del_vqs(vdev);
> > >   to flush outstanding callbacks if any.
> > 
> > Thanks for sharing this useful information.
> > 
> > So, IIUC, between step 1 (e.g. in virtio-vsock we flush all the work queues) and
> > step 2, new IRQs could arrive, and in the virtio-vsock driver new work
> > would be queued.
> > 
> > To handle this case, I'm thinking of adding a new variable
> > 'work_enabled' to struct virtio_vsock, setting it to false at the start
> > of .remove(), then calling synchronize_rcu() before flushing all the work
> > queues, and using a helper function virtio_transport_queue_work() to queue
> > new work, where the check of work_enabled and the queue_work() call are in an
> > RCU read-side critical section.
> > 
> > Here is some pseudo code to better explain the idea:
> > 
> > virtio_vsock_remove() {
> >     vsock->work_enabled = false;
> > 
> >     /* Wait for other CPUs to finish to queue works */
> >     synchronize_rcu();
> > 
> >     flush_works();
> > 
> >     vdev->config->reset(vdev);
> > 
> >     ...
> > 
> >     vdev->config->del_vqs(vdev);
> > }
> > 
> > virtio_vsock_queue_work(vsock, work) {
> >     rcu_read_lock();
> > 
> >     if (!vsock->work_enabled) {
> >         goto out;
> >     }
> > 
> >     queue_work(virtio_vsock_workqueue, work);
> > 
> > out:
> >     rcu_read_unlock();
> > }
> > 
> > 
> > Do you think this can work?
> > Please tell me if there is a better way to handle this case.
> > 
> > Thanks,
> > Stefano
> 
> 
> Instead of RCU tricks I would just have rx_run and tx_run flags and check them
> within the queued work - presumably under the tx or rx lock.
> 
> Then queueing an extra work item becomes harmless,
> and you flush it after del_vqs, which flushes everything for you.
> 
> 

Sure, the patch should be even smaller!

Thank you very much for the advice,
Stefano


* Re: Question about IRQs during the .remove() of virtio-vsock driver
  2019-05-21 13:56     ` Michael S. Tsirkin
  2019-05-21 14:12       ` Stefano Garzarella
@ 2019-05-22  3:44       ` Jason Wang
  2019-05-23 13:56         ` Stefano Garzarella
  1 sibling, 1 reply; 7+ messages in thread
From: Jason Wang @ 2019-05-22  3:44 UTC (permalink / raw)
  To: Michael S. Tsirkin, Stefano Garzarella; +Cc: netdev, Stefan Hajnoczi


On 2019/5/21 9:56 PM, Michael S. Tsirkin wrote:
> On Tue, May 21, 2019 at 03:49:20PM +0200, Stefano Garzarella wrote:
>> On Tue, May 21, 2019 at 06:05:31AM -0400, Michael S. Tsirkin wrote:
>>> On Tue, May 21, 2019 at 11:44:07AM +0200, Stefano Garzarella wrote:
>>>> Hi Michael, Jason,
>>>> as suggested by Stefan, I'm checking whether we have some races in the
>>>> virtio-vsock driver. We found some races in the .probe() and .remove()
>>>> with the upper layer (socket) and I'll fix them.
>>>>
>>>> Now my attention is on the bottom layer (virtio device) and my question is:
>>>> during the .remove() of the virtio-vsock driver (virtio_vsock_remove), could it
>>>> happen that an IRQ arrives and one of our callbacks (e.g. virtio_vsock_rx_done())
>>>> is executed, queueing new work?
>>>>
>>>> I tried to follow the code in both cases (device unplugged or module removed)
>>>> and maybe it can't happen because we remove the device from the bus's knowledge,
>>>> but I'm not sure and your advice would be very helpful.
>>>>
>>>> Thanks in advance,
>>>> Stefano
>>>
>>> Great question! This should be better documented: patches welcome!
>> Once it's clear to me, I'll be happy to document this.
>>
>>> Here's my understanding:
>>>
>>>
>>> A typical removal flow works like this:
>>>
>>> - prevent linux from sending new kick requests to device
>>>    and flush such outstanding requests if any
>>>    (device can still send notifications to linux)
>>>
>>> - call
>>>            vi->vdev->config->reset(vi->vdev);
>>>    this will flush all device writes and interrupts.
>>>    device will not use any more buffers.
>>>    previously outstanding callbacks might still be active.
>>>
>>> - Then call
>>>            vdev->config->del_vqs(vdev);
>>>    to flush outstanding callbacks if any.
>> Thanks for sharing this useful information.
>>
>> So, IIUC, between step 1 (e.g. in virtio-vsock we flush all the work queues) and
>> step 2, new IRQs could arrive, and in the virtio-vsock driver new work
>> would be queued.
>>
>> To handle this case, I'm thinking of adding a new variable
>> 'work_enabled' to struct virtio_vsock, setting it to false at the start
>> of .remove(), then calling synchronize_rcu() before flushing all the work
>> queues, and using a helper function virtio_transport_queue_work() to queue
>> new work, where the check of work_enabled and the queue_work() call are in an
>> RCU read-side critical section.
>>
>> Here is some pseudo code to better explain the idea:
>>
>> virtio_vsock_remove() {
>>      vsock->work_enabled = false;
>>
>>      /* Wait for other CPUs to finish to queue works */
>>      synchronize_rcu();
>>
>>      flush_works();
>>
>>      vdev->config->reset(vdev);
>>
>>      ...
>>
>>      vdev->config->del_vqs(vdev);
>> }
>>
>> virtio_vsock_queue_work(vsock, work) {
>>      rcu_read_lock();
>>
>>      if (!vsock->work_enabled) {
>>          goto out;
>>      }
>>
>>      queue_work(virtio_vsock_workqueue, work);
>>
>> out:
>>      rcu_read_unlock();
>> }
>>
>>
>> Do you think this can work?
>> Please tell me if there is a better way to handle this case.
>>
>> Thanks,
>> Stefano
>
> Instead of RCU tricks I would just have rx_run and tx_run flags and check them
> within the queued work - presumably under the tx or rx lock.
> 
> Then queueing an extra work item becomes harmless,
> and you flush it after del_vqs, which flushes everything for you.
>
>

It looks to me that we need to guarantee that no work is queued or scheduled
before del_vqs. Otherwise it may lead to a use-after-free? (E.g. net disables
NAPI before del_vqs.)

Thanks



* Re: Question about IRQs during the .remove() of virtio-vsock driver
  2019-05-22  3:44       ` Jason Wang
@ 2019-05-23 13:56         ` Stefano Garzarella
  0 siblings, 0 replies; 7+ messages in thread
From: Stefano Garzarella @ 2019-05-23 13:56 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, netdev, Stefan Hajnoczi

On Wed, May 22, 2019 at 11:44:26AM +0800, Jason Wang wrote:
> 
> On 2019/5/21 9:56 PM, Michael S. Tsirkin wrote:
> > On Tue, May 21, 2019 at 03:49:20PM +0200, Stefano Garzarella wrote:
> > > On Tue, May 21, 2019 at 06:05:31AM -0400, Michael S. Tsirkin wrote:
> > > > On Tue, May 21, 2019 at 11:44:07AM +0200, Stefano Garzarella wrote:
> > > > > Hi Michael, Jason,
> > > > > as suggested by Stefan, I'm checking whether we have some races in the
> > > > > virtio-vsock driver. We found some races in the .probe() and .remove()
> > > > > with the upper layer (socket) and I'll fix them.
> > > > > 
> > > > > Now my attention is on the bottom layer (virtio device) and my question is:
> > > > > during the .remove() of the virtio-vsock driver (virtio_vsock_remove), could it
> > > > > happen that an IRQ arrives and one of our callbacks (e.g. virtio_vsock_rx_done())
> > > > > is executed, queueing new work?
> > > > > 
> > > > > I tried to follow the code in both cases (device unplugged or module removed)
> > > > > and maybe it can't happen because we remove the device from the bus's knowledge,
> > > > > but I'm not sure and your advice would be very helpful.
> > > > > 
> > > > > Thanks in advance,
> > > > > Stefano
> > > > 
> > > > Great question! This should be better documented: patches welcome!
> > > Once it's clear to me, I'll be happy to document this.
> > > 
> > > > Here's my understanding:
> > > > 
> > > > 
> > > > A typical removal flow works like this:
> > > > 
> > > > - prevent linux from sending new kick requests to device
> > > >    and flush such outstanding requests if any
> > > >    (device can still send notifications to linux)
> > > > 
> > > > - call
> > > >            vi->vdev->config->reset(vi->vdev);
> > > >    this will flush all device writes and interrupts.
> > > >    device will not use any more buffers.
> > > >    previously outstanding callbacks might still be active.
> > > > 
> > > > - Then call
> > > >            vdev->config->del_vqs(vdev);
> > > >    to flush outstanding callbacks if any.
> > > Thanks for sharing this useful information.
> > > 
> > > So, IIUC, between step 1 (e.g. in virtio-vsock we flush all the work queues) and
> > > step 2, new IRQs could arrive, and in the virtio-vsock driver new work
> > > would be queued.
> > > 
> > > To handle this case, I'm thinking of adding a new variable
> > > 'work_enabled' to struct virtio_vsock, setting it to false at the start
> > > of .remove(), then calling synchronize_rcu() before flushing all the work
> > > queues, and using a helper function virtio_transport_queue_work() to queue
> > > new work, where the check of work_enabled and the queue_work() call are in an
> > > RCU read-side critical section.
> > > 
> > > Here is some pseudo code to better explain the idea:
> > > 
> > > virtio_vsock_remove() {
> > >      vsock->work_enabled = false;
> > > 
> > >      /* Wait for other CPUs to finish to queue works */
> > >      synchronize_rcu();
> > > 
> > >      flush_works();
> > > 
> > >      vdev->config->reset(vdev);
> > > 
> > >      ...
> > > 
> > >      vdev->config->del_vqs(vdev);
> > > }
> > > 
> > > virtio_vsock_queue_work(vsock, work) {
> > >      rcu_read_lock();
> > > 
> > >      if (!vsock->work_enabled) {
> > >          goto out;
> > >      }
> > > 
> > >      queue_work(virtio_vsock_workqueue, work);
> > > 
> > > out:
> > >      rcu_read_unlock();
> > > }
> > > 
> > > 
> > > Do you think this can work?
> > > Please tell me if there is a better way to handle this case.
> > > 
> > > Thanks,
> > > Stefano
> > 
> > Instead of RCU tricks I would just have rx_run and tx_run flags and check them
> > within the queued work - presumably under the tx or rx lock.
> > 
> > Then queueing an extra work item becomes harmless,
> > and you flush it after del_vqs, which flushes everything for you.
> > 
> > 
> 
> It looks to me that we need to guarantee that no work is queued or scheduled
> before del_vqs. Otherwise it may lead to a use-after-free? (E.g. net disables
> NAPI before del_vqs.)
> 

I'm disabling the works before del_vqs (setting tx_run and rx_run to false under
the tx and rx locks); this means that work can still be queued, but the worker
function will exit just after the tx or rx lock is acquired, where it checks
tx_run or rx_run, so no worker should use the virtqueues.

Then I'll call del_vqs and flush all the works before freeing the
'vsock' object (struct virtio_vsock), to be sure that no workers are running.

I hope this matches your advice.
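
Concretely, the .remove() side would be something like this (pseudo code in the
same style as before, with assumed field names):

    virtio_vsock_remove() {
        /* stop the workers: queued work may still run, but it will
         * bail out as soon as it sees the flag cleared */
        mutex_lock(&vsock->tx_lock);
        vsock->tx_run = false;
        mutex_unlock(&vsock->tx_lock);

        mutex_lock(&vsock->rx_lock);
        vsock->rx_run = false;
        mutex_unlock(&vsock->rx_lock);

        vdev->config->reset(vdev);
        vdev->config->del_vqs(vdev);

        /* flush works queued after the last rx_run/tx_run check,
         * before freeing 'vsock' */
        flush_works();

        kfree(vsock);
    }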

Thanks,
Stefano

