linux-kernel.vger.kernel.org archive mirror
* [PATCH] vdpa/mlx5: Clear vq ready indication upon device reset
@ 2021-06-02  8:59 Eli Cohen
  2021-06-03  7:00 ` Jason Wang
  0 siblings, 1 reply; 5+ messages in thread
From: Eli Cohen @ 2021-06-02  8:59 UTC (permalink / raw)
  To: mst, jasowang, virtualization, linux-kernel; +Cc: elic

After device reset, the virtqueues are not ready so clear the ready
field.

Failing to do so can result in virtio_vdpa failing to load if the device
was previously used by vhost_vdpa and the old values are ready.
virtio_vdpa expects to find VQs in "not ready" state.

Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 02a05492204c..e8bc0842b44c 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -862,6 +862,7 @@ static void destroy_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtq
 		return;
 	}
 	umems_destroy(ndev, mvq);
+	mvq->ready = false;
 }
 
 static u32 get_rqpn(struct mlx5_vdpa_virtqueue *mvq, bool fw)
-- 
2.31.1



* Re: [PATCH] vdpa/mlx5: Clear vq ready indication upon device reset
  2021-06-02  8:59 [PATCH] vdpa/mlx5: Clear vq ready indication upon device reset Eli Cohen
@ 2021-06-03  7:00 ` Jason Wang
  2021-06-03  7:06   ` Jason Wang
  0 siblings, 1 reply; 5+ messages in thread
From: Jason Wang @ 2021-06-03  7:00 UTC (permalink / raw)
  To: Eli Cohen, mst, virtualization, linux-kernel


On 2021/6/2 4:59 PM, Eli Cohen wrote:
> After device reset, the virtqueues are not ready so clear the ready
> field.
>
> Failing to do so can result in virtio_vdpa failing to load if the device
> was previously used by vhost_vdpa and the old values are ready.
> virtio_vdpa expects to find VQs in "not ready" state.
>
> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> Signed-off-by: Eli Cohen <elic@nvidia.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 1 +
>   1 file changed, 1 insertion(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 02a05492204c..e8bc0842b44c 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -862,6 +862,7 @@ static void destroy_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtq
>   		return;
>   	}
>   	umems_destroy(ndev, mvq);
> +	mvq->ready = false;
>   }
>   
>   static u32 get_rqpn(struct mlx5_vdpa_virtqueue *mvq, bool fw)



* Re: [PATCH] vdpa/mlx5: Clear vq ready indication upon device reset
  2021-06-03  7:00 ` Jason Wang
@ 2021-06-03  7:06   ` Jason Wang
  2021-06-03  7:30     ` Eli Cohen
  0 siblings, 1 reply; 5+ messages in thread
From: Jason Wang @ 2021-06-03  7:06 UTC (permalink / raw)
  To: Eli Cohen, mst, virtualization, linux-kernel


On 2021/6/3 3:00 PM, Jason Wang wrote:
>
> On 2021/6/2 4:59 PM, Eli Cohen wrote:
>> After device reset, the virtqueues are not ready so clear the ready
>> field.
>>
>> Failing to do so can result in virtio_vdpa failing to load if the device
>> was previously used by vhost_vdpa and the old values are ready.
>> virtio_vdpa expects to find VQs in "not ready" state.
>>
>> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 
>> devices")
>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>
>
> Acked-by: Jason Wang <jasowang@redhat.com>


On second thought:

destroy_virtqueue() could be called from many places.

One of them is mlx5_vdpa_change_map(); if that is the case, this looks
wrong.

It looks to me that it's simpler to do this in clear_virtqueues(), which
can only be called during reset.

Thanks


>
>
>> ---
>>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c 
>> b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>> index 02a05492204c..e8bc0842b44c 100644
>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>> @@ -862,6 +862,7 @@ static void destroy_virtqueue(struct 
>> mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtq
>>           return;
>>       }
>>       umems_destroy(ndev, mvq);
>> +    mvq->ready = false;
>>   }
>>     static u32 get_rqpn(struct mlx5_vdpa_virtqueue *mvq, bool fw)



* Re: [PATCH] vdpa/mlx5: Clear vq ready indication upon device reset
  2021-06-03  7:06   ` Jason Wang
@ 2021-06-03  7:30     ` Eli Cohen
  2021-06-03  7:37       ` Jason Wang
  0 siblings, 1 reply; 5+ messages in thread
From: Eli Cohen @ 2021-06-03  7:30 UTC (permalink / raw)
  To: Jason Wang; +Cc: mst, virtualization, linux-kernel

On Thu, Jun 03, 2021 at 03:06:31PM +0800, Jason Wang wrote:
> 
> On 2021/6/3 3:00 PM, Jason Wang wrote:
> > 
> > On 2021/6/2 4:59 PM, Eli Cohen wrote:
> > > After device reset, the virtqueues are not ready so clear the ready
> > > field.
> > > 
> > > Failing to do so can result in virtio_vdpa failing to load if the device
> > > was previously used by vhost_vdpa and the old values are ready.
> > > virtio_vdpa expects to find VQs in "not ready" state.
> > > 
> > > Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5
> > > devices")
> > > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > 
> > 
> > Acked-by: Jason Wang <jasowang@redhat.com>
> 
> 
> On second thought:
> 
> destroy_virtqueue() could be called from many places.
> 
> One of them is mlx5_vdpa_change_map(); if that is the case, this looks
> wrong.

Right, although most likely VQs become ready only after all map changes
occur, because I did not encounter any issue while testing.
> 
> It looks to me that it's simpler to do this in clear_virtqueues(), which
> can only be called during reset.

There is no clear_virtqueues() function. You probably mean to insert a
call in mlx5_vdpa_set_status() when it performs a reset. This function
would go over all virtqueues and clear their ready flag.

Alternatively, we can add a boolean argument to teardown_driver() that
signifies whether we are in the reset flow; in that case we clear ready.

> 
> Thanks
> 
> 
> > 
> > 
> > > ---
> > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 1 +
> > >   1 file changed, 1 insertion(+)
> > > 
> > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > index 02a05492204c..e8bc0842b44c 100644
> > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > @@ -862,6 +862,7 @@ static void destroy_virtqueue(struct
> > > mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtq
> > >           return;
> > >       }
> > >       umems_destroy(ndev, mvq);
> > > +    mvq->ready = false;
> > >   }
> > >     static u32 get_rqpn(struct mlx5_vdpa_virtqueue *mvq, bool fw)
> 


* Re: [PATCH] vdpa/mlx5: Clear vq ready indication upon device reset
  2021-06-03  7:30     ` Eli Cohen
@ 2021-06-03  7:37       ` Jason Wang
  0 siblings, 0 replies; 5+ messages in thread
From: Jason Wang @ 2021-06-03  7:37 UTC (permalink / raw)
  To: Eli Cohen; +Cc: mst, virtualization, linux-kernel


On 2021/6/3 3:30 PM, Eli Cohen wrote:
> On Thu, Jun 03, 2021 at 03:06:31PM +0800, Jason Wang wrote:
>> On 2021/6/3 3:00 PM, Jason Wang wrote:
>>> On 2021/6/2 4:59 PM, Eli Cohen wrote:
>>>> After device reset, the virtqueues are not ready so clear the ready
>>>> field.
>>>>
>>>> Failing to do so can result in virtio_vdpa failing to load if the device
>>>> was previously used by vhost_vdpa and the old values are ready.
>>>> virtio_vdpa expects to find VQs in "not ready" state.
>>>>
>>>> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5
>>>> devices")
>>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>>
>>> Acked-by: Jason Wang <jasowang@redhat.com>
>>
>> On second thought:
>>
>> destroy_virtqueue() could be called from many places.
>>
>> One of them is mlx5_vdpa_change_map(); if that is the case, this looks
>> wrong.
> Right, although most likely VQs become ready only after all map changes
> occur, because I did not encounter any issue while testing.


Yes, but it's not guaranteed that the map won't be changed. Userspace
can update the mapping when new memory is plugged into the guest, for
example.


>> It looks to me that it's simpler to do this in clear_virtqueues(), which
>> can only be called during reset.
> There is no clear_virtqueues() function. You probably mean to insert a
> call in mlx5_vdpa_set_status() when it performs a reset. This function
> would go over all virtqueues and clear their ready flag.


Right.


>
> Alternatively, we can add a boolean argument to teardown_driver() that
> signifies whether we are in the reset flow; in that case we clear ready.


Yes, but doing it in set_status() seems easier.

Thanks


>
>> Thanks
>>
>>
>>>
>>>> ---
>>>>    drivers/vdpa/mlx5/net/mlx5_vnet.c | 1 +
>>>>    1 file changed, 1 insertion(+)
>>>>
>>>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>> b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>> index 02a05492204c..e8bc0842b44c 100644
>>>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>> @@ -862,6 +862,7 @@ static void destroy_virtqueue(struct
>>>> mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtq
>>>>            return;
>>>>        }
>>>>        umems_destroy(ndev, mvq);
>>>> +    mvq->ready = false;
>>>>    }
>>>>      static u32 get_rqpn(struct mlx5_vdpa_virtqueue *mvq, bool fw)


