* Re: RFC: control virtqueue size by the vdpa tool
       [not found] <DM8PR12MB5400FEB0322A9FD6B3271D45AB799@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-08-30  8:21 ` Michael S. Tsirkin
  2022-08-30 19:58 ` Michael S. Tsirkin
  1 sibling, 0 replies; 6+ messages in thread
From: Michael S. Tsirkin @ 2022-08-30  8:21 UTC (permalink / raw)
  To: Eli Cohen; +Cc: eperezma, virtualization

On Tue, Aug 30, 2022 at 06:22:31AM +0000, Eli Cohen wrote:
>  
> 
> Hi,
> 
>  
> 
> I have been experimenting with different queue sizes with mlx5_vdpa and noticed
> that queue size can affect performance.

Absolutely. Can you share the results btw? Would be very interesting.

> I would like to propose an extension to the vdpa tool that allows specifying
> the queue size. Valid values will conform to the spec maximum of 32768.
> 
>  
> 
> “vdpa mgmtdev show” will have another line specifying the valid range for a
> management device, which may be narrower than the spec allows. This range
> applies to data queues only (not to the control VQ).
> 
> Another line will display the default queue size.
> 
>  
> 
> Example:
> 
> $ vdpa mgmtdev show
> 
> auxiliary/mlx5_core.sf.6:
> 
>   supported_classes net
> 
>   max_supported_vqs 65
> 
>   dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN
> MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
> 
>   data queue range 256-4096
> 
>   default queue size 256
> 
>  
> 
> When you create the vdpa device you can specify the requested value:
> 
> $ vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.6 max_vqp 1 mtu 9000
> queue_size 1024
> 
>  

Makes sense to me, however, note that
1. The value controlled from the host is actually the max queue size,
   not the queue size; the queue size can be controlled from the guest
   with the recent reset extension.
2. Different sizes for rx and tx probably make sense.
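The distinction in point 1 (the host caps the ring size, while the guest picks the actual size) can be sketched as follows. This is an illustrative model, not vdpa or driver code; the function name and the round-down fallback are assumptions, while the power-of-2 rule for split virtqueues and the 32768 ceiling come from the virtio spec.

```python
SPEC_MAX_QUEUE_SIZE = 32768  # virtio spec ceiling for queue size


def guest_queue_size(host_max: int, requested: int) -> int:
    """Return the ring size a guest driver would end up with.

    The host only caps the size; the guest picks any power of 2 up to
    that cap (split virtqueue rule), falling back to the largest power
    of 2 that fits when its request is too big or not a power of 2.
    """
    if not (0 < host_max <= SPEC_MAX_QUEUE_SIZE):
        raise ValueError(f"host max {host_max} outside spec range")
    size = min(requested, host_max)
    # round down to a power of 2, as split virtqueues require
    while size & (size - 1):
        size &= size - 1
    return size
```

On guest kernels with virtqueue reset support, the guest-side choice corresponds to resizing the rings with something like `ethtool -G eth0 rx 1024 tx 1024`.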

-- 
MST

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RFC: control virtqueue size by the vdpa tool
       [not found] <DM8PR12MB5400FEB0322A9FD6B3271D45AB799@DM8PR12MB5400.namprd12.prod.outlook.com>
  2022-08-30  8:21 ` RFC: control virtqueue size by the vdpa tool Michael S. Tsirkin
@ 2022-08-30 19:58 ` Michael S. Tsirkin
  2022-08-30 21:04   ` Si-Wei Liu
  1 sibling, 1 reply; 6+ messages in thread
From: Michael S. Tsirkin @ 2022-08-30 19:58 UTC (permalink / raw)
  To: Eli Cohen; +Cc: eperezma, virtualization

On Tue, Aug 30, 2022 at 06:22:31AM +0000, Eli Cohen wrote:
>  
> 
> Hi,
> 
>  
> 
> I have been experimenting with different queue sizes with mlx5_vdpa and noticed
> that queue size can affect performance.
> 
> I would like to propose an extension to vdpa tool to allow to specify the queue
> size. Valid values will conform to the max of 32768 specified by the spec.
> 
>  
> 
> “vdpa mgmtdev show” will have another line specifying the valid range for a
> management device which could be narrower than the spec allows. This range will
> be valid for data queues only (not for control VQ).
> 
> Another line will display the default queue size
> 
>  
> 
> Example:
> 
> $ vdpa mgmtdev show
> 
> auxiliary/mlx5_core.sf.6:
> 
>   supported_classes net
> 
>   max_supported_vqs 65
> 
>   dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN
> MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
> 
>   data queue range 256-4096
> 
>   default queue size 256
> 
>  
> 
> When you create the vdpa device you can specify the requested value:
> 
> $ vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.6 max_vqp 1 mtu 9000
> queue_size 1024
> 
>  


A follow-up question: isn't it enough to control the size
from QEMU? Do we need the ability to control it at the kernel level?

-- 
MST


* Re: RFC: control virtqueue size by the vdpa tool
  2022-08-30 19:58 ` Michael S. Tsirkin
@ 2022-08-30 21:04   ` Si-Wei Liu
  2022-08-30 22:01     ` Michael S. Tsirkin
  0 siblings, 1 reply; 6+ messages in thread
From: Si-Wei Liu @ 2022-08-30 21:04 UTC (permalink / raw)
  To: Michael S. Tsirkin, Eli Cohen; +Cc: eperezma, virtualization



On 8/30/2022 12:58 PM, Michael S. Tsirkin wrote:
> On Tue, Aug 30, 2022 at 06:22:31AM +0000, Eli Cohen wrote:
>>   
>>
>> Hi,
>>
>>   
>>
>> I have been experimenting with different queue sizes with mlx5_vdpa and noticed
>> that queue size can affect performance.
>>
>> I would like to propose an extension to vdpa tool to allow to specify the queue
>> size. Valid values will conform to the max of 32768 specified by the spec.
>>
>>   
>>
>> “vdpa mgmtdev show” will have another line specifying the valid range for a
>> management device which could be narrower than the spec allows. This range will
>> be valid for data queues only (not for control VQ).
>>
>> Another line will display the default queue size
>>
>>   
>>
>> Example:
>>
>> $ vdpa mgmtdev show
>>
>> auxiliary/mlx5_core.sf.6:
>>
>>    supported_classes net
>>
>>    max_supported_vqs 65
>>
>>    dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN
>> MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
>>
>>    data queue range 256-4096
>>
>>    default queue size 256
>>
>>   
>>
>> When you create the vdpa device you can specify the requested value:
>>
>> $ vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.6 max_vqp 1 mtu 9000
>> queue_size 1024
>>
>>   
>
> A follow up question: isn't it enough to control the size
> from qemu? do we need ability to control it at the kernel level?
>
Right, today we can optionally control the queue size from QEMU via 
rx_queue_size or tx_queue_size, but it is capped at 1024 (by the way, why 
does it have such a limit? It seems relatively low in my opinion). I think 
what was missing for QEMU is a way to query the maximum queue size that 
the hardware can support from the backend.

-Siwei
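The 1024 cap mentioned above comes from QEMU's VIRTQUEUE_MAX_SIZE together with a power-of-2 check at realize time. A rough model of that validation follows; the constant values mirror QEMU's VIRTIO_NET_RX_QUEUE_MIN_SIZE (256) and VIRTQUEUE_MAX_SIZE (1024), but the helper itself is illustrative, not QEMU code.

```python
VIRTIO_NET_RX_QUEUE_MIN_SIZE = 256  # mirrors QEMU's lower bound
VIRTQUEUE_MAX_SIZE = 1024           # mirrors QEMU's upper bound


def rx_queue_size_valid(size: int) -> bool:
    """True if the size would pass the virtio-net realize-time check:
    a power of 2 within [256, 1024]."""
    is_pow2 = size > 0 and (size & (size - 1)) == 0
    return VIRTIO_NET_RX_QUEUE_MIN_SIZE <= size <= VIRTQUEUE_MAX_SIZE and is_pow2
```

So 256, 512, and 1024 pass, while anything larger is rejected regardless of what the backend hardware could support.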



* Re: RFC: control virtqueue size by the vdpa tool
  2022-08-30 21:04   ` Si-Wei Liu
@ 2022-08-30 22:01     ` Michael S. Tsirkin
  2022-08-30 23:22       ` Si-Wei Liu
  0 siblings, 1 reply; 6+ messages in thread
From: Michael S. Tsirkin @ 2022-08-30 22:01 UTC (permalink / raw)
  To: Si-Wei Liu; +Cc: eperezma, Eli Cohen, virtualization

On Tue, Aug 30, 2022 at 02:04:55PM -0700, Si-Wei Liu wrote:
> 
> 
> On 8/30/2022 12:58 PM, Michael S. Tsirkin wrote:
> > On Tue, Aug 30, 2022 at 06:22:31AM +0000, Eli Cohen wrote:
> > > 
> > > Hi,
> > > 
> > > 
> > > I have been experimenting with different queue sizes with mlx5_vdpa and noticed
> > > that queue size can affect performance.
> > > 
> > > I would like to propose an extension to vdpa tool to allow to specify the queue
> > > size. Valid values will conform to the max of 32768 specified by the spec.
> > > 
> > > 
> > > “vdpa mgmtdev show” will have another line specifying the valid range for a
> > > management device which could be narrower than the spec allows. This range will
> > > be valid for data queues only (not for control VQ).
> > > 
> > > Another line will display the default queue size
> > > 
> > > 
> > > Example:
> > > 
> > > $ vdpa mgmtdev show
> > > 
> > > auxiliary/mlx5_core.sf.6:
> > > 
> > >    supported_classes net
> > > 
> > >    max_supported_vqs 65
> > > 
> > >    dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN
> > > MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
> > > 
> > >    data queue range 256-4096
> > > 
> > >    default queue size 256
> > > 
> > > 
> > > When you create the vdpa device you can specify the requested value:
> > > 
> > > $ vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.6 max_vqp 1 mtu 9000
> > > queue_size 1024
> > > 
> > 
> > A follow up question: isn't it enough to control the size
> > from qemu? do we need ability to control it at the kernel level?
> > 
> Right, I think today we can optionally control the queue size from qemu via
> rx_queue_size or tx_queue_size, but it has a limit of 1024 (btw why it has
> such limit, which is relatively lower in my opinion). I think what was
> missing for QEMU is to query the max number of queue size that the hardware
> can support from the backend.
> 
> -Siwei
> 

Okay, sure. My question is: how important is it to control it in the
kernel?

-- 
MST


* Re: RFC: control virtqueue size by the vdpa tool
  2022-08-30 22:01     ` Michael S. Tsirkin
@ 2022-08-30 23:22       ` Si-Wei Liu
       [not found]         ` <DM8PR12MB54006D58A2ACA88CC786B9A8AB789@DM8PR12MB5400.namprd12.prod.outlook.com>
  0 siblings, 1 reply; 6+ messages in thread
From: Si-Wei Liu @ 2022-08-30 23:22 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: eperezma, Eli Cohen, virtualization



On 8/30/2022 3:01 PM, Michael S. Tsirkin wrote:
> On Tue, Aug 30, 2022 at 02:04:55PM -0700, Si-Wei Liu wrote:
>>
>> On 8/30/2022 12:58 PM, Michael S. Tsirkin wrote:
>>> On Tue, Aug 30, 2022 at 06:22:31AM +0000, Eli Cohen wrote:
>>>> Hi,
>>>>
>>>>
>>>> I have been experimenting with different queue sizes with mlx5_vdpa and noticed
>>>> that queue size can affect performance.
>>>>
>>>> I would like to propose an extension to vdpa tool to allow to specify the queue
>>>> size. Valid values will conform to the max of 32768 specified by the spec.
>>>>
>>>>
>>>> “vdpa mgmtdev show” will have another line specifying the valid range for a
>>>> management device which could be narrower than the spec allows. This range will
>>>> be valid for data queues only (not for control VQ).
>>>>
>>>> Another line will display the default queue size
>>>>
>>>>
>>>> Example:
>>>>
>>>> $ vdpa mgmtdev show
>>>>
>>>> auxiliary/mlx5_core.sf.6:
>>>>
>>>>     supported_classes net
>>>>
>>>>     max_supported_vqs 65
>>>>
>>>>     dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN
>>>> MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
>>>>
>>>>     data queue range 256-4096
>>>>
>>>>     default queue size 256
>>>>
>>>>
>>>> When you create the vdpa device you can specify the requested value:
>>>>
>>>> $ vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.6 max_vqp 1 mtu 9000
>>>> queue_size 1024
>>>>
>>> A follow up question: isn't it enough to control the size
>>> from qemu? do we need ability to control it at the kernel level?
>>>
>> Right, I think today we can optionally control the queue size from qemu via
>> rx_queue_size or tx_queue_size, but it has a limit of 1024 (btw why it has
>> such limit, which is relatively lower in my opinion). I think what was
>> missing for QEMU is to query the max number of queue size that the hardware
>> can support from the backend.
>>
>> -Siwei
>>
> okay sure. my question is how important is it to control it in the
> kernel?
I don't have a specific use case for that (in the kernel).

-Siwei

* Re: RFC: control virtqueue size by the vdpa tool
       [not found]         ` <DM8PR12MB54006D58A2ACA88CC786B9A8AB789@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-08-31 22:22           ` Si-Wei Liu
  0 siblings, 0 replies; 6+ messages in thread
From: Si-Wei Liu @ 2022-08-31 22:22 UTC (permalink / raw)
  To: Eli Cohen, Michael S. Tsirkin; +Cc: eperezma, virtualization



On 8/30/2022 10:22 PM, Eli Cohen wrote:
>> From: Si-Wei Liu <si-wei.liu@oracle.com>
>> Sent: Wednesday, August 31, 2022 2:23 AM
>> To: Michael S. Tsirkin <mst@redhat.com>
>> Cc: Eli Cohen <elic@nvidia.com>; virtualization@lists.linux-foundation.org;
>> Jason Wang <jasowang@redhat.com>; eperezma@redhat.com
>> Subject: Re: RFC: control virtqueue size by the vdpa tool
>>
>>
>>
>> On 8/30/2022 3:01 PM, Michael S. Tsirkin wrote:
>>> On Tue, Aug 30, 2022 at 02:04:55PM -0700, Si-Wei Liu wrote:
>>>> On 8/30/2022 12:58 PM, Michael S. Tsirkin wrote:
>>>>> On Tue, Aug 30, 2022 at 06:22:31AM +0000, Eli Cohen wrote:
>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>> I have been experimenting with different queue sizes with mlx5_vdpa
>> and noticed
>>>>>> that queue size can affect performance.
>>>>>>
>>>>>> I would like to propose an extension to vdpa tool to allow to specify
>> the queue
>>>>>> size. Valid values will conform to the max of 32768 specified by the
>> spec.
>>>>>>
>>>>>> “vdpa mgmtdev show” will have another line specifying the valid range
>> for a
>>>>>> management device which could be narrower than the spec allows.
>> This range will
>>>>>> be valid for data queues only (not for control VQ).
>>>>>>
>>>>>> Another line will display the default queue size
>>>>>>
>>>>>>
>>>>>> Example:
>>>>>>
>>>>>> $ vdpa mgmtdev show
>>>>>>
>>>>>> auxiliary/mlx5_core.sf.6:
>>>>>>
>>>>>>      supported_classes net
>>>>>>
>>>>>>      max_supported_vqs 65
>>>>>>
>>>>>>      dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6
>> STATUS CTRL_VQ CTRL_VLAN
>>>>>> MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
>>>>>>
>>>>>>      data queue range 256-4096
>>>>>>
>>>>>>      default queue size 256
>>>>>>
>>>>>>
>>>>>> When you create the vdpa device you can specify the requested value:
>>>>>>
>>>>>> $ vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.6
>> max_vqp 1 mtu 9000
>>>>>> queue_size 1024
>>>>>>
>>>>> A follow up question: isn't it enough to control the size
>>>>> from qemu? do we need ability to control it at the kernel level?
>>>>>
>>>> Right, I think today we can optionally control the queue size from qemu
>> via
>>>> rx_queue_size or tx_queue_size, but it has a limit of 1024 (btw why it has
>>>> such limit, which is relatively lower in my opinion). I think what was
>>>> missing for QEMU is to query the max number of queue size that the
>> hardware
>>>> can support from the backend.
> I agree that ethtool is the way to go.
> BTW, Si-Wei, can you point to the code that limits the configuration to 1024?
It's in QEMU's virtio_net_device_realize():

     virtio_net_set_config_size(n, n->host_features);
     virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);

     /*
      * We set a lower limit on RX queue size to what it always was.
      * Guests that want a smaller ring can always resize it without
      * help from us (using virtio 1 and up).
      */
     if (n->net_conf.rx_queue_size < VIRTIO_NET_RX_QUEUE_MIN_SIZE ||
         n->net_conf.rx_queue_size > VIRTQUEUE_MAX_SIZE ||
         !is_power_of_2(n->net_conf.rx_queue_size)) {
         error_setg(errp, "Invalid rx_queue_size (= %" PRIu16 "), "
                    "must be a power of 2 between %d and %d.",
                    n->net_conf.rx_queue_size, VIRTIO_NET_RX_QUEUE_MIN_SIZE,
                    VIRTQUEUE_MAX_SIZE);
         virtio_cleanup(vdev);
         return;
     }

-Siwei
> And if ethtool does not provide a way to show the max we can add this support in the future.
>
>>>> -Siwei
>>>>
>>> okay sure. my question is how important is it to control it in the
>>> kernel?
>> I don't have a specific use case for that (in kernel)
>>
>> -Siwei


end of thread, other threads:[~2022-08-31 22:23 UTC | newest]

Thread overview: 6+ messages
-- links below jump to the message on this page --
     [not found] <DM8PR12MB5400FEB0322A9FD6B3271D45AB799@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-08-30  8:21 ` RFC: control virtqueue size by the vdpa tool Michael S. Tsirkin
2022-08-30 19:58 ` Michael S. Tsirkin
2022-08-30 21:04   ` Si-Wei Liu
2022-08-30 22:01     ` Michael S. Tsirkin
2022-08-30 23:22       ` Si-Wei Liu
     [not found]         ` <DM8PR12MB54006D58A2ACA88CC786B9A8AB789@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-08-31 22:22           ` Si-Wei Liu
