From: Si-Wei Liu <si-wei.liu@oracle.com>
To: Jason Wang <jasowang@redhat.com>, Eli Cohen <elic@nvidia.com>,
	mst@redhat.com, virtualization@lists.linux-foundation.org
Cc: lvivier@redhat.com, eperezma@redhat.com
Subject: Re: [PATCH v7 07/14] vdpa/mlx5: Support configuring max data virtqueue
Date: Fri, 7 Jan 2022 17:38:26 -0800
Message-ID: <0b0c6fd0-934c-8234-85da-6f99b5a3fe4d@oracle.com>
In-Reply-To: <d6b55b8a-c119-6316-3b85-27b097390cfe@redhat.com>



On 1/6/2022 9:43 PM, Jason Wang wrote:
>
> On 1/7/2022 9:50 AM, Si-Wei Liu wrote:
>>
>>
>> On 1/6/2022 5:27 PM, Si-Wei Liu wrote:
>>>
>>>
>>> On 1/5/2022 3:46 AM, Eli Cohen wrote:
>>>> Check whether the max number of data virtqueue pairs was provided
>>>> when adding a new device and verify the new value does not exceed
>>>> device capabilities.
>>>>
>>>> In addition, change the arrays holding virtqueue and callback contexts
>>>> to be dynamically allocated.
>>>>
>>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>>> ---
>>>> v6 -> v7:
>>>> 1. Evaluate RQT table size based on config.max_virtqueue_pairs.
>>>>
>>>>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 51 ++++++++++++++++++++++---------
>>>>   1 file changed, 37 insertions(+), 14 deletions(-)
>>>>
>>>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>> index 4a2149f70f1e..d4720444bf78 100644
>>>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>> @@ -131,11 +131,6 @@ struct mlx5_vdpa_virtqueue {
>>>>       struct mlx5_vq_restore_info ri;
>>>>   };
>>>>
>>>> -/* We will remove this limitation once mlx5_vdpa_alloc_resources()
>>>> - * provides for driver space allocation
>>>> - */
>>>> -#define MLX5_MAX_SUPPORTED_VQS 16
>>>> -
>>>>   static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
>>>>   {
>>>>       if (unlikely(idx > mvdev->max_idx))
>>>> @@ -148,8 +143,8 @@ struct mlx5_vdpa_net {
>>>>       struct mlx5_vdpa_dev mvdev;
>>>>       struct mlx5_vdpa_net_resources res;
>>>>       struct virtio_net_config config;
>>>> -    struct mlx5_vdpa_virtqueue vqs[MLX5_MAX_SUPPORTED_VQS];
>>>> -    struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
>>>> +    struct mlx5_vdpa_virtqueue *vqs;
>>>> +    struct vdpa_callback *event_cbs;
>>>>         /* Serialize vq resources creation and destruction. This is required
>>>>        * since memory map might change and we need to destroy and create
>>>> @@ -1218,7 +1213,7 @@ static void suspend_vqs(struct mlx5_vdpa_net *ndev)
>>>>   {
>>>>       int i;
>>>>
>>>> -    for (i = 0; i < MLX5_MAX_SUPPORTED_VQS; i++)
>>>> +    for (i = 0; i < ndev->mvdev.max_vqs; i++)
>>>>           suspend_vq(ndev, &ndev->vqs[i]);
>>>>   }
>>>>   @@ -1244,8 +1239,14 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
>>>>       void *in;
>>>>       int i, j;
>>>>       int err;
>>>> +    int num;
>>>>
>>>> -    max_rqt = min_t(int, MLX5_MAX_SUPPORTED_VQS / 2,
>>>> +    if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_MQ)))
>>>> +        num = 1;
>>>> +    else
>>>> +        num = le16_to_cpu(ndev->config.max_virtqueue_pairs);
>>>> +
>>>> +    max_rqt = min_t(int, roundup_pow_of_two(num),
>>>>               1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
>>>>       if (max_rqt < 1)
>>>>           return -EOPNOTSUPP;
>>>> @@ -1262,7 +1263,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
>>>>       MLX5_SET(rqtc, rqtc, rqt_max_size, max_rqt);
>>>>       list = MLX5_ADDR_OF(rqtc, rqtc, rq_num[0]);
>>>>       for (i = 0, j = 0; i < max_rqt; i++, j += 2)
>>>> -        list[i] = cpu_to_be32(ndev->vqs[j % ndev->mvdev.max_vqs].virtq_id);
>>>> +        list[i] = cpu_to_be32(ndev->vqs[j % (2 * num)].virtq_id);
>>> Good catch. LGTM.
>>>
>>> Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
>>>
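
To make the quoted fix concrete, here is a minimal standalone sketch
(made-up numbers, not driver code; num stands for the negotiated number
of data virtqueue pairs) of how the old and new modulo bases diverge
once fewer virtqueues than max_vqs are actually created:

    /* Illustration only: with num = 3 negotiated VQ pairs, only data
     * VQs 0..5 exist.  Wrapping the RQT entries modulo max_vqs can
     * select a virtqueue that was never created, while modulo (2 * num)
     * always lands on an existing RX virtqueue.
     */
    #include <stdio.h>

    int main(void)
    {
        int num = 3;       /* negotiated data VQ pairs (hypothetical) */
        int max_vqs = 16;  /* device capability                       */
        int max_rqt = 4;   /* roundup_pow_of_two(num)                 */

        for (int i = 0, j = 0; i < max_rqt; i++, j += 2)
            printf("rqt[%d]: new j %% (2 * num) -> vq %d, old j %% max_vqs -> vq %d\n",
                   i, j % (2 * num), j % max_vqs);
        return 0;
    }
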
>> Apologies for replying to myself. It looks to me like we need to set
>> cur_num_vqs to the negotiated number of queue pairs; otherwise any
>> site referencing cur_num_vqs won't work properly. Further, we need to
>> validate that VIRTIO_NET_F_MQ was negotiated in handle_ctrl_mq()
>> before changing the number of queue pairs.
>
>
> Such validation is not mandated in the spec. And if we want to do 
> that, it needs to be done in a separate patch.
Agreed. Userspace (QEMU) has similar validation for software virtio-net
even though the spec doesn't mandate it.
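
For what it's worth, a minimal sketch of the kind of check meant above
(hypothetical helper name, not part of this patch, and it would be a
separate change anyway), assuming the driver's existing actual_features
field and the VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN/MAX bounds from the
virtio spec:

    /* Hypothetical sketch: refuse VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET when
     * VIRTIO_NET_F_MQ was not negotiated, so cur_num_vqs cannot change
     * for a device that never accepted MQ.
     */
    static bool mq_change_allowed(const struct mlx5_vdpa_dev *mvdev,
                                  u16 requested_pairs)
    {
        if (!(mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_MQ)))
            return false;
        return requested_pairs >= VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN &&
               requested_pairs <= VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX;
    }

handle_ctrl_mq() could call such a helper before touching cur_num_vqs
and return VIRTIO_NET_ERR if it fails.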

-Siwei

>
> Thanks
>
>
>>
>> So just disregard my previous R-b for this patch.
>>
>> Thanks,
>> -Siwei
>>
>>
>>>
>>>>         MLX5_SET(rqtc, rqtc, rqt_actual_size, max_rqt);
>>>>       err = mlx5_vdpa_create_rqt(&ndev->mvdev, in, inlen, &ndev->res.rqtn);
>>>> @@ -2220,7 +2221,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
>>>>       clear_vqs_ready(ndev);
>>>>       mlx5_vdpa_destroy_mr(&ndev->mvdev);
>>>>       ndev->mvdev.status = 0;
>>>> -    memset(ndev->event_cbs, 0, sizeof(ndev->event_cbs));
>>>> +    memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
>>>>       ndev->mvdev.actual_features = 0;
>>>>       ++mvdev->generation;
>>>>       if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
>>>> @@ -2293,6 +2294,8 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
>>>>       }
>>>>       mlx5_vdpa_free_resources(&ndev->mvdev);
>>>>       mutex_destroy(&ndev->reslock);
>>>> +    kfree(ndev->event_cbs);
>>>> +    kfree(ndev->vqs);
>>>>   }
>>>>     static struct vdpa_notification_area mlx5_get_vq_notification(struct vdpa_device *vdev, u16 idx)
>>>> @@ -2538,15 +2541,33 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>>>>           return -EOPNOTSUPP;
>>>>       }
>>>>
>>>> -    /* we save one virtqueue for control virtqueue should we require it */
>>>>       max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
>>>> -    max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
>>>> +    if (max_vqs < 2) {
>>>> +        dev_warn(mdev->device,
>>>> +             "%d virtqueues are supported. At least 2 are required\n",
>>>> +             max_vqs);
>>>> +        return -EAGAIN;
>>>> +    }
>>>> +
>>>> +    if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
>>>> +        if (add_config->net.max_vq_pairs > max_vqs / 2)
>>>> +            return -EINVAL;
>>>> +        max_vqs = min_t(u32, max_vqs, 2 * add_config->net.max_vq_pairs);
>>>> +    } else {
>>>> +        max_vqs = 2;
>>>> +    }
>>>>         ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
>>>>                    name, false);
>>>>       if (IS_ERR(ndev))
>>>>           return PTR_ERR(ndev);
>>>>
>>>> +    ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
>>>> +    ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
>>>> +    if (!ndev->vqs || !ndev->event_cbs) {
>>>> +        err = -ENOMEM;
>>>> +        goto err_alloc;
>>>> +    }
>>>>       ndev->mvdev.max_vqs = max_vqs;
>>>>       mvdev = &ndev->mvdev;
>>>>       mvdev->mdev = mdev;
>>>> @@ -2627,6 +2648,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>>>>           mlx5_mpfs_del_mac(pfmdev, config->mac);
>>>>   err_mtu:
>>>>       mutex_destroy(&ndev->reslock);
>>>> +err_alloc:
>>>>       put_device(&mvdev->vdev.dev);
>>>>       return err;
>>>>   }
>>>> @@ -2669,7 +2691,8 @@ static int mlx5v_probe(struct auxiliary_device *adev,
>>>>       mgtdev->mgtdev.ops = &mdev_ops;
>>>>       mgtdev->mgtdev.device = mdev->device;
>>>>       mgtdev->mgtdev.id_table = id_table;
>>>> -    mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
>>>> +    mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR) |
>>>> +                      BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
>>>>       mgtdev->madev = madev;
>>>>         err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
>>>
>>
>

