* RE: [PATCH 1/7] vdpa: Allow to configure max data virtqueues
[not found] ` <20211201195724.17503-2-elic@nvidia.com>
@ 2021-12-02 3:32 ` Parav Pandit via Virtualization
2021-12-02 3:50 ` Parav Pandit via Virtualization
2021-12-03 6:09 ` Si-Wei Liu
2 siblings, 0 replies; 17+ messages in thread
From: Parav Pandit via Virtualization @ 2021-12-02 3:32 UTC (permalink / raw)
To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma, si-wei.liu
> From: Eli Cohen <elic@nvidia.com>
> Sent: Thursday, December 2, 2021 1:27 AM
>
> Add netlink support to configure the max virtqueue pairs for a device.
> At least one pair is required. The maximum is dictated by the device.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/vdpa.c  | 14 +++++++++++++-
> include/linux/vdpa.h |  1 +
> 2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 7332a74a4b00..f06f949d5fa1 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -480,7 +480,8 @@ vdpa_nl_cmd_mgmtdev_get_dumpit(struct sk_buff *msg, struct netlink_callback *cb)
> }
>
> #define VDPA_DEV_NET_ATTRS_MASK ((1 << VDPA_ATTR_DEV_NET_CFG_MACADDR) | \
> -				 (1 << VDPA_ATTR_DEV_NET_CFG_MTU))
> +				 (1 << VDPA_ATTR_DEV_NET_CFG_MTU) | \
> +				 (1 << VDPA_ATTR_DEV_NET_CFG_MAX_VQP))
>
> static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *info)
> {
> @@ -506,6 +507,17 @@ static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *i
>
> 			nla_get_u16(nl_attrs[VDPA_ATTR_DEV_NET_CFG_MTU]);
> 		config.mask |= (1 << VDPA_ATTR_DEV_NET_CFG_MTU);
> 	}
> +	if (nl_attrs[VDPA_ATTR_DEV_NET_CFG_MAX_VQP]) {
> +		config.max_vq_pairs =
> +			nla_get_u16(nl_attrs[VDPA_ATTR_DEV_NET_CFG_MAX_VQP]);
I think you need to add to the vdpa_nl_policy[] array so that it gets validated for min and max range.
> + if (!config.max_vq_pairs) {
This will go away once nl_policy is used with NLA_U16_RANGE(1, 32768) for this attribute.
> + NL_SET_ERR_MSG_MOD(info->extack,
> + "At list one pair of VQs is required");
> + err = -EINVAL;
> + goto err;
> + }
> +		config.mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
> + }
>
> /* Skip checking capability if user didn't prefer to configure any
> * device networking attributes. It is likely that user might have used
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index c3011ccda430..820621c59365 100644
> --- a/include/linux/vdpa.h
> +++ b/include/linux/vdpa.h
> @@ -101,6 +101,7 @@ struct vdpa_dev_set_config {
> u16 mtu;
> } net;
> u64 mask;
> + u16 max_vq_pairs;
This is a net-only field. Please move it inside the net struct above.
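To illustrate the earlier suggestion about vdpa_nl_policy[]: below is a minimal userspace sketch of what a range-validating policy entry buys you. The kernel expresses this declaratively in a policy table so the netlink core rejects out-of-range values before the doit() handler runs; all names here are hypothetical stand-ins, not the kernel API.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of a policy entry such as a (1, 32768) range on a
 * u16 attribute. With the real policy in place, the explicit zero check
 * in the handler becomes unnecessary. */
struct range_policy {
	uint16_t min;
	uint16_t max;
};

/* Returns 0 if val is within the policy's range, -1 otherwise --
 * mirroring what the netlink core's validation would decide. */
static int validate_attr(const struct range_policy *p, uint16_t val)
{
	return (val >= p->min && val <= p->max) ? 0 : -1;
}
```

With such a policy registered, a max_vq_pairs of 0 never reaches the handler, so the EINVAL branch in the patch could be dropped.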
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH 2/7] vdpa/mlx5: Fix config_attr_mask assignment
[not found] ` <20211201195724.17503-3-elic@nvidia.com>
@ 2021-12-02 3:34 ` Parav Pandit via Virtualization
2021-12-03 2:20 ` Jason Wang
1 sibling, 0 replies; 17+ messages in thread
From: Parav Pandit via Virtualization @ 2021-12-02 3:34 UTC (permalink / raw)
To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma, si-wei.liu
> From: Eli Cohen <elic@nvidia.com>
> Sent: Thursday, December 2, 2021 1:27 AM
>
> Fix VDPA_ATTR_DEV_NET_CFG_MACADDR assignment to be explicit 64 bit
> assignment.
>
> No issue was seen since the value is well below 64 bit max value.
> Nevertheless it needs to be fixed.
>
> Fixes: a007d940040c ("vdpa/mlx5: Support configuration of MAC")
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 63813fbb5f62..b66f41ccbac2 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2676,7 +2676,7 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> mgtdev->mgtdev.ops = &mdev_ops;
> mgtdev->mgtdev.device = mdev->device;
> mgtdev->mgtdev.id_table = id_table;
> -	mgtdev->mgtdev.config_attr_mask = (1 << VDPA_ATTR_DEV_NET_CFG_MACADDR);
> +	mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
> mgtdev->madev = madev;
>
> err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
> --
> 2.33.1
Reviewed-by: Parav Pandit <parav@nvidia.com>
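For context on why the explicit 64-bit form matters even though no bug was seen: (1 << n) is evaluated in plain int, which cannot portably set a bit at position 31 or above in a u64 mask. A small standalone sketch (DEMO_BIT_ULL is a stand-in mirroring the kernel's BIT_ULL, not the kernel header itself):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's BIT_ULL(n): force the shift to happen in a
 * 64-bit unsigned type. */
#define DEMO_BIT_ULL(n) (1ULL << (n))

/* VDPA_ATTR_DEV_NET_CFG_MACADDR happens to be a small attribute number,
 * which is why (1 << attr) produced no visible bug -- but a mask declared
 * u64 should be built with the 64-bit form regardless, since (1 << 33)
 * would be undefined behaviour with a 32-bit int. */
static uint64_t set_attr_bit(uint64_t mask, unsigned int attr)
{
	return mask | DEMO_BIT_ULL(attr);
}
```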
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH 1/7] vdpa: Allow to configure max data virtqueues
[not found] ` <20211201195724.17503-2-elic@nvidia.com>
2021-12-02 3:32 ` [PATCH 1/7] vdpa: Allow to configure max data virtqueues Parav Pandit via Virtualization
@ 2021-12-02 3:50 ` Parav Pandit via Virtualization
2021-12-03 6:09 ` Si-Wei Liu
2 siblings, 0 replies; 17+ messages in thread
From: Parav Pandit via Virtualization @ 2021-12-02 3:50 UTC (permalink / raw)
To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma, si-wei.liu
> From: Eli Cohen <elic@nvidia.com>
> Sent: Thursday, December 2, 2021 1:27 AM
>
> Add netlink support to configure the max virtqueue pairs for a device.
> At least one pair is required. The maximum is dictated by the device.
>
The cover letter and this patch are missing an iproute2 example showing how to do this.
Please add one.
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 2/7] vdpa/mlx5: Fix config_attr_mask assignment
[not found] ` <20211201195724.17503-3-elic@nvidia.com>
2021-12-02 3:34 ` [PATCH 2/7] vdpa/mlx5: Fix config_attr_mask assignment Parav Pandit via Virtualization
@ 2021-12-03 2:20 ` Jason Wang
1 sibling, 0 replies; 17+ messages in thread
From: Jason Wang @ 2021-12-03 2:20 UTC (permalink / raw)
To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu
On Thu, Dec 2, 2021 at 3:58 AM Eli Cohen <elic@nvidia.com> wrote:
>
> Fix VDPA_ATTR_DEV_NET_CFG_MACADDR assignment to be explicit 64 bit
> assignment.
>
> No issue was seen since the value is well below 64 bit max value.
> Nevertheless it needs to be fixed.
>
> Fixes: a007d940040c ("vdpa/mlx5: Support configuration of MAC")
> Signed-off-by: Eli Cohen <elic@nvidia.com>
Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 63813fbb5f62..b66f41ccbac2 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2676,7 +2676,7 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> mgtdev->mgtdev.ops = &mdev_ops;
> mgtdev->mgtdev.device = mdev->device;
> mgtdev->mgtdev.id_table = id_table;
> - mgtdev->mgtdev.config_attr_mask = (1 << VDPA_ATTR_DEV_NET_CFG_MACADDR);
> + mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
> mgtdev->madev = madev;
>
> err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
> --
> 2.33.1
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 3/7] vdpa/mlx5: Support configuring max data virtqueue pairs
[not found] ` <20211201195724.17503-4-elic@nvidia.com>
@ 2021-12-03 2:29 ` Jason Wang
2021-12-03 7:28 ` Si-Wei Liu
1 sibling, 0 replies; 17+ messages in thread
From: Jason Wang @ 2021-12-03 2:29 UTC (permalink / raw)
To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu
On Thu, Dec 2, 2021 at 3:58 AM Eli Cohen <elic@nvidia.com> wrote:
>
> Check whether the max number of data virtqueue pairs was provided when a
> adding a new device and verify the new value does not exceed device
> capabilities.
>
> In addition, change the arrays holding virtqueue and callback contexts
> to be dynamically allocated.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 33 ++++++++++++++++++++-----------
> 1 file changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index b66f41ccbac2..62aba5dbd8fa 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -131,11 +131,6 @@ struct mlx5_vdpa_virtqueue {
> struct mlx5_vq_restore_info ri;
> };
>
> -/* We will remove this limitation once mlx5_vdpa_alloc_resources()
> - * provides for driver space allocation
> - */
> -#define MLX5_MAX_SUPPORTED_VQS 16
> -
> static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
> {
> if (unlikely(idx > mvdev->max_idx))
> @@ -148,8 +143,8 @@ struct mlx5_vdpa_net {
> struct mlx5_vdpa_dev mvdev;
> struct mlx5_vdpa_net_resources res;
> struct virtio_net_config config;
> - struct mlx5_vdpa_virtqueue vqs[MLX5_MAX_SUPPORTED_VQS];
> - struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
> + struct mlx5_vdpa_virtqueue *vqs;
> + struct vdpa_callback *event_cbs;
>
> /* Serialize vq resources creation and destruction. This is required
> * since memory map might change and we need to destroy and create
> @@ -1218,7 +1213,7 @@ static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> {
> int i;
>
> - for (i = 0; i < MLX5_MAX_SUPPORTED_VQS; i++)
> + for (i = 0; i < ndev->mvdev.max_vqs; i++)
> suspend_vq(ndev, &ndev->vqs[i]);
> }
>
> @@ -1245,7 +1240,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
> int i, j;
> int err;
>
> - max_rqt = min_t(int, MLX5_MAX_SUPPORTED_VQS / 2,
> + max_rqt = min_t(int, ndev->mvdev.max_vqs / 2,
> 1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
> if (max_rqt < 1)
> return -EOPNOTSUPP;
> @@ -2235,7 +2230,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
> clear_vqs_ready(ndev);
> mlx5_vdpa_destroy_mr(&ndev->mvdev);
> ndev->mvdev.status = 0;
> - memset(ndev->event_cbs, 0, sizeof(ndev->event_cbs));
> + memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
> ndev->mvdev.actual_features = 0;
> ++mvdev->generation;
> if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> @@ -2308,6 +2303,8 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
> }
> mlx5_vdpa_free_resources(&ndev->mvdev);
> mutex_destroy(&ndev->reslock);
> + kfree(ndev->event_cbs);
> + kfree(ndev->vqs);
> }
>
> static struct vdpa_notification_area mlx5_get_vq_notification(struct vdpa_device *vdev, u16 idx)
> @@ -2547,13 +2544,24 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>
> /* we save one virtqueue for control virtqueue should we require it */
> max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
> - max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
> + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> + if (add_config->max_vq_pairs & (add_config->max_vq_pairs - 1) ||
> + add_config->max_vq_pairs > max_vqs / 2)
> + return -EINVAL;
> + max_vqs = min_t(u32, max_vqs, 2 * add_config->max_vq_pairs);
> + }
Not for this patch, but this seems to mean that without max_vqp, a vdpa
device with the maximum number of qps is provisioned? Is this intended?
>
> ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
> name, false);
> if (IS_ERR(ndev))
> return PTR_ERR(ndev);
>
> + ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
> + ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
> + if (!ndev->vqs || !ndev->event_cbs) {
Do we need to kfree if any of the two allocations succeeded?
Thanks
> + err = -ENOMEM;
> + goto err_mtu;
> + }
> ndev->mvdev.max_vqs = max_vqs;
> mvdev = &ndev->mvdev;
> mvdev->mdev = mdev;
> @@ -2676,7 +2684,8 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> mgtdev->mgtdev.ops = &mdev_ops;
> mgtdev->mgtdev.device = mdev->device;
> mgtdev->mgtdev.id_table = id_table;
> - mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
> + mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR) |
> + BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
> mgtdev->madev = madev;
>
> err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
> --
> 2.33.1
>
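On the question above about freeing when only one of the two allocations succeeded: kfree() is defined to be a no-op on NULL, so a cleanup path may release both pointers unconditionally regardless of which allocation failed. A plain-C sketch of the pattern (free() gives the same NULL guarantee; all names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

/* Two buffers allocated together; on failure of either, both are
 * released. Freeing a NULL pointer is a defined no-op, so no per-pointer
 * checks are needed in the error path. */
struct demo_net {
	int *vqs;
	int *event_cbs;
};

static int demo_alloc(struct demo_net *n, size_t max_vqs)
{
	n->vqs = calloc(max_vqs, sizeof(*n->vqs));
	n->event_cbs = calloc(max_vqs + 1, sizeof(*n->event_cbs));
	if (!n->vqs || !n->event_cbs) {
		free(n->vqs);		/* safe even if NULL */
		free(n->event_cbs);	/* safe even if NULL */
		n->vqs = NULL;
		n->event_cbs = NULL;
		return -1;
	}
	return 0;
}
```

In the patch, the same effect is reached as long as the error path ends up in a teardown routine that kfrees both pointers.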
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 4/7] vdpa/mlx5: Report queried max supported virtqueues
[not found] ` <20211201195724.17503-5-elic@nvidia.com>
@ 2021-12-03 2:31 ` Jason Wang
2021-12-03 7:32 ` Si-Wei Liu
1 sibling, 0 replies; 17+ messages in thread
From: Jason Wang @ 2021-12-03 2:31 UTC (permalink / raw)
To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu
On Thu, Dec 2, 2021 at 3:58 AM Eli Cohen <elic@nvidia.com> wrote:
>
> Report back in get_vq_num_max() the correct number of virtqueues queried
> from the device instead of using a constant value.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 62aba5dbd8fa..336eda3dcf41 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1992,10 +1992,10 @@ static void mlx5_vdpa_set_config_cb(struct vdpa_device *vdev, struct vdpa_callba
> ndev->config_cb = *cb;
> }
>
> -#define MLX5_VDPA_MAX_VQ_ENTRIES 256
> static u16 mlx5_vdpa_get_vq_num_max(struct vdpa_device *vdev)
> {
> - return MLX5_VDPA_MAX_VQ_ENTRIES;
> + struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> + return MLX5_CAP_DEV_VDPA_EMULATION(mvdev->mdev, max_num_virtio_queues);
> }
I may have missed something, but I think this should return not the
hardware limit but the value that was set via netlink.
Thanks
>
> static u32 mlx5_vdpa_get_device_id(struct vdpa_device *vdev)
> --
> 2.33.1
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 5/7] vdpa: Add support for querying control virtqueue index
[not found] ` <20211201195724.17503-6-elic@nvidia.com>
@ 2021-12-03 2:35 ` Jason Wang
[not found] ` <20211207075732.GC15767@mtl-vdi-166.wap.labs.mlnx>
0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2021-12-03 2:35 UTC (permalink / raw)
To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu
On Thu, Dec 2, 2021 at 3:58 AM Eli Cohen <elic@nvidia.com> wrote:
>
> Add netlink attribute and callback function to query the control VQ
> index of a device.
It's better to explain the motivation for this. Actually, we can deduce
it from max_virtqueue_pairs if I was not wrong. So it means the
value varies depending on whether VIRTIO_NET_F_MQ is negotiated, which can
change after a query. This may confuse userspace
sometimes.
Thanks
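The deduction Jason refers to can be sketched in plain C. Per the virtio spec, the control VQ has index 2*N, where N is max_virtqueue_pairs when VIRTIO_NET_F_MQ is negotiated and 1 otherwise, so the index is derivable in userspace and can change with feature negotiation (this sketch is illustrative; the macro is a stand-in for the real feature-bit definition):

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_VIRTIO_NET_F_MQ (1ULL << 22)	/* feature bit 22 per virtio spec */

/* Control VQ index as the virtio-net spec defines it: 2 * N, where
 * N = max_virtqueue_pairs if VIRTIO_NET_F_MQ was negotiated, else 1. */
static uint16_t ctrl_vq_idx(uint64_t negotiated_features, uint16_t max_vq_pairs)
{
	uint16_t n = (negotiated_features & DEMO_VIRTIO_NET_F_MQ) ? max_vq_pairs : 1;

	return 2 * n;
}
```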
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/vdpa.c | 18 ++++++++++++++++++
> include/linux/vdpa.h | 1 +
> include/uapi/linux/vdpa.h | 1 +
> 3 files changed, 20 insertions(+)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index f06f949d5fa1..ca3ab0f46188 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -712,6 +712,20 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
> return msg->len;
> }
>
> +static int vdpa_dev_net_ctrl_vq_fill(struct vdpa_device *vdev,
> + struct sk_buff *msg, u64 features)
> +{
> + u16 val_u16 = 0;
> +
> + if (features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ) &&
> + vdev->config->get_ctrl_vq_idx) {
> + val_u16 = vdev->config->get_ctrl_vq_idx(vdev);
> + return nla_put_u16(msg, VDPA_ATTR_DEV_CTRL_VQ_IDX, val_u16);
> + }
> +
> + return 0;
> +}
> +
> static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
> struct sk_buff *msg, u64 features,
> const struct virtio_net_config *config)
> @@ -730,6 +744,7 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> struct virtio_net_config config = {};
> u64 features;
> u16 val_u16;
> + int err;
>
> vdpa_get_config(vdev, 0, &config, sizeof(config));
>
> @@ -746,6 +761,9 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> return -EMSGSIZE;
>
> features = vdev->config->get_features(vdev);
> + err = vdpa_dev_net_ctrl_vq_fill(vdev, msg, features);
> + if (err)
> + return err;
>
> return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
> }
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index 820621c59365..fca9bfeed9ba 100644
> --- a/include/linux/vdpa.h
> +++ b/include/linux/vdpa.h
> @@ -274,6 +274,7 @@ struct vdpa_config_ops {
> (*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
> /* vq irq is not expected to be changed once DRIVER_OK is set */
> int (*get_vq_irq)(struct vdpa_device *vdev, u16 idx);
> + u16 (*get_ctrl_vq_idx)(struct vdpa_device *vdev);
>
> /* Device ops */
> u32 (*get_vq_align)(struct vdpa_device *vdev);
> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> index a252f06f9dfd..2e3a7f89f42d 100644
> --- a/include/uapi/linux/vdpa.h
> +++ b/include/uapi/linux/vdpa.h
> @@ -34,6 +34,7 @@ enum vdpa_attr {
> VDPA_ATTR_DEV_MAX_VQS, /* u32 */
> VDPA_ATTR_DEV_MAX_VQ_SIZE, /* u16 */
> VDPA_ATTR_DEV_MIN_VQ_SIZE, /* u16 */
> + VDPA_ATTR_DEV_CTRL_VQ_IDX, /* u16 */
>
> VDPA_ATTR_DEV_NET_CFG_MACADDR, /* binary */
> VDPA_ATTR_DEV_NET_STATUS, /* u8 */
> --
> 2.33.1
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 7/7] vdpa/mlx5: Restore cur_num_vqs in case of failure in change_num_qps()
[not found] ` <20211201195724.17503-8-elic@nvidia.com>
@ 2021-12-03 2:37 ` Jason Wang
0 siblings, 0 replies; 17+ messages in thread
From: Jason Wang @ 2021-12-03 2:37 UTC (permalink / raw)
To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu
On Thu, Dec 2, 2021 at 3:58 AM Eli Cohen <elic@nvidia.com> wrote:
>
> Restore ndev->cur_num_vqs to the original value in case change_num_qps()
> fails.
>
> Fixes: 52893733f2c5 ("vdpa/mlx5: Add multiqueue support")
> Signed-off-by: Eli Cohen <elic@nvidia.com>
Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 81a602ff68b5..baeff15d4b95 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1552,6 +1552,8 @@ static int change_num_qps(struct mlx5_vdpa_dev *mvdev, int newqps)
> for (--i; i >= cur_qps; --i)
> teardown_vq(ndev, &ndev->vqs[i]);
>
> + ndev->cur_num_vqs = 2 * cur_qps;
> +
> return err;
> }
>
> --
> 2.33.1
>
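The fix above completes a common rollback pattern: on mid-loop failure, tear down the queues created in this call *and* restore the bookkeeping counter, so later paths don't act on a stale queue count. A hedged plain-C sketch (all names are illustrative, not the driver's; the loop bounds are simplified):

```c
#include <assert.h>

struct demo_dev {
	int cur_num_vqs;
	int vq_up[64];
};

/* Simulated per-VQ setup; fails when i == fail_at (pass -1 for success). */
static int setup_vq(struct demo_dev *d, int i, int fail_at)
{
	if (i == fail_at)
		return -1;
	d->vq_up[i] = 1;
	return 0;
}

static int change_num_qps(struct demo_dev *d, int newqps, int fail_at)
{
	int cur_qps = d->cur_num_vqs / 2;
	int i, err;

	d->cur_num_vqs = 2 * newqps;
	for (i = 2 * cur_qps; i < 2 * newqps; i++) {
		err = setup_vq(d, i, fail_at);
		if (err)
			goto clean;
	}
	return 0;

clean:
	/* Undo only what this call set up... */
	for (--i; i >= 2 * cur_qps; --i)
		d->vq_up[i] = 0;
	/* ...and restore the counter -- the step the fix adds. */
	d->cur_num_vqs = 2 * cur_qps;
	return err;
}
```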
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 1/7] vdpa: Allow to configure max data virtqueues
[not found] ` <20211201195724.17503-2-elic@nvidia.com>
2021-12-02 3:32 ` [PATCH 1/7] vdpa: Allow to configure max data virtqueues Parav Pandit via Virtualization
2021-12-02 3:50 ` Parav Pandit via Virtualization
@ 2021-12-03 6:09 ` Si-Wei Liu
2 siblings, 0 replies; 17+ messages in thread
From: Si-Wei Liu @ 2021-12-03 6:09 UTC (permalink / raw)
To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma
On 12/1/2021 11:57 AM, Eli Cohen wrote:
> Add netlink support to configure the max virtqueue pairs for a device.
> At least one pair is required. The maximum is dictated by the device.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/vdpa.c | 14 +++++++++++++-
> include/linux/vdpa.h | 1 +
> 2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 7332a74a4b00..f06f949d5fa1 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -480,7 +480,8 @@ vdpa_nl_cmd_mgmtdev_get_dumpit(struct sk_buff *msg, struct netlink_callback *cb)
> }
>
> #define VDPA_DEV_NET_ATTRS_MASK ((1 << VDPA_ATTR_DEV_NET_CFG_MACADDR) | \
> - (1 << VDPA_ATTR_DEV_NET_CFG_MTU))
> + (1 << VDPA_ATTR_DEV_NET_CFG_MTU) | \
> + (1 << VDPA_ATTR_DEV_NET_CFG_MAX_VQP))
>
> static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *info)
> {
> @@ -506,6 +507,17 @@ static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *i
> nla_get_u16(nl_attrs[VDPA_ATTR_DEV_NET_CFG_MTU]);
> config.mask |= (1 << VDPA_ATTR_DEV_NET_CFG_MTU);
I wonder if it's worth another patch to fix these altogether:
BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MTU)
> }
> + if (nl_attrs[VDPA_ATTR_DEV_NET_CFG_MAX_VQP]) {
> + config.max_vq_pairs =
> + nla_get_u16(nl_attrs[VDPA_ATTR_DEV_NET_CFG_MAX_VQP]);
> + if (!config.max_vq_pairs) {
> + NL_SET_ERR_MSG_MOD(info->extack,
> + "At list one pair of VQs is required");
s/list/least/
> + err = -EINVAL;
> + goto err;
> + }
> + config.mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
> + }
>
> /* Skip checking capability if user didn't prefer to configure any
> * device networking attributes. It is likely that user might have used
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index c3011ccda430..820621c59365 100644
> --- a/include/linux/vdpa.h
> +++ b/include/linux/vdpa.h
> @@ -101,6 +101,7 @@ struct vdpa_dev_set_config {
> u16 mtu;
> } net;
> u64 mask;
> + u16 max_vq_pairs;
This should be a vdpa net-specific config. Move it under mtu?
-Siwei
> };
>
> /**
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 3/7] vdpa/mlx5: Support configuring max data virtqueue pairs
[not found] ` <20211201195724.17503-4-elic@nvidia.com>
2021-12-03 2:29 ` [PATCH 3/7] vdpa/mlx5: Support configuring max data virtqueue pairs Jason Wang
@ 2021-12-03 7:28 ` Si-Wei Liu
2021-12-03 7:31 ` Parav Pandit via Virtualization
[not found] ` <20211207081946.GE15767@mtl-vdi-166.wap.labs.mlnx>
1 sibling, 2 replies; 17+ messages in thread
From: Si-Wei Liu @ 2021-12-03 7:28 UTC (permalink / raw)
To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma
On 12/1/2021 11:57 AM, Eli Cohen wrote:
> Check whether the max number of data virtqueue pairs was provided when a
> adding a new device and verify the new value does not exceed device
> capabilities.
>
> In addition, change the arrays holding virtqueue and callback contexts
> to be dynamically allocated.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 33 ++++++++++++++++++++-----------
> 1 file changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index b66f41ccbac2..62aba5dbd8fa 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -131,11 +131,6 @@ struct mlx5_vdpa_virtqueue {
> struct mlx5_vq_restore_info ri;
> };
>
> -/* We will remove this limitation once mlx5_vdpa_alloc_resources()
> - * provides for driver space allocation
Is this comment outdated, i.e. does mlx5_vdpa_alloc_resources() already
provide driver space allocation?
> - */
> -#define MLX5_MAX_SUPPORTED_VQS 16
> -
> static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
> {
> if (unlikely(idx > mvdev->max_idx))
> @@ -148,8 +143,8 @@ struct mlx5_vdpa_net {
> struct mlx5_vdpa_dev mvdev;
> struct mlx5_vdpa_net_resources res;
> struct virtio_net_config config;
> - struct mlx5_vdpa_virtqueue vqs[MLX5_MAX_SUPPORTED_VQS];
> - struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
> + struct mlx5_vdpa_virtqueue *vqs;
> + struct vdpa_callback *event_cbs;
>
> /* Serialize vq resources creation and destruction. This is required
> * since memory map might change and we need to destroy and create
> @@ -1218,7 +1213,7 @@ static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> {
> int i;
>
> - for (i = 0; i < MLX5_MAX_SUPPORTED_VQS; i++)
> + for (i = 0; i < ndev->mvdev.max_vqs; i++)
> suspend_vq(ndev, &ndev->vqs[i]);
> }
>
> @@ -1245,7 +1240,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
> int i, j;
> int err;
>
> - max_rqt = min_t(int, MLX5_MAX_SUPPORTED_VQS / 2,
> + max_rqt = min_t(int, ndev->mvdev.max_vqs / 2,
> 1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
> if (max_rqt < 1)
> return -EOPNOTSUPP;
> @@ -2235,7 +2230,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
> clear_vqs_ready(ndev);
> mlx5_vdpa_destroy_mr(&ndev->mvdev);
> ndev->mvdev.status = 0;
> - memset(ndev->event_cbs, 0, sizeof(ndev->event_cbs));
> + memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
> ndev->mvdev.actual_features = 0;
> ++mvdev->generation;
> if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> @@ -2308,6 +2303,8 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
> }
> mlx5_vdpa_free_resources(&ndev->mvdev);
> mutex_destroy(&ndev->reslock);
> + kfree(ndev->event_cbs);
> + kfree(ndev->vqs);
> }
>
> static struct vdpa_notification_area mlx5_get_vq_notification(struct vdpa_device *vdev, u16 idx)
> @@ -2547,13 +2544,24 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>
> /* we save one virtqueue for control virtqueue should we require it */
> max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
> - max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
> + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> + if (add_config->max_vq_pairs & (add_config->max_vq_pairs - 1) ||
> + add_config->max_vq_pairs > max_vqs / 2)
> + return -EINVAL;
It'd be best to describe the failure cause here; the power-of-2
limitation is known to me, but it isn't friendly enough for a first-time
user. dev_warn would work for me.
> + max_vqs = min_t(u32, max_vqs, 2 * add_config->max_vq_pairs);
> + }
>
> ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
> name, false);
> if (IS_ERR(ndev))
> return PTR_ERR(ndev);
>
> + ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
> + ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
> + if (!ndev->vqs || !ndev->event_cbs) {
> + err = -ENOMEM;
> + goto err_mtu;
Not a good idea to call mutex_destroy() without calling mutex_init()
first. Introduce another err label before put_device()?
-Siwei
> + }
> ndev->mvdev.max_vqs = max_vqs;
> mvdev = &ndev->mvdev;
> mvdev->mdev = mdev;
> @@ -2676,7 +2684,8 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> mgtdev->mgtdev.ops = &mdev_ops;
> mgtdev->mgtdev.device = mdev->device;
> mgtdev->mgtdev.id_table = id_table;
> - mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
> + mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR) |
> + BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
> mgtdev->madev = madev;
>
> err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
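For reference, the validity checks discussed above can be sketched in plain C: the `x & (x - 1)` test clears the lowest set bit, so it is zero exactly when x is a power of two. Names are illustrative; in the driver the zero case is rejected earlier by the vdpa core.

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the patch's constraints on the requested VQ-pair count:
 * non-zero, a power of two, and no more than half the device's max VQs
 * (half, because each pair consumes two virtqueues). */
static int is_valid_max_vqp(uint16_t pairs, uint16_t max_vqs)
{
	if (pairs == 0)
		return 0;
	if (pairs & (pairs - 1))	/* not a power of two */
		return 0;
	if (pairs > max_vqs / 2)	/* exceeds device capability */
		return 0;
	return 1;
}
```

As Si-Wei and Parav note, when this check fails the user should learn *which* constraint was violated, ideally via an extack message rather than dmesg.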
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH 3/7] vdpa/mlx5: Support configuring max data virtqueue pairs
2021-12-03 7:28 ` Si-Wei Liu
@ 2021-12-03 7:31 ` Parav Pandit via Virtualization
2021-12-03 7:40 ` Si-Wei Liu
[not found] ` <20211207081946.GE15767@mtl-vdi-166.wap.labs.mlnx>
1 sibling, 1 reply; 17+ messages in thread
From: Parav Pandit via Virtualization @ 2021-12-03 7:31 UTC (permalink / raw)
To: Si-Wei Liu, Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma
> From: Si-Wei Liu <si-wei.liu@oracle.com>
> Sent: Friday, December 3, 2021 12:58 PM
>
> On 12/1/2021 11:57 AM, Eli Cohen wrote:
> > Check whether the max number of data virtqueue pairs was provided when
> > a adding a new device and verify the new value does not exceed device
> > capabilities.
> >
> > In addition, change the arrays holding virtqueue and callback contexts
> > to be dynamically allocated.
> >
> > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > ---
> > drivers/vdpa/mlx5/net/mlx5_vnet.c | 33 ++++++++++++++++++++-----------
> > 1 file changed, 21 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > index b66f41ccbac2..62aba5dbd8fa 100644
> > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > @@ -131,11 +131,6 @@ struct mlx5_vdpa_virtqueue {
> > struct mlx5_vq_restore_info ri;
> > };
> >
> > -/* We will remove this limitation once mlx5_vdpa_alloc_resources()
> > - * provides for driver space allocation
> Is this comment outdated, i.e. driver space allocation in
> mlx5_vdpa_alloc_resources() already provided?
>
> > - */
> > -#define MLX5_MAX_SUPPORTED_VQS 16
> > -
> > static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
> > {
> > if (unlikely(idx > mvdev->max_idx)) @@ -148,8 +143,8 @@ struct
> > mlx5_vdpa_net {
> > struct mlx5_vdpa_dev mvdev;
> > struct mlx5_vdpa_net_resources res;
> > struct virtio_net_config config;
> > - struct mlx5_vdpa_virtqueue vqs[MLX5_MAX_SUPPORTED_VQS];
> > - struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
> > + struct mlx5_vdpa_virtqueue *vqs;
> > + struct vdpa_callback *event_cbs;
> >
> > /* Serialize vq resources creation and destruction. This is required
> > * since memory map might change and we need to destroy and create
> > @@ -1218,7 +1213,7 @@ static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> > {
> > int i;
> >
> > - for (i = 0; i < MLX5_MAX_SUPPORTED_VQS; i++)
> > + for (i = 0; i < ndev->mvdev.max_vqs; i++)
> > suspend_vq(ndev, &ndev->vqs[i]);
> > }
> >
> > @@ -1245,7 +1240,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
> > int i, j;
> > int err;
> >
> > - max_rqt = min_t(int, MLX5_MAX_SUPPORTED_VQS / 2,
> > + max_rqt = min_t(int, ndev->mvdev.max_vqs / 2,
> >   			1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
> > if (max_rqt < 1)
> > return -EOPNOTSUPP;
> > @@ -2235,7 +2230,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
> > clear_vqs_ready(ndev);
> > mlx5_vdpa_destroy_mr(&ndev->mvdev);
> > ndev->mvdev.status = 0;
> > - memset(ndev->event_cbs, 0, sizeof(ndev->event_cbs));
> > +	memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
> > ndev->mvdev.actual_features = 0;
> > ++mvdev->generation;
> >   	if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> > @@ -2308,6 +2303,8 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
> > }
> > mlx5_vdpa_free_resources(&ndev->mvdev);
> > mutex_destroy(&ndev->reslock);
> > + kfree(ndev->event_cbs);
> > + kfree(ndev->vqs);
> > }
> >
> >   static struct vdpa_notification_area mlx5_get_vq_notification(struct vdpa_device *vdev, u16 idx)
> > @@ -2547,13 +2544,24 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> >
> > /* we save one virtqueue for control virtqueue should we require it */
> >   	max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
> > -	max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
> > +	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> > +		if (add_config->max_vq_pairs & (add_config->max_vq_pairs - 1) ||
> > +		    add_config->max_vq_pairs > max_vqs / 2)
> > +			return -EINVAL;
> It'd be best to describe the failure cause here; the power-of-2 limitation is
> known to me, but it's not friendly enough for a first-time user.
> dev_warn would work for me.
User commands shouldn't be creating unwanted dmesg messages.
dev_warn_once() is better.
Instead, an extack error message should be returned; it reaches the user who issues the iproute2 command and informs them better.
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 4/7] vdpa/mlx5: Report queried max supported virtqueues
[not found] ` <20211201195724.17503-5-elic@nvidia.com>
2021-12-03 2:31 ` [PATCH 4/7] vdpa/mlx5: Report queried max supported virtqueues Jason Wang
@ 2021-12-03 7:32 ` Si-Wei Liu
1 sibling, 0 replies; 17+ messages in thread
From: Si-Wei Liu @ 2021-12-03 7:32 UTC (permalink / raw)
To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma
On 12/1/2021 11:57 AM, Eli Cohen wrote:
> Report back in get_vq_num_max() the correct number of virtqueues queried
> from the device instead of using a constant value.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 62aba5dbd8fa..336eda3dcf41 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1992,10 +1992,10 @@ static void mlx5_vdpa_set_config_cb(struct vdpa_device *vdev, struct vdpa_callba
> ndev->config_cb = *cb;
> }
>
> -#define MLX5_VDPA_MAX_VQ_ENTRIES 256
> static u16 mlx5_vdpa_get_vq_num_max(struct vdpa_device *vdev)
> {
> - return MLX5_VDPA_MAX_VQ_ENTRIES;
> + struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> + return MLX5_CAP_DEV_VDPA_EMULATION(mvdev->mdev, max_num_virtio_queues);
This denotes the size, i.e. the number of descriptor entries in a virtqueue,
rather than the total number of virtqueues. You need to add another op
(vdpa_net specific) to get this info.
-Siwei
> }
>
> static u32 mlx5_vdpa_get_device_id(struct vdpa_device *vdev)
* Re: [PATCH 3/7] vdpa/mlx5: Support configuring max data virtqueue pairs
2021-12-03 7:31 ` Parav Pandit via Virtualization
@ 2021-12-03 7:40 ` Si-Wei Liu
0 siblings, 0 replies; 17+ messages in thread
From: Si-Wei Liu @ 2021-12-03 7:40 UTC (permalink / raw)
To: Parav Pandit, Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma
On 12/2/2021 11:31 PM, Parav Pandit wrote:
>
>> From: Si-Wei Liu <si-wei.liu@oracle.com>
>> Sent: Friday, December 3, 2021 12:58 PM
>>
>> On 12/1/2021 11:57 AM, Eli Cohen wrote:
>>> Check whether the max number of data virtqueue pairs was provided when
>>> adding a new device and verify the new value does not exceed device
>>> capabilities.
>>>
>>> In addition, change the arrays holding virtqueue and callback contexts
>>> to be dynamically allocated.
>>>
>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>> ---
>>> drivers/vdpa/mlx5/net/mlx5_vnet.c | 33 ++++++++++++++++++++-----------
>>> 1 file changed, 21 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> index b66f41ccbac2..62aba5dbd8fa 100644
>>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> @@ -131,11 +131,6 @@ struct mlx5_vdpa_virtqueue {
>>> struct mlx5_vq_restore_info ri;
>>> };
>>>
>>> -/* We will remove this limitation once mlx5_vdpa_alloc_resources()
>>> - * provides for driver space allocation
>> Is this comment outdated, i.e. is the driver space allocation in
>> mlx5_vdpa_alloc_resources() already provided?
>>
>>> - */
>>> -#define MLX5_MAX_SUPPORTED_VQS 16
>>> -
>>> static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
>>> {
>>> if (unlikely(idx > mvdev->max_idx)) @@ -148,8 +143,8 @@ struct
>>> mlx5_vdpa_net {
>>> struct mlx5_vdpa_dev mvdev;
>>> struct mlx5_vdpa_net_resources res;
>>> struct virtio_net_config config;
>>> - struct mlx5_vdpa_virtqueue vqs[MLX5_MAX_SUPPORTED_VQS];
>>> - struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
>>> + struct mlx5_vdpa_virtqueue *vqs;
>>> + struct vdpa_callback *event_cbs;
>>>
>>> /* Serialize vq resources creation and destruction. This is required
>>> * since memory map might change and we need to destroy and create
>>> @@ -1218,7 +1213,7 @@ static void suspend_vqs(struct mlx5_vdpa_net
>> *ndev)
>>> {
>>> int i;
>>>
>>> - for (i = 0; i < MLX5_MAX_SUPPORTED_VQS; i++)
>>> + for (i = 0; i < ndev->mvdev.max_vqs; i++)
>>> suspend_vq(ndev, &ndev->vqs[i]);
>>> }
>>>
>>> @@ -1245,7 +1240,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
>>> int i, j;
>>> int err;
>>>
>>> - max_rqt = min_t(int, MLX5_MAX_SUPPORTED_VQS / 2,
>>> + max_rqt = min_t(int, ndev->mvdev.max_vqs / 2,
>>> 1 << MLX5_CAP_GEN(ndev->mvdev.mdev,
>> log_max_rqt_size));
>>> if (max_rqt < 1)
>>> return -EOPNOTSUPP;
>>> @@ -2235,7 +2230,7 @@ static int mlx5_vdpa_reset(struct vdpa_device
>> *vdev)
>>> clear_vqs_ready(ndev);
>>> mlx5_vdpa_destroy_mr(&ndev->mvdev);
>>> ndev->mvdev.status = 0;
>>> - memset(ndev->event_cbs, 0, sizeof(ndev->event_cbs));
>>> + memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) *
>>> +(mvdev->max_vqs + 1));
>>> ndev->mvdev.actual_features = 0;
>>> ++mvdev->generation;
>>> if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) { @@ -2308,6
>> +2303,8 @@
>>> static void mlx5_vdpa_free(struct vdpa_device *vdev)
>>> }
>>> mlx5_vdpa_free_resources(&ndev->mvdev);
>>> mutex_destroy(&ndev->reslock);
>>> + kfree(ndev->event_cbs);
>>> + kfree(ndev->vqs);
>>> }
>>>
>>> static struct vdpa_notification_area mlx5_get_vq_notification(struct
>>> vdpa_device *vdev, u16 idx) @@ -2547,13 +2544,24 @@ static int
>>> mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>>>
>>> /* we save one virtqueue for control virtqueue should we require it */
>>> max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev,
>> max_num_virtio_queues);
>>> - max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
>>> + if (add_config->mask &
>> BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
>>> + if (add_config->max_vq_pairs & (add_config->max_vq_pairs -
>> 1) ||
>>> + add_config->max_vq_pairs > max_vqs / 2)
>>> + return -EINVAL;
>> It'd be best to describe the failure cause here; the power-of-2 limitation is
>> known to me, but it's not friendly enough for a first-time user.
>> dev_warn would work for me.
> User commands shouldn't be creating unwanted dmesg messages.
> dev_warn_once() is better.
I said it works for me because there's already a dev_warn() instance in
the same function.
> Instead, an extack error message should be returned; it reaches the user who issues the iproute2 command and informs them better.
Yep, extack is preferable to dev_warn_once and dev_warn.
-Siwei
* Re: [PATCH 3/7] vdpa/mlx5: Support configuring max data virtqueue pairs
[not found] ` <20211207081946.GE15767@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-07 23:15 ` Si-Wei Liu
2021-12-08 3:29 ` Jason Wang
0 siblings, 1 reply; 17+ messages in thread
From: Si-Wei Liu @ 2021-12-07 23:15 UTC (permalink / raw)
To: Eli Cohen; +Cc: lvivier, mst, virtualization, eperezma
On 12/7/2021 12:19 AM, Eli Cohen wrote:
> On Thu, Dec 02, 2021 at 11:28:12PM -0800, Si-Wei Liu wrote:
>>
>> On 12/1/2021 11:57 AM, Eli Cohen wrote:
>>> Check whether the max number of data virtqueue pairs was provided when
>>> adding a new device and verify the new value does not exceed device
>>> capabilities.
>>>
>>> In addition, change the arrays holding virtqueue and callback contexts
>>> to be dynamically allocated.
>>>
>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>> ---
>>> drivers/vdpa/mlx5/net/mlx5_vnet.c | 33 ++++++++++++++++++++-----------
>>> 1 file changed, 21 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> index b66f41ccbac2..62aba5dbd8fa 100644
>>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> @@ -131,11 +131,6 @@ struct mlx5_vdpa_virtqueue {
>>> struct mlx5_vq_restore_info ri;
>>> };
>>> -/* We will remove this limitation once mlx5_vdpa_alloc_resources()
>>> - * provides for driver space allocation
>> Is this comment outdated, i.e. is the driver space allocation in
>> mlx5_vdpa_alloc_resources() already provided?
>>
> Not sure I follow. The comment was removed in this patch since we no
> longer depend on MLX5_MAX_SUPPORTED_VQS and rather use dynamic
> allocations.
>>> - */
>>> -#define MLX5_MAX_SUPPORTED_VQS 16
>>> -
>>> static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
>>> {
>>> if (unlikely(idx > mvdev->max_idx))
>>> @@ -148,8 +143,8 @@ struct mlx5_vdpa_net {
>>> struct mlx5_vdpa_dev mvdev;
>>> struct mlx5_vdpa_net_resources res;
>>> struct virtio_net_config config;
>>> - struct mlx5_vdpa_virtqueue vqs[MLX5_MAX_SUPPORTED_VQS];
>>> - struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
>>> + struct mlx5_vdpa_virtqueue *vqs;
>>> + struct vdpa_callback *event_cbs;
>>> /* Serialize vq resources creation and destruction. This is required
>>> * since memory map might change and we need to destroy and create
>>> @@ -1218,7 +1213,7 @@ static void suspend_vqs(struct mlx5_vdpa_net *ndev)
>>> {
>>> int i;
>>> - for (i = 0; i < MLX5_MAX_SUPPORTED_VQS; i++)
>>> + for (i = 0; i < ndev->mvdev.max_vqs; i++)
>>> suspend_vq(ndev, &ndev->vqs[i]);
>>> }
>>> @@ -1245,7 +1240,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
>>> int i, j;
>>> int err;
>>> - max_rqt = min_t(int, MLX5_MAX_SUPPORTED_VQS / 2,
>>> + max_rqt = min_t(int, ndev->mvdev.max_vqs / 2,
>>> 1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
>>> if (max_rqt < 1)
>>> return -EOPNOTSUPP;
>>> @@ -2235,7 +2230,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
>>> clear_vqs_ready(ndev);
>>> mlx5_vdpa_destroy_mr(&ndev->mvdev);
>>> ndev->mvdev.status = 0;
>>> - memset(ndev->event_cbs, 0, sizeof(ndev->event_cbs));
>>> + memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
>>> ndev->mvdev.actual_features = 0;
>>> ++mvdev->generation;
>>> if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
>>> @@ -2308,6 +2303,8 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
>>> }
>>> mlx5_vdpa_free_resources(&ndev->mvdev);
>>> mutex_destroy(&ndev->reslock);
>>> + kfree(ndev->event_cbs);
>>> + kfree(ndev->vqs);
>>> }
>>> static struct vdpa_notification_area mlx5_get_vq_notification(struct vdpa_device *vdev, u16 idx)
>>> @@ -2547,13 +2544,24 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>>> /* we save one virtqueue for control virtqueue should we require it */
>>> max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
>>> - max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
>>> + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
>>> + if (add_config->max_vq_pairs & (add_config->max_vq_pairs - 1) ||
>>> + add_config->max_vq_pairs > max_vqs / 2)
>>> + return -EINVAL;
>> It'd be best to describe the failure cause here; the power-of-2
>> limitation is known to me, but it's not friendly enough for a first-time user.
>> dev_warn would work for me.
> I could add a warning, but if some test script does this plenty of times
> it will clutter dmesg. You do fail if you provide a non-power-of-two
> value.
You could pick dev_warn_once() and fix the other similar dev_warn() usage in
the same function. But I do wonder why your firmware has this power-of-2
limitation on the number of data queues. Are you going to remove this
limitation by working around it in the driver? I thought only the queue size
has such a power-of-2 limitation.
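For reference, the check in the quoted hunk uses the standard power-of-two test (`n & (n - 1)` clears the lowest set bit). A small userspace sketch of the validation being discussed — the function names and the capability value are illustrative, not the driver's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A value is a power of two iff it is non-zero and clearing its
 * lowest set bit (n & (n - 1)) leaves zero. */
static bool is_power_of_2(uint16_t n)
{
	return n && !(n & (n - 1));
}

/* Mirrors the validation in the quoted hunk: max_vq_pairs must be a
 * power of two and must not exceed half the device's virtqueue cap,
 * since one queue pair consumes two virtqueues. */
static int validate_max_vq_pairs(uint16_t max_vq_pairs, uint16_t max_vqs)
{
	if (!is_power_of_2(max_vq_pairs) || max_vq_pairs > max_vqs / 2)
		return -1;	/* stands in for -EINVAL */
	return 0;
}
```

With a hypothetical device cap of 16 virtqueues, 1, 2, 4, or 8 pairs pass, while 3 (not a power of two) and 16 (more than max_vqs / 2) are rejected.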
Thanks,
-Siwei
>>> + max_vqs = min_t(u32, max_vqs, 2 * add_config->max_vq_pairs);
>>> + }
>>> ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
>>> name, false);
>>> if (IS_ERR(ndev))
>>> return PTR_ERR(ndev);
>>> + ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
>>> + ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
>>> + if (!ndev->vqs || !ndev->event_cbs) {
>>> + err = -ENOMEM;
>>> + goto err_mtu;
>> Not a good idea to call mutex_destroy() without having called mutex_init() first.
>> Introduce another err label before put_device()?
> Thanks, will fix.
>> -Siwei
>>
>>> + }
>>> ndev->mvdev.max_vqs = max_vqs;
>>> mvdev = &ndev->mvdev;
>>> mvdev->mdev = mdev;
>>> @@ -2676,7 +2684,8 @@ static int mlx5v_probe(struct auxiliary_device *adev,
>>> mgtdev->mgtdev.ops = &mdev_ops;
>>> mgtdev->mgtdev.device = mdev->device;
>>> mgtdev->mgtdev.id_table = id_table;
>>> - mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
>>> + mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR) |
>>> + BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
>>> mgtdev->madev = madev;
>>> err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
* Re: [PATCH 5/7] vdpa: Add support for querying control virtqueue index
[not found] ` <20211207075732.GC15767@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-08 3:25 ` Jason Wang
[not found] ` <20211208060721.GA59227@mtl-vdi-166.wap.labs.mlnx>
0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2021-12-08 3:25 UTC (permalink / raw)
To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu
On Tue, Dec 7, 2021 at 3:57 PM Eli Cohen <elic@nvidia.com> wrote:
>
> On Fri, Dec 03, 2021 at 10:35:30AM +0800, Jason Wang wrote:
> > On Thu, Dec 2, 2021 at 3:58 AM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > Add netlink attribute and callback function to query the control VQ
> > > index of a device.
> >
> > It's better to explain the motivation for this. Actually, we can deduce
> > it from max_virtqueue_pairs if I am not wrong.
> Right, we can deduce it. But I think let's let the kernel deduce it.
> e.g., we can avoid the new get_ctrl_vq_idx() callback, but still return
> the value to userspace with VDPA_ATTR_DEV_CTRL_VQ_IDX.
>
> > So it means the
> > value varies depending on if VIRTIO_NET_F_MQ
> I guess you meant VIRTIO_NET_F_CTRL_VQ
> > is negotiated which can
> > be changed after a query. This may probably confuse the userspace
> > sometime.
>
> Not sure I follow you on this. The CVQ index is dictated after the call
> to _vdpa_register_device() and cannot change. Whether or not there is a CVQ
> depends on VIRTIO_NET_F_CTRL_VQ, but the index doesn't change.
Looks not. According to the spec, consider a device with 2
queue pairs plus a cvq.
When MQ is negotiated, cvq index is 4.
When MQ is not negotiated, cvq index is 2.
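The rule above (virtio 1.1 sec. 5.1.2) can be sketched as a small helper; this is illustrative only, not code from the patch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per virtio 1.1 sec. 5.1.2: data virtqueues occupy indices 0..2N-1
 * and the control VQ comes right after them, where N is
 * max_virtqueue_pairs when VIRTIO_NET_F_MQ is negotiated and 1
 * otherwise. */
static uint16_t ctrl_vq_idx(bool mq_negotiated, uint16_t max_vq_pairs)
{
	uint16_t n = mq_negotiated ? max_vq_pairs : 1;
	return 2 * n;
}
```

For the 2-queue-pair device in the example, this yields index 4 with MQ negotiated and index 2 without — which is why the index reported to userspace can vary with feature negotiation.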
Thanks
>
> >
> > Thanks
> >
> > >
> > > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > > ---
> > > drivers/vdpa/vdpa.c | 18 ++++++++++++++++++
> > > include/linux/vdpa.h | 1 +
> > > include/uapi/linux/vdpa.h | 1 +
> > > 3 files changed, 20 insertions(+)
> > >
> > > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > > index f06f949d5fa1..ca3ab0f46188 100644
> > > --- a/drivers/vdpa/vdpa.c
> > > +++ b/drivers/vdpa/vdpa.c
> > > @@ -712,6 +712,20 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
> > > return msg->len;
> > > }
> > >
> > > +static int vdpa_dev_net_ctrl_vq_fill(struct vdpa_device *vdev,
> > > + struct sk_buff *msg, u64 features)
> > > +{
> > > + u16 val_u16 = 0;
> > > +
> > > + if (features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ) &&
> > > + vdev->config->get_ctrl_vq_idx) {
> > > + val_u16 = vdev->config->get_ctrl_vq_idx(vdev);
> > > + return nla_put_u16(msg, VDPA_ATTR_DEV_CTRL_VQ_IDX, val_u16);
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
> > > struct sk_buff *msg, u64 features,
> > > const struct virtio_net_config *config)
> > > @@ -730,6 +744,7 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > > struct virtio_net_config config = {};
> > > u64 features;
> > > u16 val_u16;
> > > + int err;
> > >
> > > vdpa_get_config(vdev, 0, &config, sizeof(config));
> > >
> > > @@ -746,6 +761,9 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > > return -EMSGSIZE;
> > >
> > > features = vdev->config->get_features(vdev);
> > > + err = vdpa_dev_net_ctrl_vq_fill(vdev, msg, features);
> > > + if (err)
> > > + return err;
> > >
> > > return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
> > > }
> > > diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> > > index 820621c59365..fca9bfeed9ba 100644
> > > --- a/include/linux/vdpa.h
> > > +++ b/include/linux/vdpa.h
> > > @@ -274,6 +274,7 @@ struct vdpa_config_ops {
> > > (*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
> > > /* vq irq is not expected to be changed once DRIVER_OK is set */
> > > int (*get_vq_irq)(struct vdpa_device *vdev, u16 idx);
> > > + u16 (*get_ctrl_vq_idx)(struct vdpa_device *vdev);
> > >
> > > /* Device ops */
> > > u32 (*get_vq_align)(struct vdpa_device *vdev);
> > > diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> > > index a252f06f9dfd..2e3a7f89f42d 100644
> > > --- a/include/uapi/linux/vdpa.h
> > > +++ b/include/uapi/linux/vdpa.h
> > > @@ -34,6 +34,7 @@ enum vdpa_attr {
> > > VDPA_ATTR_DEV_MAX_VQS, /* u32 */
> > > VDPA_ATTR_DEV_MAX_VQ_SIZE, /* u16 */
> > > VDPA_ATTR_DEV_MIN_VQ_SIZE, /* u16 */
> > > + VDPA_ATTR_DEV_CTRL_VQ_IDX, /* u16 */
> > >
> > > VDPA_ATTR_DEV_NET_CFG_MACADDR, /* binary */
> > > VDPA_ATTR_DEV_NET_STATUS, /* u8 */
> > > --
> > > 2.33.1
> > >
> >
>
* Re: [PATCH 3/7] vdpa/mlx5: Support configuring max data virtqueue pairs
2021-12-07 23:15 ` Si-Wei Liu
@ 2021-12-08 3:29 ` Jason Wang
0 siblings, 0 replies; 17+ messages in thread
From: Jason Wang @ 2021-12-08 3:29 UTC (permalink / raw)
To: Si-Wei Liu; +Cc: Laurent Vivier, mst, virtualization, eperezma, Eli Cohen
On Wed, Dec 8, 2021 at 7:16 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
>
>
> On 12/7/2021 12:19 AM, Eli Cohen wrote:
> > On Thu, Dec 02, 2021 at 11:28:12PM -0800, Si-Wei Liu wrote:
> >>
> >> On 12/1/2021 11:57 AM, Eli Cohen wrote:
> >>> Check whether the max number of data virtqueue pairs was provided when
> >>> adding a new device and verify the new value does not exceed device
> >>> capabilities.
> >>>
> >>> In addition, change the arrays holding virtqueue and callback contexts
> >>> to be dynamically allocated.
> >>>
> >>> Signed-off-by: Eli Cohen <elic@nvidia.com>
> >>> ---
> >>> drivers/vdpa/mlx5/net/mlx5_vnet.c | 33 ++++++++++++++++++++-----------
> >>> 1 file changed, 21 insertions(+), 12 deletions(-)
> >>>
> >>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> >>> index b66f41ccbac2..62aba5dbd8fa 100644
> >>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> >>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> >>> @@ -131,11 +131,6 @@ struct mlx5_vdpa_virtqueue {
> >>> struct mlx5_vq_restore_info ri;
> >>> };
> >>> -/* We will remove this limitation once mlx5_vdpa_alloc_resources()
> >>> - * provides for driver space allocation
> >> Is this comment outdated, i.e. is the driver space allocation in
> >> mlx5_vdpa_alloc_resources() already provided?
> >>
> > Not sure I follow. The comment was removed in this patch since we no
> > longer depend on MLX5_MAX_SUPPORTED_VQS and rather use dynamic
> > allocations.
> >>> - */
> >>> -#define MLX5_MAX_SUPPORTED_VQS 16
> >>> -
> >>> static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
> >>> {
> >>> if (unlikely(idx > mvdev->max_idx))
> >>> @@ -148,8 +143,8 @@ struct mlx5_vdpa_net {
> >>> struct mlx5_vdpa_dev mvdev;
> >>> struct mlx5_vdpa_net_resources res;
> >>> struct virtio_net_config config;
> >>> - struct mlx5_vdpa_virtqueue vqs[MLX5_MAX_SUPPORTED_VQS];
> >>> - struct vdpa_callback event_cbs[MLX5_MAX_SUPPORTED_VQS + 1];
> >>> + struct mlx5_vdpa_virtqueue *vqs;
> >>> + struct vdpa_callback *event_cbs;
> >>> /* Serialize vq resources creation and destruction. This is required
> >>> * since memory map might change and we need to destroy and create
> >>> @@ -1218,7 +1213,7 @@ static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> >>> {
> >>> int i;
> >>> - for (i = 0; i < MLX5_MAX_SUPPORTED_VQS; i++)
> >>> + for (i = 0; i < ndev->mvdev.max_vqs; i++)
> >>> suspend_vq(ndev, &ndev->vqs[i]);
> >>> }
> >>> @@ -1245,7 +1240,7 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
> >>> int i, j;
> >>> int err;
> >>> - max_rqt = min_t(int, MLX5_MAX_SUPPORTED_VQS / 2,
> >>> + max_rqt = min_t(int, ndev->mvdev.max_vqs / 2,
> >>> 1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
> >>> if (max_rqt < 1)
> >>> return -EOPNOTSUPP;
> >>> @@ -2235,7 +2230,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
> >>> clear_vqs_ready(ndev);
> >>> mlx5_vdpa_destroy_mr(&ndev->mvdev);
> >>> ndev->mvdev.status = 0;
> >>> - memset(ndev->event_cbs, 0, sizeof(ndev->event_cbs));
> >>> + memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
> >>> ndev->mvdev.actual_features = 0;
> >>> ++mvdev->generation;
> >>> if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> >>> @@ -2308,6 +2303,8 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
> >>> }
> >>> mlx5_vdpa_free_resources(&ndev->mvdev);
> >>> mutex_destroy(&ndev->reslock);
> >>> + kfree(ndev->event_cbs);
> >>> + kfree(ndev->vqs);
> >>> }
> >>> static struct vdpa_notification_area mlx5_get_vq_notification(struct vdpa_device *vdev, u16 idx)
> >>> @@ -2547,13 +2544,24 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> >>> /* we save one virtqueue for control virtqueue should we require it */
> >>> max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
> >>> - max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
> >>> + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> >>> + if (add_config->max_vq_pairs & (add_config->max_vq_pairs - 1) ||
> >>> + add_config->max_vq_pairs > max_vqs / 2)
> >>> + return -EINVAL;
> >> It'd be best to describe the failure cause here; the power-of-2
> >> limitation is known to me, but it's not friendly enough for a first-time user.
> >> dev_warn would work for me.
> > I could add a warning, but if some test script does this plenty of times
> > it will clutter dmesg. You do fail if you provide a non-power-of-two
> > value.
> You could pick dev_warn_once() and fix the other similar dev_warn() usage in
> the same function. But I do wonder why your firmware has this power-of-2
> limitation on the number of data queues.
It looks like a defect.
> Are you going to remove this
> limitation by working around it in the driver? I thought only the queue size
> has such a power-of-2 limitation.
Only for split virtqueue, we don't have this for packed virtqueue.
Thanks
>
> Thanks,
> -Siwei
> >>> + max_vqs = min_t(u32, max_vqs, 2 * add_config->max_vq_pairs);
> >>> + }
> >>> ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
> >>> name, false);
> >>> if (IS_ERR(ndev))
> >>> return PTR_ERR(ndev);
> >>> + ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
> >>> + ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
> >>> + if (!ndev->vqs || !ndev->event_cbs) {
> >>> + err = -ENOMEM;
> >>> + goto err_mtu;
> >> Not a good idea to call mutex_destroy() without having called mutex_init() first.
> >> Introduce another err label before put_device()?
> > Thanks, will fix.
> >> -Siwei
> >>
> >>> + }
> >>> ndev->mvdev.max_vqs = max_vqs;
> >>> mvdev = &ndev->mvdev;
> >>> mvdev->mdev = mdev;
> >>> @@ -2676,7 +2684,8 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> >>> mgtdev->mgtdev.ops = &mdev_ops;
> >>> mgtdev->mgtdev.device = mdev->device;
> >>> mgtdev->mgtdev.id_table = id_table;
> >>> - mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
> >>> + mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR) |
> >>> + BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
> >>> mgtdev->madev = madev;
> >>> err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
>
* Re: [PATCH 5/7] vdpa: Add support for querying control virtqueue index
[not found] ` <20211208060721.GA59227@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-08 6:29 ` Jason Wang
0 siblings, 0 replies; 17+ messages in thread
From: Jason Wang @ 2021-12-08 6:29 UTC (permalink / raw)
To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu
On Wed, Dec 8, 2021 at 2:07 PM Eli Cohen <elic@nvidia.com> wrote:
>
> On Wed, Dec 08, 2021 at 11:25:01AM +0800, Jason Wang wrote:
> > On Tue, Dec 7, 2021 at 3:57 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > On Fri, Dec 03, 2021 at 10:35:30AM +0800, Jason Wang wrote:
> > > > On Thu, Dec 2, 2021 at 3:58 AM Eli Cohen <elic@nvidia.com> wrote:
> > > > >
> > > > > Add netlink attribute and callback function to query the control VQ
> > > > > index of a device.
> > > >
> > > > It's better to explain the motivation for this. Actually, we can deduce
> > > > it from max_virtqueue_pairs if I am not wrong.
> > > Right, we can deduce it. But I think let's let the kernel deduce it.
> > > e.g., we can avoid the new get_ctrl_vq_idx() callback, but still return
> > > the value to userspace with VDPA_ATTR_DEV_CTRL_VQ_IDX.
> > >
> > > > So it means the
> > > > value varies depending on if VIRTIO_NET_F_MQ
> > > I guess you meant VIRTIO_NET_F_CTRL_VQ
> > > > is negotiated which can
> > > > be changed after a query. This may probably confuse the userspace
> > > > sometime.
> > >
> > > Not sure I follow you on this. The CVQ index is dictated after the call
> > > to _vdpa_register_device() and cannot change. Whether or not there is a CVQ
> > > depends on VIRTIO_NET_F_CTRL_VQ, but the index doesn't change.
> >
> > Looks not. According to the spec, consider a device with 2
> > queue pairs plus a cvq.
> >
> > When MQ is negotiated, cvq index is 4.
> > When MQ is not negotiated, cvq index is 2.
> >
>
> When you say MQ is negotiated, do you mean both ways or just offered by the
> upstream driver?
I meant the MQ feature is enabled by the driver and accepted by the device.
>
> I think I saw different behavior and I am going to check again.
>
> BTW, could you send a reference to where the spec refers to the index of
> the control virtqueue?
You may refer to spec chapter 5.1.2:
https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-1960002
Thanks
>
> > Thanks
> >
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > > > > ---
> > > > > drivers/vdpa/vdpa.c | 18 ++++++++++++++++++
> > > > > include/linux/vdpa.h | 1 +
> > > > > include/uapi/linux/vdpa.h | 1 +
> > > > > 3 files changed, 20 insertions(+)
> > > > >
> > > > > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > > > > index f06f949d5fa1..ca3ab0f46188 100644
> > > > > --- a/drivers/vdpa/vdpa.c
> > > > > +++ b/drivers/vdpa/vdpa.c
> > > > > @@ -712,6 +712,20 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
> > > > > return msg->len;
> > > > > }
> > > > >
> > > > > +static int vdpa_dev_net_ctrl_vq_fill(struct vdpa_device *vdev,
> > > > > + struct sk_buff *msg, u64 features)
> > > > > +{
> > > > > + u16 val_u16 = 0;
> > > > > +
> > > > > + if (features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ) &&
> > > > > + vdev->config->get_ctrl_vq_idx) {
> > > > > + val_u16 = vdev->config->get_ctrl_vq_idx(vdev);
> > > > > + return nla_put_u16(msg, VDPA_ATTR_DEV_CTRL_VQ_IDX, val_u16);
> > > > > + }
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
> > > > > struct sk_buff *msg, u64 features,
> > > > > const struct virtio_net_config *config)
> > > > > @@ -730,6 +744,7 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > > > > struct virtio_net_config config = {};
> > > > > u64 features;
> > > > > u16 val_u16;
> > > > > + int err;
> > > > >
> > > > > vdpa_get_config(vdev, 0, &config, sizeof(config));
> > > > >
> > > > > @@ -746,6 +761,9 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > > > > return -EMSGSIZE;
> > > > >
> > > > > features = vdev->config->get_features(vdev);
> > > > > + err = vdpa_dev_net_ctrl_vq_fill(vdev, msg, features);
> > > > > + if (err)
> > > > > + return err;
> > > > >
> > > > > return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
> > > > > }
> > > > > diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> > > > > index 820621c59365..fca9bfeed9ba 100644
> > > > > --- a/include/linux/vdpa.h
> > > > > +++ b/include/linux/vdpa.h
> > > > > @@ -274,6 +274,7 @@ struct vdpa_config_ops {
> > > > > (*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
> > > > > /* vq irq is not expected to be changed once DRIVER_OK is set */
> > > > > int (*get_vq_irq)(struct vdpa_device *vdev, u16 idx);
> > > > > + u16 (*get_ctrl_vq_idx)(struct vdpa_device *vdev);
> > > > >
> > > > > /* Device ops */
> > > > > u32 (*get_vq_align)(struct vdpa_device *vdev);
> > > > > diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> > > > > index a252f06f9dfd..2e3a7f89f42d 100644
> > > > > --- a/include/uapi/linux/vdpa.h
> > > > > +++ b/include/uapi/linux/vdpa.h
> > > > > @@ -34,6 +34,7 @@ enum vdpa_attr {
> > > > > VDPA_ATTR_DEV_MAX_VQS, /* u32 */
> > > > > VDPA_ATTR_DEV_MAX_VQ_SIZE, /* u16 */
> > > > > VDPA_ATTR_DEV_MIN_VQ_SIZE, /* u16 */
> > > > > + VDPA_ATTR_DEV_CTRL_VQ_IDX, /* u16 */
> > > > >
> > > > > VDPA_ATTR_DEV_NET_CFG_MACADDR, /* binary */
> > > > > VDPA_ATTR_DEV_NET_STATUS, /* u8 */
> > > > > --
> > > > > 2.33.1
> > > > >
> > > >
> > >
> >
>