* Re: [PATCH v2 07/10] vdpa: Add support for returning device configuration information
       [not found] ` <20211213144258.179984-8-elic@nvidia.com>
@ 2021-12-13 23:49   ` Si-Wei Liu
  2021-12-14  4:05   ` Jason Wang
  1 sibling, 0 replies; 19+ messages in thread
From: Si-Wei Liu @ 2021-12-13 23:49 UTC (permalink / raw)
  To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma



On 12/13/2021 6:42 AM, Eli Cohen wrote:
> Add netlink attribute to store flags indicating current state of the
> device.
> In addition, introduce a flag to indicate whether control virtqueue is
> used.
>
> This indication can be retrieved by:
>
> vdpa dev config show vdpa-a
> vdpa-a: mac 00:00:00:00:88:88 link up link_announce false max_vq_pairs 1
> 	mtu 1500 ctrl_vq yes
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
>
> ----
> V1 -> V2
> Patch was changed to return only an indication of ctrl VQ
> ---
>   drivers/vdpa/vdpa.c       | 17 +++++++++++++++++
>   include/uapi/linux/vdpa.h |  8 ++++++++
>   2 files changed, 25 insertions(+)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 7b7bef7673b4..130a8d4aeaed 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -787,6 +787,19 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
>   	return msg->len;
>   }
>   
> +static int vdpa_dev_net_ctrl_vq_fill(struct vdpa_device *vdev,
> +				     struct sk_buff *msg,
> +				     struct virtio_net_config *config,
> +				     u64 features)
> +{
> +	if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
> +		return 0;
> +
> +	/* currently the only flag can be returned */
> +	return nla_put_u64_64bit(msg, VDPA_ATTR_DEV_FLAGS,
> +				 BIT_ULL(VDPA_DEV_ATTR_CVQ), VDPA_ATTR_PAD);
> +}
> +
>   static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
>   				       struct sk_buff *msg, u64 features,
>   				       const struct virtio_net_config *config)
> @@ -805,6 +818,7 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
>   	struct virtio_net_config config = {};
>   	u64 features;
>   	u16 val_u16;
> +	int err;
>   
>   	vdpa_get_config(vdev, 0, &config, sizeof(config));
>   
> @@ -821,6 +835,9 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
>   		return -EMSGSIZE;
>   
>   	features = vdev->config->get_driver_features(vdev);
> +	err = vdpa_dev_net_ctrl_vq_fill(vdev, msg, &config, features);
> +	if (err)
> +		return err;
>   
>   	return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
>   }
> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> index a252f06f9dfd..23b854e3e5e2 100644
> --- a/include/uapi/linux/vdpa.h
> +++ b/include/uapi/linux/vdpa.h
> @@ -20,9 +20,16 @@ enum vdpa_command {
>   	VDPA_CMD_DEV_CONFIG_GET,	/* can dump */
>   };
>   
> +enum {
> +	VDPA_DEV_ATTR_CVQ,
> +};
> +
>   enum vdpa_attr {
>   	VDPA_ATTR_UNSPEC,
>   
> +	/* Pad attribute for 64b alignment */
> +	VDPA_ATTR_PAD = VDPA_ATTR_UNSPEC,
> +
>   	/* bus name (optional) + dev name together make the parent device handle */
>   	VDPA_ATTR_MGMTDEV_BUS_NAME,		/* string */
>   	VDPA_ATTR_MGMTDEV_DEV_NAME,		/* string */
> @@ -34,6 +41,7 @@ enum vdpa_attr {
>   	VDPA_ATTR_DEV_MAX_VQS,			/* u32 */
>   	VDPA_ATTR_DEV_MAX_VQ_SIZE,		/* u16 */
>   	VDPA_ATTR_DEV_MIN_VQ_SIZE,		/* u16 */
> +	VDPA_ATTR_DEV_FLAGS,			/* u64 */
Adding a new attr here would break existing userspace compiled against 
the older values. It needs to be appended at the end.

-Siwei
>   
>   	VDPA_ATTR_DEV_NET_CFG_MACADDR,		/* binary */
>   	VDPA_ATTR_DEV_NET_STATUS,		/* u8 */

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 09/10] vdpa: Support reporting max device virtqueues
       [not found] ` <20211213144258.179984-10-elic@nvidia.com>
@ 2021-12-13 23:50   ` Si-Wei Liu
  2021-12-14  1:00   ` Si-Wei Liu
  1 sibling, 0 replies; 19+ messages in thread
From: Si-Wei Liu @ 2021-12-13 23:50 UTC (permalink / raw)
  To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma



On 12/13/2021 6:42 AM, Eli Cohen wrote:
> Add max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
> need to fill this value according to the device capabilities.
>
> This value is reported back in a netlink message when showing a device.
>
> Example:
>
> $ vdpa dev show
> vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
> 	max_vqp 3 max_vq_size 256 max_supported_vqs 256
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
>   drivers/vdpa/vdpa.c       | 2 ++
>   include/linux/vdpa.h      | 1 +
>   include/uapi/linux/vdpa.h | 1 +
>   3 files changed, 4 insertions(+)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 130a8d4aeaed..b9dd693146be 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -695,6 +695,8 @@ vdpa_dev_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid, u32 seq
>   		goto msg_err;
>   	if (nla_put_u16(msg, VDPA_ATTR_DEV_MIN_VQ_SIZE, min_vq_size))
>   		goto msg_err;
> +	if (nla_put_u16(msg, VDPA_ATTR_DEV_MAX_DEV_VQS, vdev->mdev->max_supported_vqs))
> +		goto msg_err;
>   
>   	genlmsg_end(msg, hdr);
>   	return 0;
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index 9245dfbf1b08..72ea8ad7ba21 100644
> --- a/include/linux/vdpa.h
> +++ b/include/linux/vdpa.h
> @@ -444,6 +444,7 @@ struct vdpa_mgmt_dev {
>   	const struct vdpa_mgmtdev_ops *ops;
>   	const struct virtio_device_id *id_table;
>   	u64 config_attr_mask;
> +	u16 max_supported_vqs;
>   	struct list_head list;
>   };
>   
> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> index 23b854e3e5e2..1b758d77e228 100644
> --- a/include/uapi/linux/vdpa.h
> +++ b/include/uapi/linux/vdpa.h
> @@ -42,6 +42,7 @@ enum vdpa_attr {
>   	VDPA_ATTR_DEV_MAX_VQ_SIZE,		/* u16 */
>   	VDPA_ATTR_DEV_MIN_VQ_SIZE,		/* u16 */
>   	VDPA_ATTR_DEV_FLAGS,			/* u64 */
> +	VDPA_ATTR_DEV_MAX_DEV_VQS,		/* u16 */

Adding a new attr here would break existing userspace compiled against 
the older values. It needs to be appended at the end.

-Siwei
>   
>   	VDPA_ATTR_DEV_NET_CFG_MACADDR,		/* binary */
>   	VDPA_ATTR_DEV_NET_STATUS,		/* u8 */


* Re: [PATCH v2 05/10] vdpa/mlx5: Fix config_attr_mask assignment
       [not found] ` <20211213144258.179984-6-elic@nvidia.com>
@ 2021-12-13 23:51   ` Si-Wei Liu
  0 siblings, 0 replies; 19+ messages in thread
From: Si-Wei Liu @ 2021-12-13 23:51 UTC (permalink / raw)
  To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma



On 12/13/2021 6:42 AM, Eli Cohen wrote:
> Fix VDPA_ATTR_DEV_NET_CFG_MACADDR assignment to be explicit 64 bit
> assignment.
>
> No issue was seen since the value is well below 64 bit max value.
> Nevertheless it needs to be fixed.
>
> Fixes: a007d940040c ("vdpa/mlx5: Support configuration of MAC")
> Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 98aed4b36c3f..4a2149f70f1e 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2669,7 +2669,7 @@ static int mlx5v_probe(struct auxiliary_device *adev,
>   	mgtdev->mgtdev.ops = &mdev_ops;
>   	mgtdev->mgtdev.device = mdev->device;
>   	mgtdev->mgtdev.id_table = id_table;
> -	mgtdev->mgtdev.config_attr_mask = (1 << VDPA_ATTR_DEV_NET_CFG_MACADDR);
> +	mgtdev->mgtdev.config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
>   	mgtdev->madev = madev;
>   
>   	err = vdpa_mgmtdev_register(&mgtdev->mgtdev);


* Re: [PATCH v2 04/10] vdpa: Allow to configure max data virtqueues
       [not found] ` <20211213144258.179984-5-elic@nvidia.com>
@ 2021-12-13 23:59   ` Si-Wei Liu
  0 siblings, 0 replies; 19+ messages in thread
From: Si-Wei Liu @ 2021-12-13 23:59 UTC (permalink / raw)
  To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma



On 12/13/2021 6:42 AM, Eli Cohen wrote:
> Add netlink support to configure the max virtqueue pairs for a device.
> At least one pair is required. The maximum is dictated by the device.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
>   drivers/vdpa/vdpa.c  | 14 +++++++++++++-
>   include/linux/vdpa.h |  1 +
>   2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 5749cf0a1500..7b7bef7673b4 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -555,7 +555,8 @@ vdpa_nl_cmd_mgmtdev_get_dumpit(struct sk_buff *msg, struct netlink_callback *cb)
>   }
>   
>   #define VDPA_DEV_NET_ATTRS_MASK ((1 << VDPA_ATTR_DEV_NET_CFG_MACADDR) | \
> -				 (1 << VDPA_ATTR_DEV_NET_CFG_MTU))
> +				 (1 << VDPA_ATTR_DEV_NET_CFG_MTU) | \
> +				 (1 << VDPA_ATTR_DEV_NET_CFG_MAX_VQP))
>   
>   static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *info)
>   {
> @@ -581,6 +582,17 @@ static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *i
>   			nla_get_u16(nl_attrs[VDPA_ATTR_DEV_NET_CFG_MTU]);
>   		config.mask |= (1 << VDPA_ATTR_DEV_NET_CFG_MTU);
>   	}
> +	if (nl_attrs[VDPA_ATTR_DEV_NET_CFG_MAX_VQP]) {
> +		config.net.max_vq_pairs =
> +			nla_get_u16(nl_attrs[VDPA_ATTR_DEV_NET_CFG_MAX_VQP]);
> +		if (!config.net.max_vq_pairs) {
> +			NL_SET_ERR_MSG_MOD(info->extack,
> +					   "At least one pair of VQs is required");
> +			err = -EINVAL;
> +			goto err;
Should return -EINVAL directly; vdpa_dev_mutex has not been taken yet.

> +		}
> +		config.mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
> +	}
>   
>   	/* Skip checking capability if user didn't prefer to configure any
>   	 * device networking attributes. It is likely that user might have used
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index 9cc4291a79b3..9245dfbf1b08 100644
> --- a/include/linux/vdpa.h
> +++ b/include/linux/vdpa.h
> @@ -101,6 +101,7 @@ struct vdpa_dev_set_config {
>   	struct {
>   		u8 mac[ETH_ALEN];
>   		u16 mtu;
> +		u16 max_vq_pairs;
>   	} net;
>   	u64 mask;
>   };


* Re: [PATCH v2 09/10] vdpa: Support reporting max device virtqueues
       [not found] ` <20211213144258.179984-10-elic@nvidia.com>
  2021-12-13 23:50   ` [PATCH v2 09/10] vdpa: Support reporting max device virtqueues Si-Wei Liu
@ 2021-12-14  1:00   ` Si-Wei Liu
       [not found]     ` <20211214082229.GE89087@mtl-vdi-166.wap.labs.mlnx>
  1 sibling, 1 reply; 19+ messages in thread
From: Si-Wei Liu @ 2021-12-14  1:00 UTC (permalink / raw)
  To: Eli Cohen, mst, jasowang, virtualization; +Cc: lvivier, eperezma



On 12/13/2021 6:42 AM, Eli Cohen wrote:
> Add max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
> need to fill this value according to the device capabilities.
>
> This value is reported back in a netlink message when showing a device.
>
> Example:
>
> $ vdpa dev show
> vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
> 	max_vqp 3 max_vq_size 256 max_supported_vqs 256
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
>   drivers/vdpa/vdpa.c       | 2 ++
>   include/linux/vdpa.h      | 1 +
>   include/uapi/linux/vdpa.h | 1 +
>   3 files changed, 4 insertions(+)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 130a8d4aeaed..b9dd693146be 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -695,6 +695,8 @@ vdpa_dev_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid, u32 seq
>   		goto msg_err;
>   	if (nla_put_u16(msg, VDPA_ATTR_DEV_MIN_VQ_SIZE, min_vq_size))
>   		goto msg_err;
> +	if (nla_put_u16(msg, VDPA_ATTR_DEV_MAX_DEV_VQS, vdev->mdev->max_supported_vqs))
> +		goto msg_err;
What is the default value if the vendor driver doesn't expose this value? 
And this doesn't seem like something to stash in struct vdpa_mgmt_dev; 
only some vdpa vendor network drivers may need to expose it, and if it is 
not exposed we can assume it's 32767 per the spec. A generic vdpa op 
(get_device_capability?) to query device capability might be better, I 
guess (define a VDPA_CAPAB_NET_MAX_VQS subtype to get it).

-Siwei

>   
>   	genlmsg_end(msg, hdr);
>   	return 0;
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index 9245dfbf1b08..72ea8ad7ba21 100644
> --- a/include/linux/vdpa.h
> +++ b/include/linux/vdpa.h
> @@ -444,6 +444,7 @@ struct vdpa_mgmt_dev {
>   	const struct vdpa_mgmtdev_ops *ops;
>   	const struct virtio_device_id *id_table;
>   	u64 config_attr_mask;
> +	u16 max_supported_vqs;
>   	struct list_head list;
>   };
>   
> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> index 23b854e3e5e2..1b758d77e228 100644
> --- a/include/uapi/linux/vdpa.h
> +++ b/include/uapi/linux/vdpa.h
> @@ -42,6 +42,7 @@ enum vdpa_attr {
>   	VDPA_ATTR_DEV_MAX_VQ_SIZE,		/* u16 */
>   	VDPA_ATTR_DEV_MIN_VQ_SIZE,		/* u16 */
>   	VDPA_ATTR_DEV_FLAGS,			/* u64 */
> +	VDPA_ATTR_DEV_MAX_DEV_VQS,		/* u16 */
>   
>   	VDPA_ATTR_DEV_NET_CFG_MACADDR,		/* binary */
>   	VDPA_ATTR_DEV_NET_STATUS,		/* u8 */


* Re: [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs
       [not found] <20211213144258.179984-1-elic@nvidia.com>
                   ` (2 preceding siblings ...)
       [not found] ` <20211213144258.179984-5-elic@nvidia.com>
@ 2021-12-14  3:17 ` Jason Wang
  2021-12-14  4:11   ` Parav Pandit via Virtualization
                     ` (2 more replies)
       [not found] ` <20211213144258.179984-2-elic@nvidia.com>
                   ` (2 subsequent siblings)
  6 siblings, 3 replies; 19+ messages in thread
From: Jason Wang @ 2021-12-14  3:17 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
>
> Allow the user to configure the max number of virtqueue pairs for a vdpa
> instance. The user can then control the actual number of virtqueue pairs
> using ethtool.
>
> Example, set number of VQPs to 2:
> $ ethtool -L ens1 combined 2
>
> A user can check the max supported virtqueues for a management device by
> running:
>
> vdpa dev show
>     vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
>             max_vqp 3 max_vq_size 256 max_supported_vqs 256

I think max_supported_vqs should be an odd number since it should
include the control vq.

>
> and refer to this value when adding a device.
>
> To create a device with a max of 5 VQPs:
> vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1 max_vqp 5

A question: consider a parent that supports both net and block. If the user uses

vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1

How do we know it's a net or block device? Or do you expect some block
specific attributes to be added?

Thanks

>
> V1 -> V2:
> 1. Do not return the index of the control VQ. Instead return an
>    indication if Ctrl VQ was negotiated.
> 2. Do not return config information if FEATURES_OK is not set to avoid
>    reporting out of sync information.
> 3. Minor fixes as described by the individual patches
> 4. Add patches to return the max virtqueues a device is capable of
>    supporting.
>
> Eli Cohen (10):
>   vdpa: Provide interface to read driver features
>   vdpa/mlx5: Distribute RX virtqueues in RQT object
>   vdpa: Read device configuration only if FEATURES_OK
>   vdpa: Allow to configure max data virtqueues
>   vdpa/mlx5: Fix config_attr_mask assignment
>   vdpa/mlx5: Support configuring max data virtqueue pairs
>   vdpa: Add support for returning device configuration information
>   vdpa/mlx5: Restore cur_num_vqs in case of failure in change_num_qps()
>   vdpa: Support reporting max device virtqueues
>   vdpa/mlx5: Configure max supported virtqueues
>
>  drivers/vdpa/alibaba/eni_vdpa.c    | 16 +++--
>  drivers/vdpa/ifcvf/ifcvf_main.c    | 16 +++--
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 94 +++++++++++++++++-------------
>  drivers/vdpa/vdpa.c                | 42 ++++++++++++-
>  drivers/vdpa/vdpa_sim/vdpa_sim.c   | 21 +++++--
>  drivers/vdpa/vdpa_user/vduse_dev.c | 16 +++--
>  drivers/vdpa/virtio_pci/vp_vdpa.c  | 16 +++--
>  drivers/vhost/vdpa.c               |  2 +-
>  drivers/virtio/virtio_vdpa.c       |  2 +-
>  include/linux/vdpa.h               | 16 +++--
>  include/uapi/linux/vdpa.h          |  9 +++
>  11 files changed, 178 insertions(+), 72 deletions(-)
>
> --
> 2.33.1
>


* Re: [PATCH v2 01/10] vdpa: Provide interface to read driver features
       [not found] ` <20211213144258.179984-2-elic@nvidia.com>
@ 2021-12-14  3:37   ` Jason Wang
  0 siblings, 0 replies; 19+ messages in thread
From: Jason Wang @ 2021-12-14  3:37 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
>
> Provide an interface to read the negotiated features. This is needed
> when building the netlink message in vdpa_dev_net_config_fill().
>
> Also fix the implementation of vdpa_dev_net_config_fill() to use the
> negotiated features instead of the device features.
>
> To make APIs clearer, make the following name changes to struct
> vdpa_config_ops so they better describe their operations:
>
> get_features -> get_device_features
> set_features -> set_driver_features
>
> Finally, add get_driver_features to return the negotiated features and
> add implementation to all the upstream drivers.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> -----
> v1 -> v2:
> 1. Fix to call vp_modern_get_driver_features() in
>    vp_vdpa_get_driver_features()
> 2. Improve callback function description
> ---

Acked-by: Jason Wang <jasowang@redhat.com>


* Re: [PATCH v2 03/10] vdpa: Read device configuration only if FEATURES_OK
       [not found] ` <20211213144258.179984-4-elic@nvidia.com>
@ 2021-12-14  3:44   ` Jason Wang
       [not found]     ` <20211214071411.GA89087@mtl-vdi-166.wap.labs.mlnx>
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Wang @ 2021-12-14  3:44 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
>
> Avoid reading device configuration during feature negotiation. Read
> device status and verify that VIRTIO_CONFIG_S_FEATURES_OK is set.
> Otherwise, return -EAGAIN.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>



> ---
>  drivers/vdpa/vdpa.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 42d71d60d5dc..5749cf0a1500 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -819,8 +819,15 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
>  {
>         u32 device_id;
>         void *hdr;
> +       u8 status;
>         int err;
>
> +       status = vdev->config->get_status(vdev);
> +       if (!(status & VIRTIO_CONFIG_S_FEATURES_OK)) {
> +               NL_SET_ERR_MSG_MOD(extack, "Features negotiation not completed");
> +               return -EAGAIN;
> +       }
> +
>         hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags,
>                           VDPA_CMD_DEV_CONFIG_GET);
>         if (!hdr)

It looks to me we need to synchronize this with set_status() and
cf_mutex (set_config).

Or simply return all the config space (the maximum set of features)

Thanks

> --
> 2.33.1
>


* Re: [PATCH v2 07/10] vdpa: Add support for returning device configuration information
       [not found] ` <20211213144258.179984-8-elic@nvidia.com>
  2021-12-13 23:49   ` [PATCH v2 07/10] vdpa: Add support for returning device configuration information Si-Wei Liu
@ 2021-12-14  4:05   ` Jason Wang
       [not found]     ` <20211214071713.GB89087@mtl-vdi-166.wap.labs.mlnx>
  1 sibling, 1 reply; 19+ messages in thread
From: Jason Wang @ 2021-12-14  4:05 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
>
> Add netlink attribute to store flags indicating current state of the
> device.
> In addition, introduce a flag to indicate whether control virtqueue is
> used.
>
> This indication can be retrieved by:
>
> vdpa dev config show vdpa-a
> vdpa-a: mac 00:00:00:00:88:88 link up link_announce false max_vq_pairs 1
>         mtu 1500 ctrl_vq yes

I think the cvq is kind of duplicated with the driver features?

Thanks

>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
>
> ----
> V1 -> V2
> Patch was changed to return only an indication of ctrl VQ
> ---
>  drivers/vdpa/vdpa.c       | 17 +++++++++++++++++
>  include/uapi/linux/vdpa.h |  8 ++++++++
>  2 files changed, 25 insertions(+)
>
> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> index 7b7bef7673b4..130a8d4aeaed 100644
> --- a/drivers/vdpa/vdpa.c
> +++ b/drivers/vdpa/vdpa.c
> @@ -787,6 +787,19 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
>         return msg->len;
>  }
>
> +static int vdpa_dev_net_ctrl_vq_fill(struct vdpa_device *vdev,
> +                                    struct sk_buff *msg,
> +                                    struct virtio_net_config *config,
> +                                    u64 features)
> +{
> +       if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
> +               return 0;
> +
> +       /* currently the only flag can be returned */
> +       return nla_put_u64_64bit(msg, VDPA_ATTR_DEV_FLAGS,
> +                                BIT_ULL(VDPA_DEV_ATTR_CVQ), VDPA_ATTR_PAD);
> +}
> +
>  static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
>                                        struct sk_buff *msg, u64 features,
>                                        const struct virtio_net_config *config)
> @@ -805,6 +818,7 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
>         struct virtio_net_config config = {};
>         u64 features;
>         u16 val_u16;
> +       int err;
>
>         vdpa_get_config(vdev, 0, &config, sizeof(config));
>
> @@ -821,6 +835,9 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
>                 return -EMSGSIZE;
>
>         features = vdev->config->get_driver_features(vdev);
> +       err = vdpa_dev_net_ctrl_vq_fill(vdev, msg, &config, features);
> +       if (err)
> +               return err;
>
>         return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
>  }
> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> index a252f06f9dfd..23b854e3e5e2 100644
> --- a/include/uapi/linux/vdpa.h
> +++ b/include/uapi/linux/vdpa.h
> @@ -20,9 +20,16 @@ enum vdpa_command {
>         VDPA_CMD_DEV_CONFIG_GET,        /* can dump */
>  };
>
> +enum {
> +       VDPA_DEV_ATTR_CVQ,
> +};
> +
>  enum vdpa_attr {
>         VDPA_ATTR_UNSPEC,
>
> +       /* Pad attribute for 64b alignment */
> +       VDPA_ATTR_PAD = VDPA_ATTR_UNSPEC,
> +
>         /* bus name (optional) + dev name together make the parent device handle */
>         VDPA_ATTR_MGMTDEV_BUS_NAME,             /* string */
>         VDPA_ATTR_MGMTDEV_DEV_NAME,             /* string */
> @@ -34,6 +41,7 @@ enum vdpa_attr {
>         VDPA_ATTR_DEV_MAX_VQS,                  /* u32 */
>         VDPA_ATTR_DEV_MAX_VQ_SIZE,              /* u16 */
>         VDPA_ATTR_DEV_MIN_VQ_SIZE,              /* u16 */
> +       VDPA_ATTR_DEV_FLAGS,                    /* u64 */
>
>         VDPA_ATTR_DEV_NET_CFG_MACADDR,          /* binary */
>         VDPA_ATTR_DEV_NET_STATUS,               /* u8 */
> --
> 2.33.1
>


* RE: [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs
  2021-12-14  3:17 ` [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs Jason Wang
@ 2021-12-14  4:11   ` Parav Pandit via Virtualization
  2021-12-14  6:16     ` Jason Wang
       [not found]   ` <20211214073141.GC89087@mtl-vdi-166.wap.labs.mlnx>
  2021-12-14  9:42   ` Stefano Garzarella
  2 siblings, 1 reply; 19+ messages in thread
From: Parav Pandit via Virtualization @ 2021-12-14  4:11 UTC (permalink / raw)
  To: Jason Wang, Eli Cohen
  Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu


> From: Jason Wang <jasowang@redhat.com>
> Sent: Tuesday, December 14, 2021 8:47 AM
> 
> A question, consider the parent support both net and block, if user use
> 
> vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1
> 
> How do we know it's a net or block device? Or do you expect some block
> specific attributes to be added?
The user should pass the virtio device id (net/blk etc.) when creating the device.
No such in-kernel driver exists so far, so when one is added the netlink interface should be enhanced to accept the optional device id.

* Re: [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs
  2021-12-14  4:11   ` Parav Pandit via Virtualization
@ 2021-12-14  6:16     ` Jason Wang
  2021-12-14  6:40       ` Parav Pandit via Virtualization
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Wang @ 2021-12-14  6:16 UTC (permalink / raw)
  To: Parav Pandit
  Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu, Eli Cohen

On Tue, Dec 14, 2021 at 12:11 PM Parav Pandit <parav@nvidia.com> wrote:
>
>
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Tuesday, December 14, 2021 8:47 AM
> >
> > A question, consider the parent support both net and block, if user use
> >
> > vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1
> >
> > How do we know it's a net or block device? Or do you expect some block
> > specific attributes to be added?
> User should pass the virtio device id of type net/blk etc when creating the device.
> No such in-kernel driver exists so far, so when it is added netlink interface should be enhanced to accept the optional device id.

So the plan is to reject the dev adding without specifying the device
type? Looks fine to me.

Thanks


* RE: [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs
  2021-12-14  6:16     ` Jason Wang
@ 2021-12-14  6:40       ` Parav Pandit via Virtualization
  0 siblings, 0 replies; 19+ messages in thread
From: Parav Pandit via Virtualization @ 2021-12-14  6:40 UTC (permalink / raw)
  To: Jason Wang
  Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu, Eli Cohen



> From: Jason Wang <jasowang@redhat.com>
> Sent: Tuesday, December 14, 2021 11:47 AM
> 
> On Tue, Dec 14, 2021 at 12:11 PM Parav Pandit <parav@nvidia.com> wrote:
> >
> >
> > > From: Jason Wang <jasowang@redhat.com>
> > > Sent: Tuesday, December 14, 2021 8:47 AM
> > >
> > > A question, consider the parent support both net and block, if user
> > > use
> > >
> > > vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1
> > >
> > > How do we know it's a net or block device? Or do you expect some
> > > block specific attributes to be added?
> > User should pass the virtio device id of type net/blk etc when creating the
> device.
> > No such in-kernel driver exists so far, so when it is added netlink interface
> should be enhanced to accept the optional device id.
> 
> So the plan is to reject the dev adding without specifying the device type?
> Looks fine to me.
Yes, this enhancement will apply only to mgmt devices that support multiple device types.
This ensures backward compatibility and also spares the user from having to specify the type for mgmt devices that support a single device type.

* Re: [PATCH v2 03/10] vdpa: Read device configuration only if FEATURES_OK
       [not found]     ` <20211214071411.GA89087@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-14  8:53       ` Jason Wang
  0 siblings, 0 replies; 19+ messages in thread
From: Jason Wang @ 2021-12-14  8:53 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

On Tue, Dec 14, 2021 at 3:14 PM Eli Cohen <elic@nvidia.com> wrote:
>
> On Tue, Dec 14, 2021 at 11:44:06AM +0800, Jason Wang wrote:
> > On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > Avoid reading device configuration during feature negotiation. Read
> > > device status and verify that VIRTIO_CONFIG_S_FEATURES_OK is set.
> > > Otherwise, return -EAGAIN.
> > >
> > > Signed-off-by: Eli Cohen <elic@nvidia.com>
> >
> >
> >
> > > ---
> > >  drivers/vdpa/vdpa.c | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > >
> > > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > > index 42d71d60d5dc..5749cf0a1500 100644
> > > --- a/drivers/vdpa/vdpa.c
> > > +++ b/drivers/vdpa/vdpa.c
> > > @@ -819,8 +819,15 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
> > >  {
> > >         u32 device_id;
> > >         void *hdr;
> > > +       u8 status;
> > >         int err;
> > >
> > > +       status = vdev->config->get_status(vdev);
> > > +       if (!(status & VIRTIO_CONFIG_S_FEATURES_OK)) {
> > > +               NL_SET_ERR_MSG_MOD(extack, "Features negotiation not completed");
> > > +               return -EAGAIN;
> > > +       }
> > > +
> > >         hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags,
> > >                           VDPA_CMD_DEV_CONFIG_GET);
> > >         if (!hdr)
> >
> > It looks to me we need to synchronize this with set_status() and
> > cf_mutex (set_config).
>
> Makes sense.
>
> >
> > Or simply return all the config space (the maximum set of features)
>
> Not sure I follow you on this. Are you suggesting to return in the
> netlink message all the fields of struct virtio_net_config as individual fields?
>
> How is this related to the need to sync with cf_mutex?

I meant the reason for the synchronization is that some config
fields depend on the feature negotiation. I wonder if it would be
easier if we just returned all the possible config fields; then there's
probably no need for the synchronization, since we don't need to check
the driver features, only the device features.

Thanks

>
> >
> > Thanks
> >
> > > --
> > > 2.33.1
> > >
> >
>


* Re: [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs
       [not found]   ` <20211214073141.GC89087@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-14  8:53     ` Jason Wang
  0 siblings, 0 replies; 19+ messages in thread
From: Jason Wang @ 2021-12-14  8:53 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

On Tue, Dec 14, 2021 at 3:32 PM Eli Cohen <elic@nvidia.com> wrote:
>
> On Tue, Dec 14, 2021 at 11:17:00AM +0800, Jason Wang wrote:
> > On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > Allow the user to configure the max number of virtqueue pairs for a vdpa
> > > instance. The user can then control the actual number of virtqueue pairs
> > > using ethtool.
> > >
> > > Example, set number of VQPs to 2:
> > > $ ethtool -L ens1 combined 2
> > >
> > > A user can check the max supported virtqueues for a management device by
> > > running:
> > >
> > > vdpa dev show
> > >     vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
> > >             max_vqp 3 max_vq_size 256 max_supported_vqs 256
> >
> > I think the max_supported_vqs should be an odd number since it should
> > contain control vq.
>
> I was thinking we should report data virtqueues. If some upstream driver
> or device does not support CVQ, this should still be an even number.
>
> Moreover, if we want to include the CVQ, we might want to indicate that
> explicitly.

Can we expose driver features in this case?

Thanks

>
> >
> > >
> > > and refer to this value when adding a device.
> > >
> > > To create a device with a max of 5 VQPs:
> > > vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1 max_vqp 5
> >
> > A question, consider the parent support both net and block, if user use
> >
> > vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1
> >
> > How do we know it's a net or block device? Or do you expect some block
> > specific attributes to be added?
> >
> > Thanks
> >
> > >
> > > V1 -> V2:
> > > 1. Do not return the index of the control VQ. Instead return an
> > >    indication if Ctrl VQ was negotiated.
> > > 2. Do not return config information if FEATURES_OK is not set to avoid
> > >    reporting out of sync information.
> > > 3. Minor fixes as described by the individual patches
> > > 4. Add patches to return the max virtqueues a device is capable of
> > >    supporting.
> > >
> > > Eli Cohen (10):
> > >   vdpa: Provide interface to read driver features
> > >   vdpa/mlx5: Distribute RX virtqueues in RQT object
> > >   vdpa: Read device configuration only if FEATURES_OK
> > >   vdpa: Allow to configure max data virtqueues
> > >   vdpa/mlx5: Fix config_attr_mask assignment
> > >   vdpa/mlx5: Support configuring max data virtqueue pairs
> > >   vdpa: Add support for returning device configuration information
> > >   vdpa/mlx5: Restore cur_num_vqs in case of failure in change_num_qps()
> > >   vdpa: Support reporting max device virtqueues
> > >   vdpa/mlx5: Configure max supported virtqueues
> > >
> > >  drivers/vdpa/alibaba/eni_vdpa.c    | 16 +++--
> > >  drivers/vdpa/ifcvf/ifcvf_main.c    | 16 +++--
> > >  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 94 +++++++++++++++++-------------
> > >  drivers/vdpa/vdpa.c                | 42 ++++++++++++-
> > >  drivers/vdpa/vdpa_sim/vdpa_sim.c   | 21 +++++--
> > >  drivers/vdpa/vdpa_user/vduse_dev.c | 16 +++--
> > >  drivers/vdpa/virtio_pci/vp_vdpa.c  | 16 +++--
> > >  drivers/vhost/vdpa.c               |  2 +-
> > >  drivers/virtio/virtio_vdpa.c       |  2 +-
> > >  include/linux/vdpa.h               | 16 +++--
> > >  include/uapi/linux/vdpa.h          |  9 +++
> > >  11 files changed, 178 insertions(+), 72 deletions(-)
> > >
> > > --
> > > 2.33.1
> > >
> >
>


* Re: [PATCH v2 07/10] vdpa: Add support for returning device configuration information
       [not found]     ` <20211214071713.GB89087@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-14  8:56       ` Jason Wang
  0 siblings, 0 replies; 19+ messages in thread
From: Jason Wang @ 2021-12-14  8:56 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

On Tue, Dec 14, 2021 at 3:17 PM Eli Cohen <elic@nvidia.com> wrote:
>
> On Tue, Dec 14, 2021 at 12:05:38PM +0800, Jason Wang wrote:
> > On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > Add netlink attribute to store flags indicating current state of the
> > > device.
> > > In addition, introduce a flag to indicate whether control virtqueue is
> > > used.
> > >
> > > This indication can be retrieved by:
> > >
> > > vdpa dev config show vdpa-a
> > > vdpa-a: mac 00:00:00:00:88:88 link up link_announce false max_vq_pairs 1
> > >         mtu 1500 ctrl_vq yes
> >
> > I think the cvq is kind of duplicated with the driver features?
>
> We don't pass the driver features to userspace. The availability of CVQ
> is passed through a new bit-mask field that encodes (currently) just the
> availability of CVQ.
>
> Are you suggesting to return the entire set of negotiated features and
> let user space decode that?

Yes, I think that can simplify things and save a lot of effort.

Thanks

>
> >
> > Thanks
> >
> > >
> > > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > >
> > > ----
> > > V1 -> V2
> > > Patch was changed to return only an indication of ctrl VQ
> > > ---
> > >  drivers/vdpa/vdpa.c       | 17 +++++++++++++++++
> > >  include/uapi/linux/vdpa.h |  8 ++++++++
> > >  2 files changed, 25 insertions(+)
> > >
> > > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > > index 7b7bef7673b4..130a8d4aeaed 100644
> > > --- a/drivers/vdpa/vdpa.c
> > > +++ b/drivers/vdpa/vdpa.c
> > > @@ -787,6 +787,19 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
> > >         return msg->len;
> > >  }
> > >
> > > +static int vdpa_dev_net_ctrl_vq_fill(struct vdpa_device *vdev,
> > > +                                    struct sk_buff *msg,
> > > +                                    struct virtio_net_config *config,
> > > +                                    u64 features)
> > > +{
> > > +       if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
> > > +               return 0;
> > > +
> > > +       /* currently the only flag can be returned */
> > > +       return nla_put_u64_64bit(msg, VDPA_ATTR_DEV_FLAGS,
> > > +                                BIT_ULL(VDPA_DEV_ATTR_CVQ), VDPA_ATTR_PAD);
> > > +}
> > > +
> > >  static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
> > >                                        struct sk_buff *msg, u64 features,
> > >                                        const struct virtio_net_config *config)
> > > @@ -805,6 +818,7 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > >         struct virtio_net_config config = {};
> > >         u64 features;
> > >         u16 val_u16;
> > > +       int err;
> > >
> > >         vdpa_get_config(vdev, 0, &config, sizeof(config));
> > >
> > > @@ -821,6 +835,9 @@ static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *ms
> > >                 return -EMSGSIZE;
> > >
> > >         features = vdev->config->get_driver_features(vdev);
> > > +       err = vdpa_dev_net_ctrl_vq_fill(vdev, msg, &config, features);
> > > +       if (err)
> > > +               return err;
> > >
> > >         return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
> > >  }
> > > diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> > > index a252f06f9dfd..23b854e3e5e2 100644
> > > --- a/include/uapi/linux/vdpa.h
> > > +++ b/include/uapi/linux/vdpa.h
> > > @@ -20,9 +20,16 @@ enum vdpa_command {
> > >         VDPA_CMD_DEV_CONFIG_GET,        /* can dump */
> > >  };
> > >
> > > +enum {
> > > +       VDPA_DEV_ATTR_CVQ,
> > > +};
> > > +
> > >  enum vdpa_attr {
> > >         VDPA_ATTR_UNSPEC,
> > >
> > > +       /* Pad attribute for 64b alignment */
> > > +       VDPA_ATTR_PAD = VDPA_ATTR_UNSPEC,
> > > +
> > >         /* bus name (optional) + dev name together make the parent device handle */
> > >         VDPA_ATTR_MGMTDEV_BUS_NAME,             /* string */
> > >         VDPA_ATTR_MGMTDEV_DEV_NAME,             /* string */
> > > @@ -34,6 +41,7 @@ enum vdpa_attr {
> > >         VDPA_ATTR_DEV_MAX_VQS,                  /* u32 */
> > >         VDPA_ATTR_DEV_MAX_VQ_SIZE,              /* u16 */
> > >         VDPA_ATTR_DEV_MIN_VQ_SIZE,              /* u16 */
> > > +       VDPA_ATTR_DEV_FLAGS,                    /* u64 */
> > >
> > >         VDPA_ATTR_DEV_NET_CFG_MACADDR,          /* binary */
> > >         VDPA_ATTR_DEV_NET_STATUS,               /* u8 */
> > > --
> > > 2.33.1
> > >
> >
>


* Re: [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs
  2021-12-14  3:17 ` [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs Jason Wang
  2021-12-14  4:11   ` Parav Pandit via Virtualization
       [not found]   ` <20211214073141.GC89087@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-14  9:42   ` Stefano Garzarella
       [not found]     ` <20211214095303.GA94217@mtl-vdi-166.wap.labs.mlnx>
  2 siblings, 1 reply; 19+ messages in thread
From: Stefano Garzarella @ 2021-12-14  9:42 UTC (permalink / raw)
  To: Eli Cohen, virtualization-owner
  Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu

Hi Eli,
I don't know what's wrong, but I've only received replies through the 
virtualization@lists.linux-foundation.org mailing list.

Even in the archive I can't find your original series.

Adding virtualization-owner@lists.linux-foundation.org to double check 
what's going wrong.

Thanks,
Stefano

On Tue, Dec 14, 2021 at 11:17:00AM +0800, Jason Wang wrote:
>On Mon, Dec 13, 2021 at 10:43 PM Eli Cohen <elic@nvidia.com> wrote:
>>
>> Allow the user to configure the max number of virtqueue pairs for a vdpa
>> instance. The user can then control the actual number of virtqueue pairs
>> using ethtool.
>>
>> Example, set number of VQPs to 2:
>> $ ethtool -L ens1 combined 2
>>
>> A user can check the max supported virtqueues for a management device by
>> running:
>>
>> vdpa dev show
>>     vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
>>             max_vqp 3 max_vq_size 256 max_supported_vqs 256
>
>I think the max_supported_vqs should be an odd number since it should
>contain control vq.
>
>>
>> and refer to this value when adding a device.
>>
>> To create a device with a max of 5 VQPs:
>> vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1 max_vqp 5
>
>A question, consider the parent support both net and block, if user use
>
>vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1
>
>How do we know it's a net or block device? Or do you expect some block
>specific attributes to be added?
>
>Thanks
>
>>
>> V1 -> V2:
>> 1. Do not return the index of the control VQ. Instead return an
>>    indication if Ctrl VQ was negotiated.
>> 2. Do not return config information if FEATURES_OK is not set to avoid
>>    reporting out of sync information.
>> 3. Minor fixes as described by the individual patches
>> 4. Add patches to return the max virtqueues a device is capable of
>>    supporting.
>>
>> Eli Cohen (10):
>>   vdpa: Provide interface to read driver features
>>   vdpa/mlx5: Distribute RX virtqueues in RQT object
>>   vdpa: Read device configuration only if FEATURES_OK
>>   vdpa: Allow to configure max data virtqueues
>>   vdpa/mlx5: Fix config_attr_mask assignment
>>   vdpa/mlx5: Support configuring max data virtqueue pairs
>>   vdpa: Add support for returning device configuration information
>>   vdpa/mlx5: Restore cur_num_vqs in case of failure in change_num_qps()
>>   vdpa: Support reporting max device virtqueues
>>   vdpa/mlx5: Configure max supported virtqueues
>>
>>  drivers/vdpa/alibaba/eni_vdpa.c    | 16 +++--
>>  drivers/vdpa/ifcvf/ifcvf_main.c    | 16 +++--
>>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 94 +++++++++++++++++-------------
>>  drivers/vdpa/vdpa.c                | 42 ++++++++++++-
>>  drivers/vdpa/vdpa_sim/vdpa_sim.c   | 21 +++++--
>>  drivers/vdpa/vdpa_user/vduse_dev.c | 16 +++--
>>  drivers/vdpa/virtio_pci/vp_vdpa.c  | 16 +++--
>>  drivers/vhost/vdpa.c               |  2 +-
>>  drivers/virtio/virtio_vdpa.c       |  2 +-
>>  include/linux/vdpa.h               | 16 +++--
>>  include/uapi/linux/vdpa.h          |  9 +++
>>  11 files changed, 178 insertions(+), 72 deletions(-)
>>
>> --
>> 2.33.1
>>
>


* Re: [PATCH v2 00/10] Allow for configuring max number of virtqueue pairs
       [not found]     ` <20211214095303.GA94217@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-14 11:12       ` Stefano Garzarella
  0 siblings, 0 replies; 19+ messages in thread
From: Stefano Garzarella @ 2021-12-14 11:12 UTC (permalink / raw)
  To: Eli Cohen
  Cc: Laurent Vivier, mst, virtualization, eperezma, Si-Wei Liu,
	virtualization-owner

On Tue, Dec 14, 2021 at 11:53:03AM +0200, Eli Cohen wrote:
>On Tue, Dec 14, 2021 at 10:42:35AM +0100, Stefano Garzarella wrote:
>> Hi Eli,
>> I don't know what's wrong, but I've only received replies through the
>> virtualization@lists.linux-foundation.org mailing list.
>>
>> Even in the archive I can't find your original series.
>>
>> Adding virtualization-owner@lists.linux-foundation.org to double check
>> what's going wrong.
>
>OK, let me know if you want me to send you the patches. I am going to
>send a new series and can put you on CC.

I'll look at the new series.

Thanks,
Stefano


* Re: [PATCH v2 09/10] vdpa: Support reporting max device virtqueues
       [not found]     ` <20211214082229.GE89087@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-16  2:56       ` Si-Wei Liu
       [not found]         ` <20211219120839.GA84946@mtl-vdi-166.wap.labs.mlnx>
  0 siblings, 1 reply; 19+ messages in thread
From: Si-Wei Liu @ 2021-12-16  2:56 UTC (permalink / raw)
  To: Eli Cohen; +Cc: lvivier, mst, virtualization, eperezma



On 12/14/2021 12:22 AM, Eli Cohen wrote:
> On Mon, Dec 13, 2021 at 05:00:18PM -0800, Si-Wei Liu wrote:
>>
>> On 12/13/2021 6:42 AM, Eli Cohen wrote:
>>> Add max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
>>> need to fill this value according to the device capabilities.
>>>
>>> This value is reported back in a netlink message when showing a device.
>>>
>>> Example:
>>>
>>> $ vdpa dev show
>>> vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
>>> 	max_vqp 3 max_vq_size 256 max_supported_vqs 256
>>>
>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>> ---
>>>    drivers/vdpa/vdpa.c       | 2 ++
>>>    include/linux/vdpa.h      | 1 +
>>>    include/uapi/linux/vdpa.h | 1 +
>>>    3 files changed, 4 insertions(+)
>>>
>>> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
>>> index 130a8d4aeaed..b9dd693146be 100644
>>> --- a/drivers/vdpa/vdpa.c
>>> +++ b/drivers/vdpa/vdpa.c
>>> @@ -695,6 +695,8 @@ vdpa_dev_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid, u32 seq
>>>    		goto msg_err;
>>>    	if (nla_put_u16(msg, VDPA_ATTR_DEV_MIN_VQ_SIZE, min_vq_size))
>>>    		goto msg_err;
>>> +	if (nla_put_u16(msg, VDPA_ATTR_DEV_MAX_DEV_VQS, vdev->mdev->max_supported_vqs))
>>> +		goto msg_err;
>> What is the default value if vendor driver doesn't expose this value?
> I think each vendor should put a value here. If you don't, you can still
> add a vdpa device but you'll have to guess.
> Alternatively, I can provide a patch that sets this value to 2 for all
> vendors.
>
>> And it
>> doesn't seem this is something to stash on struct vdpa_mgmt_dev, only some
>> vdpa vendor for network driver may need to expose it; if otherwise not
>> exposed we can assume it's 32767 by the spec.
> I am not sure there's any device capable of so many VQs. In any case, I
> think using 2 as default is better since any device can support that.
OK, default to 2 is fine (for net).

>
>> A generic vdpa op
>> (get_device_capability?)
> The point is that you need this info to create a vdpa device so I don't
> see how it can be a vdpa device operation.
I wonder if this info should belong to mgmtdev rather than vdpa dev? It 
should be visible before the user ever attempts to create a vdpa device.

>
>> to query device capability might be better I guess
>> (define a VDPA_CAPAB_NET_MAX_VQS subtype to get it).
> Why limit this capability only to net devices? Any kind of vdpa device
> may want to report this capability.
I thought what you report here is the max number of data queues the 
device can support, no? The control or event queue that is emulated in 
userspace isn't very interesting to users IMHO.

The user needs to take the hint from this value to create a vdpa net 
device and specify a proper max_vqp value. It's rather counterintuitive 
if what they see is not what they would use.

Thanks,
-Siwei

>> -Siwei
>>
>>>    	genlmsg_end(msg, hdr);
>>>    	return 0;
>>> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
>>> index 9245dfbf1b08..72ea8ad7ba21 100644
>>> --- a/include/linux/vdpa.h
>>> +++ b/include/linux/vdpa.h
>>> @@ -444,6 +444,7 @@ struct vdpa_mgmt_dev {
>>>    	const struct vdpa_mgmtdev_ops *ops;
>>>    	const struct virtio_device_id *id_table;
>>>    	u64 config_attr_mask;
>>> +	u16 max_supported_vqs;
>>>    	struct list_head list;
>>>    };
>>> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
>>> index 23b854e3e5e2..1b758d77e228 100644
>>> --- a/include/uapi/linux/vdpa.h
>>> +++ b/include/uapi/linux/vdpa.h
>>> @@ -42,6 +42,7 @@ enum vdpa_attr {
>>>    	VDPA_ATTR_DEV_MAX_VQ_SIZE,		/* u16 */
>>>    	VDPA_ATTR_DEV_MIN_VQ_SIZE,		/* u16 */
>>>    	VDPA_ATTR_DEV_FLAGS,			/* u64 */
>>> +	VDPA_ATTR_DEV_MAX_DEV_VQS,		/* u16 */
>>>    	VDPA_ATTR_DEV_NET_CFG_MACADDR,		/* binary */
>>>    	VDPA_ATTR_DEV_NET_STATUS,		/* u8 */


* Re: [PATCH v2 09/10] vdpa: Support reporting max device virtqueues
       [not found]         ` <20211219120839.GA84946@mtl-vdi-166.wap.labs.mlnx>
@ 2021-12-21  8:02           ` Si-Wei Liu
  0 siblings, 0 replies; 19+ messages in thread
From: Si-Wei Liu @ 2021-12-21  8:02 UTC (permalink / raw)
  To: Eli Cohen; +Cc: lvivier, mst, virtualization, eperezma



On 12/19/2021 4:08 AM, Eli Cohen wrote:
> On Wed, Dec 15, 2021 at 06:56:40PM -0800, Si-Wei Liu wrote:
>>
>> On 12/14/2021 12:22 AM, Eli Cohen wrote:
>>> On Mon, Dec 13, 2021 at 05:00:18PM -0800, Si-Wei Liu wrote:
>>>> On 12/13/2021 6:42 AM, Eli Cohen wrote:
>>>>> Add max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
>>>>> need to fill this value according to the device capabilities.
>>>>>
>>>>> This value is reported back in a netlink message when showing a device.
>>>>>
>>>>> Example:
>>>>>
>>>>> $ vdpa dev show
>>>>> vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
>>>>> 	max_vqp 3 max_vq_size 256 max_supported_vqs 256
>>>>>
>>>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>>>> ---
>>>>>     drivers/vdpa/vdpa.c       | 2 ++
>>>>>     include/linux/vdpa.h      | 1 +
>>>>>     include/uapi/linux/vdpa.h | 1 +
>>>>>     3 files changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
>>>>> index 130a8d4aeaed..b9dd693146be 100644
>>>>> --- a/drivers/vdpa/vdpa.c
>>>>> +++ b/drivers/vdpa/vdpa.c
>>>>> @@ -695,6 +695,8 @@ vdpa_dev_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid, u32 seq
>>>>>     		goto msg_err;
>>>>>     	if (nla_put_u16(msg, VDPA_ATTR_DEV_MIN_VQ_SIZE, min_vq_size))
>>>>>     		goto msg_err;
>>>>> +	if (nla_put_u16(msg, VDPA_ATTR_DEV_MAX_DEV_VQS, vdev->mdev->max_supported_vqs))
>>>>> +		goto msg_err;
>>>> What is the default value if vendor driver doesn't expose this value?
>>> I think each vendor should put a value here. If you don't, you can still
>>> add a vdpa device but you'll have to guess.
>>> Alternatively, I can provide a patch that sets this value to 2 for all
>>> vendors.
>>>
>>>> And it
>>>> doesn't seem this is something to stash on struct vdpa_mgmt_dev, only some
>>>> vdpa vendor for network driver may need to expose it; if otherwise not
>>>> exposed we can assume it's 32767 by the spec.
>>> I am not sure there's any device capable of so many VQs. In any case, I
>>> think using 2 as default is better since any device can support that.
>> OK, default to 2 is fine (for net).
>>
>>>> A generic vdpa op
>>>> (get_device_capability?)
>>> The point is that you need this info to create a vdpa device so I don't
>>> see how it can be a vdpa device operation.
>> I wonder if this info should belong to mgmtdev rather than vdpa dev? It
>> should be visible before the user ever attempts to create a vdpa device.
>>
> Every vendor lists its devices on the management bus. Then you use the
> vdpa tool to instantiate a device, so I think the management device is
> the right place.
>
>>>> to query device capability might be better I guess
>>>> (define a VDPA_CAPAB_NET_MAX_VQS subtype to get it).
>>> Why limit this capability only to net devices? Any kind of vdpa device
>>> may want to report this capability.
>> I thought what you report here is the max number of data queues the device
>> can support, no? The control or event queue that is emulated in userspace
>> isn't very interesting to users IMHO.
>>
>> The user needs to take the hint from this value to create a vdpa net device
>> and specify a proper max_vqp value. It's rather counterintuitive if what
>> they see is not what they would use.
> I am not following you here.
> We report the max the device is capable of. The user uses this to create
> a device and specify valid number of virtqueues.
I mean, why don't you expose this value in "vdpa mgmtdev show", so the 
user knows the max number of queues the parent may support before 
attempting to create a vdpa device? How can a user infer this value 
without having to create one?

Thanks,
-Siwei


>
>> Thanks,
>> -Siwei
>>
>>>> -Siwei
>>>>
>>>>>     	genlmsg_end(msg, hdr);
>>>>>     	return 0;
>>>>> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
>>>>> index 9245dfbf1b08..72ea8ad7ba21 100644
>>>>> --- a/include/linux/vdpa.h
>>>>> +++ b/include/linux/vdpa.h
>>>>> @@ -444,6 +444,7 @@ struct vdpa_mgmt_dev {
>>>>>     	const struct vdpa_mgmtdev_ops *ops;
>>>>>     	const struct virtio_device_id *id_table;
>>>>>     	u64 config_attr_mask;
>>>>> +	u16 max_supported_vqs;
>>>>>     	struct list_head list;
>>>>>     };
>>>>> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
>>>>> index 23b854e3e5e2..1b758d77e228 100644
>>>>> --- a/include/uapi/linux/vdpa.h
>>>>> +++ b/include/uapi/linux/vdpa.h
>>>>> @@ -42,6 +42,7 @@ enum vdpa_attr {
>>>>>     	VDPA_ATTR_DEV_MAX_VQ_SIZE,		/* u16 */
>>>>>     	VDPA_ATTR_DEV_MIN_VQ_SIZE,		/* u16 */
>>>>>     	VDPA_ATTR_DEV_FLAGS,			/* u64 */
>>>>> +	VDPA_ATTR_DEV_MAX_DEV_VQS,		/* u16 */
>>>>>     	VDPA_ATTR_DEV_NET_CFG_MACADDR,		/* binary */
>>>>>     	VDPA_ATTR_DEV_NET_STATUS,		/* u8 */



This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.