From: Eli Cohen <elic@nvidia.com>
To: Si-Wei Liu <si-wei.liu@oracle.com>,
	"mst@redhat.com" <mst@redhat.com>,
	"jasowang@redhat.com" <jasowang@redhat.com>
Cc: "virtualization@lists.linux-foundation.org" 
	<virtualization@lists.linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: [PATCH v3 1/2] vdpa: Add support for querying vendor statistics
Date: Sun, 8 May 2022 07:44:15 +0000	[thread overview]
Message-ID: <DM8PR12MB5400F639C76A0B8198666299ABC79@DM8PR12MB5400.namprd12.prod.outlook.com> (raw)
In-Reply-To: <f279bbbb-6af2-95f4-08c3-97ba72deb6cb@oracle.com>

> -----Original Message-----
> From: Si-Wei Liu <si-wei.liu@oracle.com>
> Sent: Tuesday, May 3, 2022 2:48 AM
> To: Eli Cohen <elic@nvidia.com>; mst@redhat.com; jasowang@redhat.com
> Cc: virtualization@lists.linux-foundation.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 1/2] vdpa: Add support for querying vendor statistics
> 
> 
> 
> On 5/2/2022 3:22 AM, Eli Cohen wrote:
> > Allow reading vendor statistics of a vdpa device. The specific
> > statistics data are received from the upstream driver in the form of
> > (attribute name, attribute value) pairs.
> >
> > Examples of statistics for an mlx5_vdpa device are:
> >
> > received_desc - number of descriptors received by the virtqueue
> > completed_desc - number of descriptors completed by the virtqueue
> >
> > A descriptor using indirect buffers is still counted as 1. In addition,
> > N chained descriptors are correctly counted N times, as one would expect.
> >
> > A new callback was added to vdpa_config_ops which provides the means for
> > the vdpa driver to return statistics results.
> >
> > The interface allows reading statistics for all the supported virtqueues,
> > including the control virtqueue if it exists.
> >
> > Below are some examples taken from mlx5_vdpa, whose statistics support is
> > introduced in the following patch:
> >
> > 1. Read statistics for the virtqueue at index 1
> >
> > $ vdpa dev vstats show vdpa-a qidx 1
> > vdpa-a:
> > queue_type tx queue_index 1 received_desc 3844836 completed_desc 3844836
> >
> > 2. Read statistics for the virtqueue at index 32
> > $ vdpa dev vstats show vdpa-a qidx 32
> > vdpa-a:
> > queue_type control_vq queue_index 32 received_desc 62 completed_desc 62
> >
> > 3. Read statistics for the virtqueue at index 0 with JSON output
> > $ vdpa -j dev vstats show vdpa-a qidx 0
> > {"vstats":{"vdpa-a":{
> > "queue_type":"rx","queue_index":0,"name":"received_desc","value":417776,\
> >   "name":"completed_desc","value":417548}}}
> >
> > 4. Read statistics for the virtqueue at index 0 with pretty JSON output
> > $ vdpa -jp dev vstats show vdpa-a qidx 0
> > {
> >      "vstats": {
> >          "vdpa-a": {
> >
> >              "queue_type": "rx",
> >              "queue_index": 0,
> >              "name": "received_desc",
> >              "value": 417776,
> >              "name": "completed_desc",
> >              "value": 417548
> >          }
> >      }
> > }
> >
> > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > ---
> >   drivers/vdpa/vdpa.c       | 129 ++++++++++++++++++++++++++++++++++++++
> >   include/linux/vdpa.h      |   5 ++
> >   include/uapi/linux/vdpa.h |   6 ++
> >   3 files changed, 140 insertions(+)
> >
> > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > index 2b75c00b1005..933466f61ca8 100644
> > --- a/drivers/vdpa/vdpa.c
> > +++ b/drivers/vdpa/vdpa.c
> > @@ -909,6 +909,74 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
> >   	return err;
> >   }
> >
> > +static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
> > +			       struct genl_info *info, u32 index)
> > +{
> > +	int err;
> > +
> > +	err = vdev->config->get_vendor_vq_stats(vdev, index, msg, info->extack);
> > +	if (err)
> > +		return err;
> > +
> > +	if (nla_put_u32(msg, VDPA_ATTR_DEV_QUEUE_INDEX, index))
> > +		return -EMSGSIZE;
> > +
> > +	return 0;
> > +}
> > +
> > +static int vendor_stats_fill(struct vdpa_device *vdev, struct sk_buff *msg,
> > +			     struct genl_info *info, u32 index)
> > +{
> > +	int err;
> > +
> > +	if (!vdev->config->get_vendor_vq_stats)
> > +		return -EOPNOTSUPP;
> > +
> > +	err = vdpa_fill_stats_rec(vdev, msg, info, index);
> > +	if (err)
> > +		return err;
> > +
> > +	return 0;
> > +}
> > +
> > +static int vdpa_dev_vendor_stats_fill(struct vdpa_device *vdev,
> > +				      struct sk_buff *msg,
> > +				      struct genl_info *info, u32 index)
> > +{
> > +	u32 device_id;
> > +	void *hdr;
> > +	int err;
> > +	u32 portid = info->snd_portid;
> > +	u32 seq = info->snd_seq;
> > +	u32 flags = 0;
> > +
> > +	hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags,
> > +			  VDPA_CMD_DEV_VSTATS_GET);
> > +	if (!hdr)
> > +		return -EMSGSIZE;
> > +
> > +	if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev))) {
> > +		err = -EMSGSIZE;
> > +		goto undo_msg;
> > +	}
> > +
> > +	device_id = vdev->config->get_device_id(vdev);
> > +	if (nla_put_u32(msg, VDPA_ATTR_DEV_ID, device_id)) {
> > +		err = -EMSGSIZE;
> > +		goto undo_msg;
> > +	}
> > +
> You seem to be missing VDPA_ATTR_DEV_NEGOTIATED_FEATURES in this function;
> otherwise I can't imagine how you can ensure the atomicity needed to infer
> queue_type for the control vq.
> And should we make sure VIRTIO_CONFIG_S_FEATURES_OK is set before calling
> into vendor_stats_fill()?
> 
> > +	err = vendor_stats_fill(vdev, msg, info, index);
> > +
> > +	genlmsg_end(msg, hdr);
> > +
> > +	return err;
> > +
> > +undo_msg:
> > +	genlmsg_cancel(msg, hdr);
> > +	return err;
> > +}
> > +
> >   static int vdpa_nl_cmd_dev_config_get_doit(struct sk_buff *skb, struct genl_info *info)
> >   {
> >   	struct vdpa_device *vdev;
> > @@ -990,6 +1058,60 @@ vdpa_nl_cmd_dev_config_get_dumpit(struct sk_buff *msg, struct netlink_callback *
> >   	return msg->len;
> >   }
> >
> > +static int vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb,
> > +					  struct genl_info *info)
> > +{
> > +	struct vdpa_device *vdev;
> > +	struct sk_buff *msg;
> > +	const char *devname;
> > +	struct device *dev;
> > +	u32 index;
> > +	int err;
> > +
> > +	if (!info->attrs[VDPA_ATTR_DEV_NAME])
> > +		return -EINVAL;
> > +
> > +	if (!info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX])
> > +		return -EINVAL;
> > +
> > +	devname = nla_data(info->attrs[VDPA_ATTR_DEV_NAME]);
> > +	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
> > +	if (!msg)
> > +		return -ENOMEM;
> > +
> > +	index = nla_get_u32(info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX]);
> > +	mutex_lock(&vdpa_dev_mutex);
> Given that stats_get() is a read-only operation that might happen quite
> often, I wonder if now is a good time to convert
> vdpa_dev_mutex to a semaphore?

Not sure I follow you. You still need the process that took the lock to be
the one that releases it, so in that respect we should still use a mutex.

Did you mean using a reader/writer lock?
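If the goal is to let read-only queries run concurrently, a reader/writer
lock could work; vdpa_dev_mutex would then become an rw_semaphore. A minimal
sketch for illustration only (the lock name and the conversion itself are
assumptions, not part of this patch):

	/* Hypothetical conversion, needs <linux/rwsem.h>: read-only query
	 * paths such as the vstats get take the lock for reading, while
	 * device add/remove paths take it for writing.
	 */
	static DECLARE_RWSEM(vdpa_dev_lock);

	/* in a read-only doit handler */
	down_read(&vdpa_dev_lock);
	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
	/* ... fill and send the reply as in the patch ... */
	up_read(&vdpa_dev_lock);

	/* in the paths that register/unregister vdpa devices */
	down_write(&vdpa_dev_lock);
	/* ... modify the device set ... */
	up_write(&vdpa_dev_lock);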

> 
> > +	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
> > +	if (!dev) {
> > +		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
> > +		err = -ENODEV;
> > +		goto dev_err;
> Missing nlmsg_free().
> > +	}
> > +	vdev = container_of(dev, struct vdpa_device, dev);
> > +	if (!vdev->mdev) {
> > +		NL_SET_ERR_MSG_MOD(info->extack, "unmanaged vdpa device");
> > +		err = -EINVAL;
> > +		goto mdev_err;
> Missing nlmsg_free().

Thanks, will fix.
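For reference, a minimal sketch of the error labels with the missing
nlmsg_free() added (label names kept from v3; the exact placement may end up
different):

	mdev_err:
		put_device(dev);
	dev_err:
		mutex_unlock(&vdpa_dev_mutex);
		nlmsg_free(msg);	/* msg was allocated above but never handed to genlmsg_reply() */
		return err;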

> 
> Otherwise looks fine.
> 
> Acked-by: Si-Wei Liu <si-wei.liu@oracle.com>
> 
> 
> -Siwei
> > +	}
> > +	err = vdpa_dev_vendor_stats_fill(vdev, msg, info, index);
> > +	if (!err)
> > +		err = genlmsg_reply(msg, info);
> > +
> > +	put_device(dev);
> > +	mutex_unlock(&vdpa_dev_mutex);
> > +
> > +	if (err)
> > +		nlmsg_free(msg);
> > +
> > +	return err;
> > +
> > +mdev_err:
> > +	put_device(dev);
> > +dev_err:
> > +	mutex_unlock(&vdpa_dev_mutex);
> > +	return err;
> > +}
> > +
> >   static const struct nla_policy vdpa_nl_policy[VDPA_ATTR_MAX + 1] = {
> >   	[VDPA_ATTR_MGMTDEV_BUS_NAME] = { .type = NLA_NUL_STRING },
> >   	[VDPA_ATTR_MGMTDEV_DEV_NAME] = { .type = NLA_STRING },
> > @@ -997,6 +1119,7 @@ static const struct nla_policy vdpa_nl_policy[VDPA_ATTR_MAX + 1] = {
> >   	[VDPA_ATTR_DEV_NET_CFG_MACADDR] = NLA_POLICY_ETH_ADDR,
> >   	/* virtio spec 1.1 section 5.1.4.1 for valid MTU range */
> >   	[VDPA_ATTR_DEV_NET_CFG_MTU] = NLA_POLICY_MIN(NLA_U16, 68),
> > +	[VDPA_ATTR_DEV_QUEUE_INDEX] = NLA_POLICY_RANGE(NLA_U32, 0, 65535),
> >   };
> >
> >   static const struct genl_ops vdpa_nl_ops[] = {
> > @@ -1030,6 +1153,12 @@ static const struct genl_ops vdpa_nl_ops[] = {
> >   		.doit = vdpa_nl_cmd_dev_config_get_doit,
> >   		.dumpit = vdpa_nl_cmd_dev_config_get_dumpit,
> >   	},
> > +	{
> > +		.cmd = VDPA_CMD_DEV_VSTATS_GET,
> > +		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
> > +		.doit = vdpa_nl_cmd_dev_stats_get_doit,
> > +		.flags = GENL_ADMIN_PERM,
> > +	},
> >   };
> >
> >   static struct genl_family vdpa_nl_family __ro_after_init = {
> > diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> > index 8943a209202e..48ed1fc00830 100644
> > --- a/include/linux/vdpa.h
> > +++ b/include/linux/vdpa.h
> > @@ -276,6 +276,9 @@ struct vdpa_config_ops {
> >   			    const struct vdpa_vq_state *state);
> >   	int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
> >   			    struct vdpa_vq_state *state);
> > +	int (*get_vendor_vq_stats)(struct vdpa_device *vdev, u16 idx,
> > +				   struct sk_buff *msg,
> > +				   struct netlink_ext_ack *extack);
> >   	struct vdpa_notification_area
> >   	(*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
> >   	/* vq irq is not expected to be changed once DRIVER_OK is set */
> > @@ -473,4 +476,6 @@ struct vdpa_mgmt_dev {
> >   int vdpa_mgmtdev_register(struct vdpa_mgmt_dev *mdev);
> >   void vdpa_mgmtdev_unregister(struct vdpa_mgmt_dev *mdev);
> >
> > +#define VDPA_INVAL_QUEUE_INDEX 0xffff
> > +
> >   #endif /* _LINUX_VDPA_H */
> > diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> > index 1061d8d2d09d..25c55cab3d7c 100644
> > --- a/include/uapi/linux/vdpa.h
> > +++ b/include/uapi/linux/vdpa.h
> > @@ -18,6 +18,7 @@ enum vdpa_command {
> >   	VDPA_CMD_DEV_DEL,
> >   	VDPA_CMD_DEV_GET,		/* can dump */
> >   	VDPA_CMD_DEV_CONFIG_GET,	/* can dump */
> > +	VDPA_CMD_DEV_VSTATS_GET,
> >   };
> >
> >   enum vdpa_attr {
> > @@ -46,6 +47,11 @@ enum vdpa_attr {
> >   	VDPA_ATTR_DEV_NEGOTIATED_FEATURES,	/* u64 */
> >   	VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,		/* u32 */
> >   	VDPA_ATTR_DEV_SUPPORTED_FEATURES,	/* u64 */
> > +
> > +	VDPA_ATTR_DEV_QUEUE_INDEX,              /* u32 */
> > +	VDPA_ATTR_DEV_VENDOR_ATTR_NAME,		/* string */
> > +	VDPA_ATTR_DEV_VENDOR_ATTR_VALUE,        /* u64 */
> > +
> >   	/* new attributes must be added above here */
> >   	VDPA_ATTR_MAX,
> >   };
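For completeness, a generic sketch of how a parent driver could implement the
new callback using the attributes added above. The function and variable names
are illustrative only, and the use of VDPA_ATTR_PAD for the 64-bit value is an
assumption; the real mlx5_vdpa implementation is in the following patch:

	static int example_get_vendor_vq_stats(struct vdpa_device *vdev, u16 idx,
					       struct sk_buff *msg,
					       struct netlink_ext_ack *extack)
	{
		/* In a real driver these would be read from the device for vq 'idx'. */
		u64 received_desc = 0;
		u64 completed_desc = 0;

		/* Each statistic is reported as a (name, value) attribute pair. */
		if (nla_put_string(msg, VDPA_ATTR_DEV_VENDOR_ATTR_NAME, "received_desc") ||
		    nla_put_u64_64bit(msg, VDPA_ATTR_DEV_VENDOR_ATTR_VALUE,
				      received_desc, VDPA_ATTR_PAD))
			return -EMSGSIZE;

		if (nla_put_string(msg, VDPA_ATTR_DEV_VENDOR_ATTR_NAME, "completed_desc") ||
		    nla_put_u64_64bit(msg, VDPA_ATTR_DEV_VENDOR_ATTR_VALUE,
				      completed_desc, VDPA_ATTR_PAD))
			return -EMSGSIZE;

		return 0;
	}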


Thread overview: 13+ messages
2022-05-02 10:21 [PATCH v3 0/2] Show statistics for a vdpa device Eli Cohen
2022-05-02 10:22 ` [PATCH v3 1/2] vdpa: Add support for querying vendor statistics Eli Cohen
2022-05-02 23:47   ` Si-Wei Liu
2022-05-03  5:13     ` Eli Cohen
2022-05-04  4:43       ` Si-Wei Liu
2022-05-04  5:44         ` Eli Cohen
2022-05-05  8:10           ` Jason Wang
2022-05-06  0:35           ` Si-Wei Liu
2022-05-08  7:44     ` Eli Cohen [this message]
2022-05-10 17:58       ` Si-Wei Liu
2022-05-11 10:07         ` Eli Cohen
2022-05-02 10:22 ` [PATCH v3 2/2] vdpa/mlx5: Add support for reading descriptor statistics Eli Cohen
2022-05-03  0:36   ` Si-Wei Liu
