netdev.vger.kernel.org archive mirror
From: Eli Cohen <elic@nvidia.com>
To: Jason Wang <jasowang@redhat.com>
Cc: <stephen@networkplumber.org>, <netdev@vger.kernel.org>,
	<lulu@redhat.com>, <si-wei.liu@oracle.com>
Subject: Re: [PATCH v2 4/4] vdpa: Support reading device features
Date: Mon, 21 Feb 2022 10:28:52 +0200
Message-ID: <20220221070416.GB45328@mtl-vdi-166.wap.labs.mlnx>
In-Reply-To: <b36e4a63-7fac-212a-2d6b-e267d49c5e72@redhat.com>

On Mon, Feb 21, 2022 at 12:18:03PM +0800, Jason Wang wrote:
> 
> On 2022/2/17 8:30 PM, Eli Cohen wrote:
> > When showing the available management devices, check if
> > VDPA_ATTR_DEV_SUPPORTED_FEATURES feature is available and print the
> > supported features for a management device.
> > 
> > Examples:
> > $ vdpa mgmtdev show
> > auxiliary/mlx5_core.sf.1:
> >    supported_classes net
> >    max_supported_vqs 257
> >    dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ MQ \
> >                 CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
> > 
> > $ vdpa -jp mgmtdev show
> > {
> >      "mgmtdev": {
> >          "auxiliary/mlx5_core.sf.1": {
> >              "supported_classes": [ "net" ],
> >              "max_supported_vqs": 257,
> >              "dev_features": [
> > "CSUM","GUEST_CSUM","MTU","HOST_TSO4","HOST_TSO6","STATUS","CTRL_VQ","MQ",\
> > "CTRL_MAC_ADDR","VERSION_1","ACCESS_PLATFORM" ]
> >          }
> >      }
> > }
> > 
> > Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
> > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > ---
> >   vdpa/include/uapi/linux/vdpa.h |  1 +
> >   vdpa/vdpa.c                    | 14 +++++++++++++-
> >   2 files changed, 14 insertions(+), 1 deletion(-)
> > 
> > diff --git a/vdpa/include/uapi/linux/vdpa.h b/vdpa/include/uapi/linux/vdpa.h
> > index a3ebf4d4d9b8..96ccbf305d14 100644
> > --- a/vdpa/include/uapi/linux/vdpa.h
> > +++ b/vdpa/include/uapi/linux/vdpa.h
> > @@ -42,6 +42,7 @@ enum vdpa_attr {
> >   	VDPA_ATTR_DEV_NEGOTIATED_FEATURES,	/* u64 */
> >   	VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,          /* u32 */
> > +	VDPA_ATTR_DEV_SUPPORTED_FEATURES,	/* u64 */
> >   	/* new attributes must be added above here */
> >   	VDPA_ATTR_MAX,
> > diff --git a/vdpa/vdpa.c b/vdpa/vdpa.c
> > index 78736b1422b6..bdc366880ab9 100644
> > --- a/vdpa/vdpa.c
> > +++ b/vdpa/vdpa.c
> > @@ -84,6 +84,7 @@ static const enum mnl_attr_data_type vdpa_policy[VDPA_ATTR_MAX + 1] = {
> >   	[VDPA_ATTR_DEV_MAX_VQ_SIZE] = MNL_TYPE_U16,
> >   	[VDPA_ATTR_DEV_NEGOTIATED_FEATURES] = MNL_TYPE_U64,
> >   	[VDPA_ATTR_DEV_MGMTDEV_MAX_VQS] = MNL_TYPE_U32,
> > +	[VDPA_ATTR_DEV_SUPPORTED_FEATURES] = MNL_TYPE_U64,
> >   };
> >   static int attr_cb(const struct nlattr *attr, void *data)
> > @@ -494,13 +495,14 @@ static void print_features(struct vdpa *vdpa, uint64_t features, bool mgmtdevf,
> >   static void pr_out_mgmtdev_show(struct vdpa *vdpa, const struct nlmsghdr *nlh,
> >   				struct nlattr **tb)
> >   {
> > +	uint64_t classes = 0;
> >   	const char *class;
> >   	unsigned int i;
> >   	pr_out_handle_start(vdpa, tb);
> >   	if (tb[VDPA_ATTR_MGMTDEV_SUPPORTED_CLASSES]) {
> > -		uint64_t classes = mnl_attr_get_u64(tb[VDPA_ATTR_MGMTDEV_SUPPORTED_CLASSES]);
> > +		classes = mnl_attr_get_u64(tb[VDPA_ATTR_MGMTDEV_SUPPORTED_CLASSES]);
> >   		pr_out_array_start(vdpa, "supported_classes");
> >   		for (i = 1; i < 64; i++) {
> > @@ -522,6 +524,16 @@ static void pr_out_mgmtdev_show(struct vdpa *vdpa, const struct nlmsghdr *nlh,
> >   		print_uint(PRINT_ANY, "max_supported_vqs", "  max_supported_vqs %d", num_vqs);
> >   	}
> > +	if (tb[VDPA_ATTR_DEV_SUPPORTED_FEATURES]) {
> > +		uint64_t features;
> > +
> > +		features  = mnl_attr_get_u64(tb[VDPA_ATTR_DEV_SUPPORTED_FEATURES]);
> > +		if (classes & BIT(VIRTIO_ID_NET))
> > +			print_features(vdpa, features, true, VIRTIO_ID_NET);
> > +		else
> > +			print_features(vdpa, features, true, 0);
> 
> 
> I wonder what happens if we try to read a virtio_blk device consider:
> 
> static const char * const *dev_to_feature_str[] = {
>         [VIRTIO_ID_NET] = net_feature_strs,
> };

In that case we will print bit_xx for each bit that is not a general feature bit.
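
For illustration, here is a minimal sketch of that fallback, assuming the printer walks the 64 feature bits and consults a per-class string table like the dev_to_feature_str[] quoted above; show_feature_bits() and the table contents are hypothetical, not the exact iproute2 code:

#include <stdint.h>
#include <stdio.h>

#define VIRTIO_ID_NET 1			/* virtio class id for net devices */

/* Per-class feature-name tables; only net is populated, mirroring the
 * quoted dev_to_feature_str[].  The entries shown are just a sample. */
static const char * const net_feature_strs[64] = {
	[0] = "CSUM",
	[1] = "GUEST_CSUM",
	/* ... */
};

static const char * const *dev_to_feature_str[] = {
	[VIRTIO_ID_NET] = net_feature_strs,
};

/* Hypothetical helper: print one name per set bit, falling back to a
 * generic bit_NN name when the class (e.g. virtio_blk) has no table or
 * no entry for that bit.  Handling of the general/transport bits that
 * the reply above refers to is omitted here. */
static void show_feature_bits(uint64_t features, unsigned int class_id)
{
	unsigned int n = sizeof(dev_to_feature_str) / sizeof(dev_to_feature_str[0]);
	const char * const *strs = class_id < n ? dev_to_feature_str[class_id] : NULL;
	unsigned int i;

	for (i = 0; i < 64; i++) {
		if (!(features & (1ULL << i)))
			continue;
		if (strs && strs[i])
			printf(" %s", strs[i]);
		else
			printf(" bit_%u", i);
	}
	printf("\n");
}

The effect is that classes without a string table still get readable, if generic, output rather than an error.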

> 
> Thanks
> 
> 
> > +	}
> > +
> >   	pr_out_handle_end(vdpa);
> >   }
> 
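
As a usage aside (not part of the thread), the JSON form shown in the commit message lends itself to scripting; assuming jq is installed, the feature list could be extracted with something like:

$ vdpa -j mgmtdev show | jq '.mgmtdev[].dev_features'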


Thread overview: 12+ messages
2022-02-17 12:30 [PATCH v2 0/4] vdpa tool enhancements Eli Cohen
2022-02-17 12:30 ` [PATCH v2 1/4] vdpa: Remove unsupported command line option Eli Cohen
2022-02-19  0:24   ` Si-Wei Liu
2022-02-17 12:30 ` [PATCH v2 2/4] vdpa: Allow for printing negotiated features of a device Eli Cohen
2022-02-19  0:31   ` Si-Wei Liu
2022-02-21  3:57   ` Jason Wang
2022-02-21 11:24     ` Eli Cohen
2022-02-17 12:30 ` [PATCH v2 3/4] vdpa: Support for configuring max VQ pairs for " Eli Cohen
2022-02-21  3:58   ` Jason Wang
2022-02-17 12:30 ` [PATCH v2 4/4] vdpa: Support reading device features Eli Cohen
2022-02-21  4:18   ` Jason Wang
2022-02-21  8:28     ` Eli Cohen [this message]
