From: Jason Wang <jasowang@redhat.com>
To: Zhu Lingshan <lingshan.zhu@intel.com>, mst@redhat.com
Cc: virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, parav@nvidia.com,
xieyongji@bytedance.com, gautam.dawar@amd.com
Subject: Re: [PATCH V3 2/6] vDPA/ifcvf: support userspace to query features and MQ of a management device
Date: Mon, 4 Jul 2022 12:43:43 +0800 [thread overview]
Message-ID: <c602c6c3-b38a-9543-2bb5-03be7d99fef3@redhat.com> (raw)
In-Reply-To: <20220701132826.8132-3-lingshan.zhu@intel.com>
On 2022/7/1 21:28, Zhu Lingshan wrote:
> Adapting to the current netlink interface, this commit allows userspace
> to query the feature bits and MQ capability of a management device.
>
> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
> ---
> drivers/vdpa/ifcvf/ifcvf_base.c | 12 ++++++++++++
> drivers/vdpa/ifcvf/ifcvf_base.h | 1 +
> drivers/vdpa/ifcvf/ifcvf_main.c | 3 +++
> 3 files changed, 16 insertions(+)
>
> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
> index fb957b57941e..7c5f1cc93ad9 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_base.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_base.c
> @@ -346,6 +346,18 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num)
> return 0;
> }
>
> +u16 ifcvf_get_max_vq_pairs(struct ifcvf_hw *hw)
> +{
> + struct virtio_net_config __iomem *config;
> + u16 val, mq;
> +
> + config = hw->dev_cfg;
> + val = vp_ioread16((__le16 __iomem *)&config->max_virtqueue_pairs);
> + mq = le16_to_cpu((__force __le16)val);
> +
> + return mq;
> +}
> +
> static int ifcvf_hw_enable(struct ifcvf_hw *hw)
> {
> struct virtio_pci_common_cfg __iomem *cfg;
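As an aside for readers outside the kernel tree: max_virtqueue_pairs lives in virtio-net config space as a little-endian 16-bit field, which is why the hunk above reads it through vp_ioread16() and then converts it. A minimal userspace sketch of the same byte-order handling (le16_load is an illustrative helper, not a kernel API):

```c
#include <stdint.h>

/* Portable little-endian 16-bit load. Virtio config space fields such as
 * max_virtqueue_pairs are little-endian on the wire; the kernel uses
 * vp_ioread16()/le16_to_cpu() helpers for this, here expressed with plain
 * byte arithmetic so it yields the same value on any host endianness. */
static uint16_t le16_load(const uint8_t *p)
{
	return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}
```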
> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
> index f5563f665cc6..d54a1bed212e 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_base.h
> +++ b/drivers/vdpa/ifcvf/ifcvf_base.h
> @@ -130,6 +130,7 @@ u64 ifcvf_get_hw_features(struct ifcvf_hw *hw);
> int ifcvf_verify_min_features(struct ifcvf_hw *hw, u64 features);
> u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid);
> int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num);
> +u16 ifcvf_get_max_vq_pairs(struct ifcvf_hw *hw);
> struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw);
> int ifcvf_probed_virtio_net(struct ifcvf_hw *hw);
> u32 ifcvf_get_config_size(struct ifcvf_hw *hw);
> diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
> index 0a5670729412..3ff7096d30f1 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_main.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_main.c
> @@ -791,6 +791,9 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
> vf->hw_features = ifcvf_get_hw_features(vf);
> vf->config_size = ifcvf_get_config_size(vf);
>
> + ifcvf_mgmt_dev->mdev.max_supported_vqs = ifcvf_get_max_vq_pairs(vf);
Do we want #qps or #queues here? ifcvf_get_max_vq_pairs() returns queue
pairs, while the field name suggests the total number of queues.

FYI, vp_vdpa did:

drivers/vdpa/virtio_pci/vp_vdpa.c: mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);
Thanks
> + ifcvf_mgmt_dev->mdev.supported_features = vf->hw_features;
> +
> adapter->vdpa.mdev = &ifcvf_mgmt_dev->mdev;
> ret = _vdpa_register_device(&adapter->vdpa, vf->nr_vring);
> if (ret) {
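For reference, the reason the #qps-vs-#queues distinction matters: a virtio-net device with N queue pairs exposes 2*N data virtqueues, plus one control virtqueue when VIRTIO_NET_F_CTRL_VQ is negotiated. A minimal sketch of that relationship (net_num_queues is an illustrative name, not an existing helper):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only -- not the driver's API. For virtio-net, each queue
 * pair is one RX plus one TX virtqueue; VIRTIO_NET_F_CTRL_VQ adds a
 * single control virtqueue on top of the data queues. */
static uint16_t net_num_queues(uint16_t max_qps, bool has_ctrl_vq)
{
	uint16_t num = (uint16_t)(2 * max_qps);

	if (has_ctrl_vq)
		num += 1;
	return num;
}
```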