From: Jason Wang <jasowang@redhat.com>
To: Si-Wei Liu <si-wei.liu@oracle.com>
Cc: "lvivier@redhat.com" <lvivier@redhat.com>,
	"mst@redhat.com" <mst@redhat.com>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
	"eperezma@redhat.com" <eperezma@redhat.com>,
	Eli Cohen <elic@nvidia.com>
Subject: Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues
Date: Thu, 23 Dec 2021 10:27:49 +0800
Message-ID: <CACGkMEvMAS1PspbRdL-0SHfGkkZLp-1AFQAwCkQPAiZeMzxAHw@mail.gmail.com>
In-Reply-To: <8e93cfc4-b71e-adc5-8b35-337523e3a431@oracle.com>

On Thu, Dec 23, 2021 at 3:25 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
>
>
> On 12/21/2021 11:54 PM, Eli Cohen wrote:
> > On Tue, Dec 21, 2021 at 11:29:36PM -0800, Si-Wei Liu wrote:
> >>
> >> On 12/21/2021 11:10 PM, Eli Cohen wrote:
> >>> On Wed, Dec 22, 2021 at 09:03:37AM +0200, Parav Pandit wrote:
> >>>>> From: Eli Cohen <elic@nvidia.com>
> >>>>> Sent: Wednesday, December 22, 2021 12:17 PM
> >>>>>
> >>>>>>>> --- a/drivers/vdpa/vdpa.c
> >>>>>>>> +++ b/drivers/vdpa/vdpa.c
> >>>>>>>> @@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct
> >>>>>>> vdpa_mgmt_dev *mdev, struct sk_buff *m
> >>>>>>>>                err = -EMSGSIZE;
> >>>>>>>>                goto msg_err;
> >>>>>>>>        }
> >>>>>>>> +      if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
> >>>>>>>> +                      mdev->max_supported_vqs))
> >>>>>>> It still needs a default value when the field is not explicitly
> >>>>>>> filled in by the driver.
> >>>>>>>
> >>>>>> Unlikely. This can be an optional field to help the user decide the device max limit.
> >>>>>> When max_supported_vqs is set to zero, vdpa should omit exposing it to user space.
> >>>>> This is not about what you expose to userspace. It's about the number of VQs
> >>>>> you want to create for a specific instance of vdpa.
> >>>> This value on the mgmtdev indicates that a given mgmt device supports creating a vdpa device that can have a maximum of N VQs.
> >>>> The user will choose to create a device with VQs <= N depending on its vcpu count and other factors.
> >>> You're right.
> >>> So each vendor needs to put there their value.
> >> If I understand Parav correctly, he was suggesting not to expose
> >> VDPA_ATTR_DEV_MGMTDEV_MAX_VQS to userspace if seeing (max_supported_vqs ==
> >> 0) from the driver.
> > I can see the reasoning, but maybe we should leave it as zero, which
> > means it was not reported. The user will then need to guess. I believe
> > other vendors will follow up with an update setting this to a real value.
> Unless you place a check in the vdpa core to enforce it on vdpa
> creation, it's very likely to get ignored by other vendors.
>
> >
> >> But meanwhile, I do wonder how users tell apart multiqueue supporting parent
> >> from the single queue mgmtdev without getting the aid from this field. I
> >> hope the answer won't be to create a vdpa instance to try.
> >>
> > Do you see a scenario where an admin decides not to instantiate vdpa just
> > because it does not support MQ?
> Yes, there is. If the hardware doesn't support MQ, the provisioning tool
> in the mgmt software will need to fall back to the software vhost backend
> with mq=on. At the time the tool performs this check, it doesn't run
> with root privilege.
>
> >
> > And if the management device reports that it does support MQ, there's still no
> > guarantee you'll end up with an MQ net device.
> I'm not sure I follow. Do you mean it may be up to the guest feature
> negotiation? But the device itself is still MQ capable, isn't it?

I think we need to clarify the "device" here.

For compatibility reasons, there could be a case where the mgmt software
doesn't expect an MQ-capable vdpa device. So in that case, even if the
parent is MQ capable, the vdpa device isn't.

Thanks

>
> Thanks,
> -Siwei
>
> >
> >
> >> -Siwei
> >>
> >>>> This is what is exposed to the user to decide the upper bound.
> >>>>>> There have been some talks/patches about a virtio rdma device.
> >>>>>> I anticipate such a device to support more than 64K queues by the nature of rdma.
> >>>>>> It is better to keep max_supported_vqs as u32.
> >>>>> Why not add it when we have it?
> >>>> Sure, with that approach we will end up adding two fields (the current u16 and later another u32) due to the smaller bit width of the current one.
> >>>> Either way is fine. Michael was suggesting a similar higher bit width in other patches, so I'm bringing it up here for this field to see how he views it.
> >>> I can use u32 then.
>

