From: Jason Gunthorpe <jgg@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Yishai Hadas <yishaih@nvidia.com>,
	mst@redhat.com, jasowang@redhat.com, kvm@vger.kernel.org,
	virtualization@lists.linux-foundation.org, parav@nvidia.com,
	feliu@nvidia.com, jiri@nvidia.com, kevin.tian@intel.com,
	joao.m.martins@oracle.com, si-wei.liu@oracle.com,
	leonro@nvidia.com, maorg@nvidia.com
Subject: Re: [PATCH V1 vfio 9/9] vfio/virtio: Introduce a vfio driver over virtio devices
Date: Wed, 18 Oct 2023 16:28:48 -0300
Message-ID: <20231018192848.GE3952@nvidia.com>
In-Reply-To: <20231018122925.3fde9405.alex.williamson@redhat.com>

On Wed, Oct 18, 2023 at 12:29:25PM -0600, Alex Williamson wrote:

> > I think this should be configured when the VF is provisioned. If the
> > user does not want legacy IO bar support then the VFIO VF function
> > should not advertise the capability, and they won't get driver
> > support.
> > 
> > I think that is a very reasonable way to approach this - it is how we
> > approached similar problems for mlx5. The provisioning interface is
> > what "profiles" the VF, regardless of if VFIO is driving it or not.
> 
> It seems like a huge assumption that every device is going to allow
> this degree of specification in provisioning VFs.  mlx5 is a vendor
> specific driver, it can make such assumptions in design philosophy.

I don't think it is a huge assumption. Some degree of configuration
is already mandatory just to get basic functionality, and it isn't
like virtio can really be a fully fixed HW implementation on the
control plane.

So the assumption is that some device SW that already must exist, and
already must be configurable, just gains one more bit of
configuration. It does not seem like a big assumption to me at all.

Regardless, if we set an architecture/philosophy from the kernel side,
vendors will align to it.
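
To make that concrete, here is a rough sketch of how a variant
driver's probe could key off the provisioned capability. The names
are illustrative, loosely following the admin command APIs this
series adds, not the exact code:

#include <linux/pci.h>
#include <linux/vfio_pci_core.h>

/* Illustrative only: gate the legacy IO emulation on how the VF was
 * provisioned.  virtio_pci_admin_has_legacy_io() stands in for
 * whatever query the admin virtqueue APIs end up providing.
 */
static int virtiovf_pci_probe(struct pci_dev *pdev,
                              const struct pci_device_id *id)
{
        const struct vfio_device_ops *ops = &virtiovf_vfio_pci_core_ops;

        /* Only a VF profiled with legacy IO support gets the
         * transitional device emulation; anything else behaves
         * exactly like plain vfio-pci.
         */
        if (pdev->is_virtfn && virtio_pci_admin_has_legacy_io(pdev))
                ops = &virtiovf_vfio_pci_tran_ops;

        /* ... allocate the core device and register it with @ops ... */
        return 0;
}

If the VF was not profiled for legacy IO, the capability simply never
shows up to VFIO and there is nothing to opt out of.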

> > The same argument is going to come with live migration. This same
> > driver will still bind and enable live migration if the virtio
> > function is profiled to support it. If you don't want that in your
> > system then don't profile the VF for migration support.
> 
> What in the virtio or SR-IOV spec requires a vendor to make this
> configurable?

The same part that describes how to make live migration work :)

> So nothing here is really "all in one place", it may be in the
> provisioning of the VF, outside of the scope of the host OS, it might
> be a collection of scripts or operators with device or interface
> specific tooling to configure the device.  Sometimes this configuration
> will be before the device is probed by the vfio-pci variant driver,
> sometimes in between probing and opening the device.

We don't have any in-tree examples of configuration between probing
and opening - I'd like to keep it that way.

> I don't see why it becomes out of scope if the variant driver itself
> provides some means for selecting a device profile.  We have evidence
> both from mdev vGPUs and here (imo) that we can expect to see that
> behavior, so why wouldn't we want to attempt some basic shared
> interface for variant drivers to implement for selecting such a profile
> rather than add to this hodgepodge?

The GPU profiling approach is an artifact of the mdev sysfs. I do not
expect us to actually do this in tree. The function should be profiled
before it reaches VFIO, not after. This is often necessary anyhow
because in almost all cases the function can also be bound to an
in-kernel driver.

Consistently following this approach prevents future problems where we
end up with different ways to profile/provision functions depending on
what driver is attached (vfio/in-kernel). That would be a mess.

> > > Another obvious option is sysfs, where we might imagine an optional
> > > "profiles" directory, perhaps under vfio-dev.  Attributes of
> > > "available" and "current" could allow discovery and selection of a
> > > profile similar to mdev types.  
> > 
> > IMHO it is far too complex a problem for sysfs.
> 
> Isn't it then just like devlink, not a silver bullet, but useful for
> some configuration? 

Yes, but that accepts the premise that configuration and provisioning
should happen on the VFIO side at all, which I think is not a good
direction.

> AIUI, devlink shot down a means to list available
> profiles for a device and a means to select one of those profiles.

And other things, yes.

> There are a variety of attributes in sysfs which perform this sort of
> behavior.  Specifying a specific profile in sysfs can be difficult, and
> I'm not proposing sysfs profile support as a mandatory feature, but I'm
> also not a fan of the vendor specific sysfs approach that out of tree
> drivers have taken.

It is my belief we are going to have to build some good general
infrastructure to support SIOV. The actions to spawn, provision and
activate a SIOV function should go through generic infrastructure of
some kind. We have already been through a precursor to all this with
mlx5's devlink infrastructure for SFs (which are basically SIOV
functions), so we have pretty deep experience now.
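
As a thought experiment, the layering could look something like this.
These are purely hypothetical names, sketched to show where
provisioning sits relative to driver binding, not an existing kernel
interface:

/* Hypothetical provisioning layer, illustration only.  The flow
 * mirrors the mlx5 SF lifecycle via devlink: spawn the function,
 * profile it, activate it, and only then bind a driver (in-kernel
 * or VFIO) to consume it.
 */
struct fn_profile {
        bool legacy_io;         /* expose the transitional IO BAR */
        bool live_migration;    /* provision migration support */
};

int fn_spawn(struct device *parent, u32 *fn_id);
int fn_profile_set(struct device *parent, u32 fn_id,
                   const struct fn_profile *profile);
/* After activation the function appears on its bus and can be bound */
int fn_activate(struct device *parent, u32 fn_id);

The ordering is the point: by the time VFIO (or virtio-net, or
anything else) probes the function, its profile is already fixed.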

mdev mushed all those steps into VFIO, but they belong in different
layers. SIOV devices are not going to be exclusively consumed by VFIO.

If we have such a layer then it would be possible to also configure
VFIO "through the back door" of the provisioning layer in the kernel.

I think that is the closest we can get to some kind of generic API
here. The trouble is that it will not actually be generic because
provisioning is not generic or standardized. It doesn't eliminate the
need for having a user space driver component that actually
understands exactly what to do in order to fully provision something.

I don't know what to say about that from a libvirt perspective. How
does that world imagine provisioning network and storage functions?
All I know is that at the openshift level it is done with operators
(aka user space drivers).

> The mdev type interface is certainly not perfect, but from it we've
> been able to develop mdevctl to allow persistent and complex
> configurations of mdev devices.  I'd like to see the ability to do
> something like that with variant drivers that offer multiple profiles
> without always depending on vendor specific interfaces.

I think profiles are too narrow an abstraction to be that broadly
useful beyond simple device types. Given the device variety we already
have, I don't know if there is an alternative to a user space driver
to manage provisioning. Indeed, that is how we see our actual
deployments already.

IOW I'm worried we would invest a lot of effort in VFIO profiling for
little return.

Jason
