From: Jason Gunthorpe <jgg@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Max Gurtovoy <mgurtovoy@nvidia.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>, <kvm@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <liranl@nvidia.com>,
	<oren@nvidia.com>, <tzahio@nvidia.com>, <leonro@nvidia.com>,
	<yarong@nvidia.com>, <aviadye@nvidia.com>, <shahafs@nvidia.com>,
	<artemp@nvidia.com>, <kwankhede@nvidia.com>, <ACurrid@nvidia.com>,
	<gmataev@nvidia.com>, <cjia@nvidia.com>, <yishaih@nvidia.com>,
	<aik@ozlabs.ru>
Subject: Re: [PATCH 8/9] vfio/pci: use x86 naming instead of igd
Date: Tue, 2 Feb 2021 19:06:04 -0400	[thread overview]
Message-ID: <20210202230604.GD4247@nvidia.com> (raw)
In-Reply-To: <20210202143013.06366e9d@omen.home.shazbot.org>

On Tue, Feb 02, 2021 at 02:30:13PM -0700, Alex Williamson wrote:

> The first set of users already fail this specification though, we can't
> base it strictly on device and vendor IDs, we need wildcards, class
> codes, revision IDs, etc., just like any other PCI driver.  We're not
> going to maintain a set of specific device IDs for the IGD
> extension,

The Intel GPU driver already has an include/drm/i915_pciids.h that
organizes all of its PCI match table entries; there is no reason why
VFIO IGD couldn't include that too and use the same match table as the
real GPU driver. Same HW, right?
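
For instance, a rough sketch of what reusing that header could look
like (the per-generation list macro names are recalled from memory and
only illustrative; passing 0 just leaves driver_data empty):

  /* Rough sketch: build the vfio igd match table from the same ID lists
   * the i915 driver uses, instead of a loose vendor/class match.
   * INTEL_SKL_IDS()/INTEL_KBL_IDS() stand in for the per-generation
   * macros in include/drm/i915_pciids.h. */
  #include <linux/module.h>
  #include <linux/pci.h>
  #include <drm/i915_pciids.h>

  static const struct pci_device_id vfio_igd_ids[] = {
          INTEL_SKL_IDS(0),
          INTEL_KBL_IDS(0),
          /* ... one line per supported generation ... */
          { }
  };
  MODULE_DEVICE_TABLE(pci, vfio_igd_ids);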

Also, how sure are you that this loose detection is going to work with
future Intel discrete GPUs that likely won't need vfio_igd?

> nor I suspect the NVLINK support as that would require a kernel update
> every time a new GPU is released that makes use of the same interface.

The nvlink device that required this special vfio code was a one-off;
current devices do not use it. Not having an exact PCI ID match in
this case is a bug.

> As I understand Jason's reply, these vendor drivers would have an ids
> table and a user could look at modalias for the device to compare to
> the driver supported aliases for a match.  Does kmod already have this
> as a utility outside of modprobe?

I think this is worth exploring.

One idea that fits nicely with the existing infrastructure is to add
a 'device mode' string to the driver core; it would be either
"default" or "vfio".

Devices in vfio mode only match vfio-mode device_drivers.

Devices in vfio mode generate a unique modalias string that includes
some additional 'mode=vfio' identifier.

Drivers that run in vfio mode generate a module table string that
includes the same mode=vfio.

The driver core can trigger driver auto-loading based solely on the
mode string, so it happens naturally.

All the existing udev, depmod, etc. tooling will transparently work.

Like driver_override, but it doesn't bypass the ID matching and module
loading parts of the driver core.
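
To make it a bit more concrete, something along these lines (purely
hypothetical, none of this exists today and the exact modalias format
would have to be worked out):

  /* Hypothetical sketch only - no such mode tag exists today.  A device
   * switched into vfio mode would report a modalias with an extra tag
   * appended (shown here as ":mode-vfio"), and a vfio-mode driver would
   * emit module aliases carrying the same tag, so the usual depmod/udev
   * machinery autoloads it like any other driver: */
  #include <linux/module.h>

  MODULE_ALIAS("pci:v000015B3d*sv*sd*bc*sc*i*:mode-vfio");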

(But let's not get too far down this path until we agree that
embracing the driver core, as the RFC contemplates, is the right
direction.)

> Seems like it would be embedded in the aliases for the module, with
> this explicit binding flag being the significant difference that
> prevents auto loading the device.  We still have one of the races that
> driver_override resolves though, the proposed explicit bind flag is on
> the driver not the device, so a native host driver being loaded due to
> a hotplug operation or independent actions of different admins could
> usurp the device between unbind of old driver and bind to new driver.

This is because sysfs doesn't have an atomic way to bind and rebind a
device; teaching 'bind' how to do that would also solve this problem.

> This seems unpredictable from a user perspective.  In either the igd or
> nvlink cases, if the platform features aren't available, the feature
> set of the device is reduced.  That's not apparent until the user tries
> to start interacting with the device if the device specific driver
> doesn't fail the probe.  Userspace policy would need to decide if a
> fallback driver is acceptable or the vendor specific driver failure is
> fatal. Thanks,

It matches today's behavior; if preserving that is a good idea, it can
be done without much effort.

I do prefer the explicitness because I think most use cases have a
user that requires the special driver to run. Explicitly binding the
required driver seems preferable.

Certainly nvlink and mlx5 should fail probe and not fall back to plain
vfio-pci. If the user wants plain vfio-pci, they should ask for it
explicitly. At least for the mlx5 case this is a completely reasonable
thing to do. I like that we can support this choice.
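
In code terms, something like this sketch (the helper names are made
up for illustration):

  /* Sketch only - helper names are hypothetical.  The variant driver
   * refuses the device when its extra functionality cannot work, rather
   * than silently degrading to plain vfio-pci behaviour. */
  #include <linux/pci.h>

  static int mlx5_vfio_pci_probe(struct pci_dev *pdev,
                                 const struct pci_device_id *id)
  {
          if (!mlx5_vfio_pci_migration_supported(pdev))  /* hypothetical */
                  return -ENODEV;  /* user can still bind vfio-pci explicitly */

          return mlx5_vfio_pci_register_device(pdev);    /* hypothetical */
  }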

I'm not so clear on IGD, especially how it would interact with future
discrete cards that probably don't need it. IMHO, it would be fine if
it were different for some good reason.

Jason


Thread overview: 57+ messages
2021-02-01 16:28 [PATCH v2 0/9] Introduce vfio-pci-core subsystem Max Gurtovoy
2021-02-01 16:28 ` [PATCH 1/9] vfio-pci: rename vfio_pci.c to vfio_pci_core.c Max Gurtovoy
2021-02-01 16:28 ` [PATCH 2/9] vfio-pci: introduce vfio_pci_core subsystem driver Max Gurtovoy
2021-02-01 16:28 ` [PATCH 3/9] vfio-pci-core: export vfio_pci_register_dev_region function Max Gurtovoy
2021-02-01 16:28 ` [PATCH 4/9] mlx5-vfio-pci: add new vfio_pci driver for mlx5 devices Max Gurtovoy
2021-02-01 16:28 ` [PATCH 5/9] vfio-pci/zdev: remove unused vdev argument Max Gurtovoy
2021-02-01 17:27   ` Matthew Rosato
2021-02-02  7:57   ` Cornelia Huck
2021-02-02 17:21     ` Alex Williamson
2021-02-01 16:28 ` [PATCH 6/9] vfio-pci/zdev: fix possible segmentation fault issue Max Gurtovoy
2021-02-01 16:52   ` Cornelia Huck
2021-02-01 17:08     ` Matthew Rosato
2021-02-01 20:47       ` Alex Williamson
2021-02-02  7:58         ` Cornelia Huck
2021-02-01 16:28 ` [PATCH 7/9] vfio/pci: use s390 naming instead of zdev Max Gurtovoy
2021-02-01 16:28 ` [PATCH 8/9] vfio/pci: use x86 naming instead of igd Max Gurtovoy
2021-02-01 17:14   ` Cornelia Huck
2021-02-01 17:49     ` Matthew Rosato
2021-02-01 18:42       ` Alex Williamson
2021-02-02 16:06         ` Cornelia Huck
2021-02-02 17:10           ` Jason Gunthorpe
2021-02-11 15:47             ` Max Gurtovoy
2021-02-11 16:29               ` Matthew Rosato
2021-02-11 17:39                 ` Cornelia Huck
2021-02-02 17:41           ` Max Gurtovoy
2021-02-02 17:54             ` Alex Williamson
2021-02-02 18:50               ` Jason Gunthorpe
2021-02-02 18:55                 ` Christoph Hellwig
2021-02-02 19:05                   ` Jason Gunthorpe
2021-02-02 19:37                 ` Alex Williamson
2021-02-02 20:44                   ` Jason Gunthorpe
2021-02-02 20:59                     ` Max Gurtovoy
2021-02-02 21:30                       ` Alex Williamson
2021-02-02 23:06                         ` Jason Gunthorpe [this message]
2021-02-02 23:59                           ` Alex Williamson
2021-02-03 13:54                             ` Jason Gunthorpe
2021-02-11  8:47                               ` Christoph Hellwig
2021-02-11 14:30                                 ` Jason Gunthorpe
2021-02-11  8:44                             ` Christoph Hellwig
2021-02-11 19:43                               ` Alex Williamson
     [not found]             ` <806c138e-685c-0955-7c15-93cb1d4fe0d9@ozlabs.ru>
2021-02-03 16:07               ` Max Gurtovoy
     [not found]                 ` <83ef0164-6291-c3d1-0ce5-2c9d6c97469e@ozlabs.ru>
2021-02-04 12:51                   ` Jason Gunthorpe
2021-02-05  0:42                     ` Alexey Kardashevskiy
2021-02-08 12:44                       ` Max Gurtovoy
2021-02-09  1:55                         ` Alexey Kardashevskiy
2021-02-08 18:13                       ` Jason Gunthorpe
2021-02-09  1:51                         ` Alexey Kardashevskiy
2021-02-04  9:12               ` Max Gurtovoy
2021-02-11  8:50                 ` Christoph Hellwig
2021-02-11 14:49                   ` Jason Gunthorpe
2021-02-01 16:28 ` [PATCH 9/9] vfio/pci: use powernv naming instead of nvlink2 Max Gurtovoy
2021-02-01 18:35   ` Jason Gunthorpe
2021-02-10  7:52 ` [PATCH v2 0/9] Introduce vfio-pci-core subsystem Tian, Kevin
2021-02-10 13:34   ` Jason Gunthorpe
2021-02-10 16:37     ` Alex Williamson
2021-02-10 17:08       ` Jason Gunthorpe
2021-02-11  8:36     ` Christoph Hellwig
