From: Alex Williamson <alex.williamson@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Max Gurtovoy <mgurtovoy@nvidia.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>, <kvm@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <liranl@nvidia.com>,
	<oren@nvidia.com>, <tzahio@nvidia.com>, <leonro@nvidia.com>,
	<yarong@nvidia.com>, <aviadye@nvidia.com>, <shahafs@nvidia.com>,
	<artemp@nvidia.com>, <kwankhede@nvidia.com>, <ACurrid@nvidia.com>,
	<gmataev@nvidia.com>, <cjia@nvidia.com>, <yishaih@nvidia.com>,
	<aik@ozlabs.ru>
Subject: Re: [PATCH 8/9] vfio/pci: use x86 naming instead of igd
Date: Tue, 2 Feb 2021 16:59:23 -0700	[thread overview]
Message-ID: <20210202165923.53f76901@omen.home.shazbot.org> (raw)
In-Reply-To: <20210202230604.GD4247@nvidia.com>

On Tue, 2 Feb 2021 19:06:04 -0400
Jason Gunthorpe <jgg@nvidia.com> wrote:

> On Tue, Feb 02, 2021 at 02:30:13PM -0700, Alex Williamson wrote:
> 
> > The first set of users already fail this specification though, we can't
> > base it strictly on device and vendor IDs, we need wildcards, class
> > codes, revision IDs, etc., just like any other PCI driver.  We're not
> > going to maintain a set of specific device IDs for the IGD
> > extension,  
> 
> The Intel GPU driver already has a include/drm/i915_pciids.h that
> organizes all the PCI match table entries, no reason why VFIO IGD
> couldn't include that too and use the same match table as the real GPU
> driver. Same HW right?

vfio-pci-igd support knows very little about the device, we're
effectively just exposing a firmware table and some of the host bridge
config space (read-only).  So the idea that the host kernel needs to
have updated i915 support in order to expose the device to userspace
with these extra regions is a bit silly.

> Also how sure are you that this loose detection is going to work with
> future Intel discrete GPUs that likely won't need vfio_igd?

Not at all, which is one more reason we don't want to rely on i915's
device table, which would likely support those as well.  We might only
want to bind to GPUs on the root complex, or at address 0000:00:02.0.
Our "what to reject" algorithm might need to evolve as those arrive,
but I don't think that means we need to explicitly list every device ID
either.
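
(As an aside, a userspace sketch of the kind of positional "what to
accept" check described above; is_igd_candidate, the fixed 0000:00:02.0
rule, and the class test are purely illustrative, not the actual
vfio-pci-igd probe logic:)

```python
# Illustrative only: accept a display-class device (PCI base class
# 0x03) sitting at the traditional Intel IGD address 0000:00:02.0.
# Real probe logic lives in the kernel and may evolve differently.

def is_igd_candidate(pci_addr: str, base_class: int) -> bool:
    """Return True for a display device at domain 0, bus 0, slot 2, fn 0."""
    domain, bus, devfn = pci_addr.split(":")
    slot, func = devfn.split(".")
    at_igd_slot = (int(domain, 16), int(bus, 16),
                   int(slot, 16), int(func, 16)) == (0, 0, 2, 0)
    return at_igd_slot and base_class == 0x03
```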

> > nor I suspect the NVLINK support as that would require a kernel update
> > every time a new GPU is released that makes use of the same interface.  
> 
> The nvlink device that required this special vfio code was a one
> off. Current devices do not use it. Not having an exact PCI ID match
> in this case is a bug.

AIUI, the quirk is only activated when there's a firmware table to
support it.  No firmware table, no driver bind, no need to use
explicit IDs.  Vendor and class code should be enough.
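
(To make the wildcard idea concrete, here is a userspace Python sketch
of how a kernel-style id_table entry keyed on vendor plus class code,
with PCI_ANY_ID for the device and any prog-if, would match; the table
entry and the device IDs below are hypothetical:)

```python
# Sketch of vendor + class-code matching, mimicking the semantics of a
# kernel struct pci_device_id entry with PCI_ANY_ID wildcards.
PCI_ANY_ID = 0xFFFFFFFF

# Hypothetical entry: any NVIDIA (0x10DE) device whose class is
# 3D controller (base/sub class 0x0302), any device ID, any prog-if.
match_table = [
    {"vendor": 0x10DE, "device": PCI_ANY_ID,
     "class": 0x030200, "class_mask": 0xFFFF00},
]

def pci_match(entry, vendor, device, dev_class):
    """Return True if a device matches one id_table entry."""
    return ((entry["vendor"] in (PCI_ANY_ID, vendor)) and
            (entry["device"] in (PCI_ANY_ID, device)) and
            (dev_class & entry["class_mask"]) == entry["class"])

def table_match(table, vendor, device, dev_class):
    return any(pci_match(e, vendor, device, dev_class) for e in table)
```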
 
> > As I understand Jason's reply, these vendor drivers would have an ids
> > table and a user could look at modalias for the device to compare to
> > the driver supported aliases for a match.  Does kmod already have this
> > as a utility outside of modprobe?  
> 
> I think this is worth exploring.
> 
> One idea that fits nicely with the existing infrastructure is to add
> to driver core a 'device mode' string. It would be "default" or "vfio"
> 
> devices in vfio mode only match vfio mode device_drivers.
> 
> devices in vfio mode generate a unique modalias string that includes
> some additional 'mode=vfio' identifier
> 
> drivers that run in vfio mode generate a module table string that
> includes the same mode=vfio
> 
> The driver core can trigger driver auto loading solely based on the
> mode string; it happens naturally.
> 
> All the existing udev, depmod/etc tooling will transparently work.
> 
> Like driver_override, but doesn't bypass all the ID and module loading
> parts of the driver core.
> 
> (But let's not get too far down this path until we can agree that
> embracing the driver core like the RFC contemplates is the agreed
> direction)

I'm not sure I fully follow the mechanics of this.  I'm interpreting
this as something like a sub-class of drivers where for example
vfio-pci class drivers would have a vfio-pci: alias prefix rather than
pci:.  There might be some sysfs attribute for the device that would
let the user write an alias prefix; would writing it trigger (for
example) pci-core to send remove uevents for the pci: modalias device
and add uevents for the vfio-pci: modalias device?  Some ordering rules
would then allow vendor/device modules to precede vfio-pci, which would
have only a wildcard id table?
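
(Roughly, in modalias terms, something like the following userspace
sketch; the vfio-pci: prefix, the ID values, and the alias patterns are
all hypothetical, matching modprobe-style with shell wildcards:)

```python
from fnmatch import fnmatchcase

# Hypothetical: the same device's modalias in normal mode and in
# "vfio mode", differing only in the alias prefix.
pci_modalias  = "pci:v000015B3d0000101Esv000015B3sd00000001bc02sc00i00"
vfio_modalias = "vfio-pci:v000015B3d0000101Esv000015B3sd00000001bc02sc00i00"

# Hypothetical driver alias patterns: a vendor driver with explicit
# vendor/device IDs, and plain vfio-pci with a wildcard-only table.
vendor_driver_alias = "vfio-pci:v000015B3d0000101Esv*sd*bc*sc*i*"
vfio_pci_alias      = "vfio-pci:v*d*sv*sd*bc*sc*i*"

def matching_aliases(modalias, patterns):
    """Return the alias patterns that match, modprobe-style."""
    return [p for p in patterns if fnmatchcase(modalias, p)]
```

Under this model both drivers match the vfio-mode modalias (ordering
rules would pick the vendor driver first), while neither matches the
plain pci: form.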

I need to churn on that for a while, but if driver-core folks are
interested, maybe it could be a good idea...  Thanks,

Alex


