From: Jason Gunthorpe <jgg@nvidia.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Alex Williamson <alex.williamson@redhat.com>,
	Max Gurtovoy <mgurtovoy@nvidia.com>,
	Alexey Kardashevskiy <aik@ozlabs.ru>,
	cohuck@redhat.com, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, liranl@nvidia.com, oren@nvidia.com,
	tzahio@nvidia.com, leonro@nvidia.com, yarong@nvidia.com,
	aviadye@nvidia.com, shahafs@nvidia.com, artemp@nvidia.com,
	kwankhede@nvidia.com, ACurrid@nvidia.com, cjia@nvidia.com,
	yishaih@nvidia.com, mjrosato@linux.ibm.com
Subject: Re: [PATCH 8/9] vfio/pci: export nvlink2 support into vendor vfio_pci drivers
Date: Mon, 22 Mar 2021 13:44:11 -0300
Message-ID: <20210322164411.GV2356281@nvidia.com>
In-Reply-To: <20210322151125.GA1051@lst.de>

On Mon, Mar 22, 2021 at 04:11:25PM +0100, Christoph Hellwig wrote:
> On Fri, Mar 19, 2021 at 05:07:49PM -0300, Jason Gunthorpe wrote:
> > The way the driver core works is to first match against the already
> > loaded driver list, then trigger an event for module loading; when
> > new drivers are registered, they bind to unbound devices.
> > 
> > So, the trouble is the detour through userspace, because the kernel
> > can't just go on to use vfio_pci until it knows userspace has failed
> > to satisfy the load request.
> > 
> > One answer is to give userspace udev the "hook" here: when a vfio
> > flavour modalias is requested for a PCI device, it swaps in vfio_pci
> > if it can't find an alternative.
> > 
> > The dream would be that a system with no vfio modules loaded could
> > do something like:
> > 
> >  echo "vfio" > /sys/bus/pci/xxx/driver_flavour
> > 
> > and a module would be loaded and a struct vfio_device created for
> > that device. Very easy for the user.
> 
> Maybe I did not communicate my suggestion last week very well.  My
> idea is that there are no separate pci_drivers for the vfio and
> non-vfio cases, but rather different personalities of the same
> driver.
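
As a rough sketch, the driver_flavour hook quoted above might look
something like this. It is entirely hypothetical: no driver_flavour
attribute exists today, and the "vfio-pci:" modalias format is
invented for illustration.

  static ssize_t driver_flavour_store(struct device *dev,
                                      struct device_attribute *attr,
                                      const char *buf, size_t count)
  {
          struct pci_dev *pdev = to_pci_dev(dev);

          if (!sysfs_streq(buf, "vfio"))
                  return -EINVAL;

          /* Detach whatever driver is currently bound */
          device_release_driver(dev);

          /*
           * Ask userspace to load a flavoured module; a udev rule
           * would fall back to binding plain vfio_pci if nothing
           * more specific claims the device.  The modalias format
           * here is invented for the sketch.
           */
          request_module("vfio-pci:v%08Xd%08X",
                         pdev->vendor, pdev->device);

          return count;
  }
  static DEVICE_ATTR_WO(driver_flavour);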

A single driver with different personalities isn't quite the scenario
that needs solving. Let's go back to Max's V1 posting:

The mlx5_vfio_pci.c pci_driver matches this:

+	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
+			 PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID) }, /* Virtio SNAP controllers */

This overlaps with the match table in
drivers/virtio/virtio_pci_common.c:

        { PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },

So, if we do as you propose, we would have to add something
Mellanox-specific to virtio_pci_common, which seems to me to just
repeat this whole problem in more drivers.
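
Spelling the overlap out: hand-expanding the two macros (per their
definitions in include/linux/pci.h) shows both entries match the same
VF. The driver core has no concept of one id_table entry being "more
specific" than another; roughly speaking, it just binds the device to
the first registered driver whose table matches.

  /* virtio_pci: any Red Hat/Qumranet device */
  { PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) }
  /* => .vendor = 0x1af4,      .device    = PCI_ANY_ID,
   *    .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID */

  /* mlx5_vfio_pci: device 0x1042 with a Mellanox subsystem vendor */
  { PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
                   PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID) }
  /* => .vendor = 0x1af4,  .device    = 0x1042,
   *    .subvendor = 0x15b3, .subdevice = PCI_ANY_ID */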

The general thing that is happening is that people are adding VM
migration capability to existing standard PCI interfaces like virtio,
NVMe, etc.

At least in this mlx5 situation, the PF driver provides the HW access
needed to do the migration, and the mlx5 vfio driver provides all the
protocol and machinery specific to the PCI standard being migrated.
Each standard is a little different.

But you could imagine some other implementation where the VF has an
extra non-standard BAR that serves as the migration control.

This is why I like having a fully standalone pci_driver: everyone
implementing this can provide the vfio_device that is appropriate for
their HW.
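
For illustration, the shape of such a standalone driver layered on
the vfio-pci-core library from this series. The vfio_pci_core_* and
mlx5_vfio_pci_* helper names below are illustrative placeholders, not
the series' actual API.

  static const struct pci_device_id mlx5_vfio_pci_table[] = {
          { PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
                           PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID) },
          { 0, }
  };

  static int mlx5_vfio_pci_probe(struct pci_dev *pdev,
                                 const struct pci_device_id *id)
  {
          struct vfio_pci_core_device *vdev;

          /*
           * All generic vfio-pci behaviour (config space, BARs,
           * interrupts) comes from the core library; only migration
           * is mlx5-specific.  Hypothetical helper name.
           */
          vdev = vfio_pci_core_create_device(pdev);
          if (IS_ERR(vdev))
                  return PTR_ERR(vdev);

          /* Hypothetical hook wiring migration to the mlx5 PF driver */
          mlx5_vfio_pci_init_migration(vdev);

          pci_set_drvdata(pdev, vdev);
          return 0;
  }

  static void mlx5_vfio_pci_remove(struct pci_dev *pdev)
  {
          /* Hypothetical teardown counterpart */
          vfio_pci_core_destroy_device(pci_get_drvdata(pdev));
  }

  static struct pci_driver mlx5_vfio_pci_driver = {
          .name     = "mlx5-vfio-pci",
          .id_table = mlx5_vfio_pci_table,
          .probe    = mlx5_vfio_pci_probe,
          .remove   = mlx5_vfio_pci_remove,
  };
  module_pci_driver(mlx5_vfio_pci_driver);

Because the id_table lives entirely in the vendor driver,
virtio_pci_common never has to learn anything Mellanox-specific, and
each vendor can wire its migration support to whatever HW interface
it actually has.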

Jason
