From: Jason Gunthorpe <jgg@nvidia.com>
To: Cornelia Huck <cohuck@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
	Max Gurtovoy <mgurtovoy@nvidia.com>, <kvm@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <liranl@nvidia.com>,
	<oren@nvidia.com>, <tzahio@nvidia.com>, <leonro@nvidia.com>,
	<yarong@nvidia.com>, <aviadye@nvidia.com>, <shahafs@nvidia.com>,
	<artemp@nvidia.com>, <kwankhede@nvidia.com>, <ACurrid@nvidia.com>,
	<gmataev@nvidia.com>, <cjia@nvidia.com>
Subject: Re: [PATCH RFC v1 0/3] Introduce vfio-pci-core subsystem
Date: Mon, 25 Jan 2021 14:04:40 -0400
Message-ID: <20210125180440.GR4147@nvidia.com>
In-Reply-To: <20210125172035.3b61b91b.cohuck@redhat.com>

On Mon, Jan 25, 2021 at 05:20:35PM +0100, Cornelia Huck wrote:

> I think you cut out an important part of Alex' comment, so let me
> repost it here:

Yes, I've already responded to this.

> I'm missing the bigger picture of how this api is supposed to work out,
> a driver with a lot of TODOs does not help much to figure out whether
> this split makes sense and is potentially useful for a number of use
> cases

The change to vfio-pci is essentially complete, ignoring some
additional cleanup. I'm not sure seeing the details of how some mlx5
driver uses it will be substantially more informative in deciding
whether it is useful for s390 or Intel.

As far as the API goes, it is clear to see:

> +/* Exported functions */
> +struct vfio_pci_device *vfio_create_pci_device(struct pci_dev *pdev,
> +               const struct vfio_device_ops *vfio_pci_ops, void *dd_data);
> +void vfio_destroy_pci_device(struct pci_dev *pdev);

Called from the probe/remove functions of the consuming driver.
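
So the consuming driver's probe/remove would be something like this
(untested sketch, the new_driver_* names are made up; I'm assuming an
ERR_PTR-style return from vfio_create_pci_device(), and the ops table
is the one shown further down):

static int new_driver_probe(struct pci_dev *pdev,
			    const struct pci_device_id *id)
{
	struct vfio_pci_device *vdev;

	/* Hand the device to vfio-pci-core; dd_data would carry the
	 * driver's private state, NULL here for brevity.
	 */
	vdev = vfio_create_pci_device(pdev, &new_driver_pci_ops, NULL);
	if (IS_ERR(vdev))
		return PTR_ERR(vdev);
	return 0;
}

static void new_driver_remove(struct pci_dev *pdev)
{
	vfio_destroy_pci_device(pdev);
}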

> +int vfio_pci_core_open(void *device_data);
> +void vfio_pci_core_release(void *device_data);
> +long vfio_pci_core_ioctl(void *device_data, unsigned int cmd,
> +               unsigned long arg);
> +ssize_t vfio_pci_core_read(void *device_data, char __user *buf, size_t count,
> +               loff_t *ppos);
> +ssize_t vfio_pci_core_write(void *device_data, const char __user *buf,
> +               size_t count, loff_t *ppos);
> +int vfio_pci_core_mmap(void *device_data, struct vm_area_struct *vma);
> +void vfio_pci_core_request(void *device_data, unsigned int count);
> +int vfio_pci_core_match(void *device_data, char *buf);

Called from vfio_device_ops, with the existing well-defined API of
the matching ops.
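
A driver that changes nothing just fills the ops table with the core
functions directly (again a sketch, the name is made up):

static const struct vfio_device_ops new_driver_pci_ops = {
	.name		= "new-driver-pci",	/* hypothetical name */
	.open		= vfio_pci_core_open,
	.release	= vfio_pci_core_release,
	.ioctl		= vfio_pci_core_ioctl,
	.read		= vfio_pci_core_read,
	.write		= vfio_pci_core_write,
	.mmap		= vfio_pci_core_mmap,
	.request	= vfio_pci_core_request,
	.match		= vfio_pci_core_match,
};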

> +int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn);
> +pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev,
> +               pci_channel_state_t state);
> +

Callbacks from the PCI core, API defined by the PCI subsystem.
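
These get wired into the driver's struct pci_driver in the usual way
(sketch; the id table is hypothetical):

static const struct pci_error_handlers new_driver_err_handlers = {
	.error_detected	= vfio_pci_core_aer_err_detected,
};

static struct pci_driver new_driver_pci_driver = {
	.name			= "new-driver-pci",
	.id_table		= new_driver_pci_table,	/* hypothetical */
	.probe			= new_driver_probe,
	.remove			= new_driver_remove,
	.sriov_configure	= vfio_pci_core_sriov_configure,
	.err_handler		= &new_driver_err_handlers,
};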

Notice there is no major new API exposed from vfio-pci, nor are
vfio-pci internals any more exposed than they used to be.

Except create/destroy, every single newly exported function was
already available via a function pointer someplace else in the
API. This is a key point, because it is very much NOT like the May
series.

Because the new driver sits in front of vfio-pci, owning the
vfio_device_ops, it introduces nearly nothing new. The May series put
the new driver after vfio-pci as some internalized sub-driver and
exposed a whole new API, with wrappers and callbacks to go along with it.

For instance, if a new driver wants to implement some new thing under
ioctl, like migration, it would do:

static long new_driver_ioctl(void *device_data, unsigned int cmd,
			     unsigned long arg)
{
	switch (cmd) {
	case NEW_THING:
		/* New functionality handled entirely in the new driver */
		return new_driver_thing();
	default:
		/* Everything else is passed through to the core */
		return vfio_pci_core_ioctl(device_data, cmd, arg);
	}
}

static const struct vfio_device_ops new_driver_pci_ops = {
	[...]
	.ioctl = new_driver_ioctl,
};

Simple enough: if you understand the above, then you also understand
what direction the mlx5 driver will go in.

This is also why it is clearly useful for a wide range of cases: a
driver can use as much or as little of the vfio-pci-core ops as it
wants. The driver doesn't get to reach into vfio-pci, but it can sit
in front of it and intercept the entire uAPI. That is very powerful.

> or whether mdev (even with its different lifecycle model) or a

I think it is appropriate to think of mdev as only the special
lifecycle model; it doesn't have any other functionality.

mdev's lifecycle model does nothing to solve the need to libraryize
vfio-pci.

> different vfio bus driver might be a better fit for the more

What is a "vfio bus driver"? Why would we need a bus here?

> involved cases. (For example, can s390 ISM fit here?)

I don't know what is special about ISM. What TODO do you think will
help answer that question?

Jason
