From: Parav Pandit <parav@mellanox.com>
To: Alex Williamson <alex.williamson@redhat.com>,
Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>,
Jiri Pirko <jiri@resnulli.us>, David M <david.m.ertman@intel.com>,
"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
"davem@davemloft.net" <davem@davemloft.net>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
Saeed Mahameed <saeedm@mellanox.com>,
"kwankhede@nvidia.com" <kwankhede@nvidia.com>,
"leon@kernel.org" <leon@kernel.org>,
"cohuck@redhat.com" <cohuck@redhat.com>,
Jiri Pirko <jiri@mellanox.com>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
Or Gerlitz <gerlitz.or@gmail.com>,
"Jason Wang (jasowang@redhat.com)" <jasowang@redhat.com>
Subject: RE: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
Date: Fri, 8 Nov 2019 22:48:31 +0000 [thread overview]
Message-ID: <AM0PR05MB4866444210721BC4EE775D27D17B0@AM0PR05MB4866.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <20191108145210.7ad6351c@x1.home>
Hi Greg, Jason,
> -----Original Message-----
> From: Alex Williamson <alex.williamson@redhat.com>
>
> On Fri, 8 Nov 2019 17:05:45 -0400
> Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> > On Fri, Nov 08, 2019 at 01:34:35PM -0700, Alex Williamson wrote:
> > > On Fri, 8 Nov 2019 16:12:53 -0400
> > > Jason Gunthorpe <jgg@ziepe.ca> wrote:
> > >
> > > > On Fri, Nov 08, 2019 at 11:12:38AM -0800, Jakub Kicinski wrote:
> > > > > On Fri, 8 Nov 2019 15:40:22 +0000, Parav Pandit wrote:
> > > > > > > The new intel driver has been having a very similar
> > > > > > > discussion about how to model their 'multi function device'
> > > > > > > ie to bind RDMA and other drivers to a shared PCI function, and I
> > > > > > > think that discussion settled on adding a new bus?
> > > > > > >
> > > > > > > Really these things are all very similar, it would be nice
> > > > > > > to have a clear methodology on how to use the device core if
> > > > > > > a single PCI device is split by software into multiple
> > > > > > > different functional units and attached to different driver instances.
> > > > > > >
> > > > > > > Currently there is a lot of hacking in this area... And a
> > > > > > > consistent scheme might resolve the ugliness with the dma_ops
> > > > > > > wrappers.
> > > > > > >
> > > > > > > We already have the 'mfd' stuff to support splitting
> > > > > > > platform devices, maybe we need to create a 'pci-mfd' to support
> > > > > > > splitting PCI devices?
> > > > > > >
> > > > > > > I'm not really clear how mfd and mdev relate, I always
> > > > > > > thought mdev was strongly linked to vfio.
> > > > > > >
> > > > > >
> > > > > > Mdev at the beginning was strongly linked to vfio, but as I
> > > > > > mentioned above, it is now addressing more use cases.
> > > > > >
> > > > > > I observed that discussion, but was not sure about extending mdev
> > > > > > further.
> > > > > >
> > > > > > One way for the Intel drivers to do this is after series [9],
> > > > > > where the PCI driver says MDEV_CLASS_ID_I40_FOO and the RDMA
> > > > > > driver's mdev_register_driver() matches on it and does the
> > > > > > probe().
> > > > >
> > > > > Yup, FWIW to me the benefit of reusing mdevs for the Intel case vs
> > > > > muddying the purpose of mdevs is not a clear trade off.
> > > >
> > > > IMHO, mdev has an mdev_parent_ops structure clearly intended to link
> > > > it to vfio, so using a mdev for something not related to vfio
> > > > seems like a poor choice.
> > >
> > > Unless there's some opposition, I intend to queue this for v5.5:
> > >
> > > https://www.spinics.net/lists/kvm/msg199613.html
> > >
> > > mdev started out as tied to vfio, but at its core, it's just a
> > > device life cycle infrastructure with callbacks between bus drivers
> > > and vendor devices. If virtio is on the wrong path with the above
> > > series, please speak up. Thanks,
> >
> > Well, I think Greg just objected pretty strongly.
> >
> > IMHO it is wrong to turn mdev into some API multiplexor. That is what
> > the driver core already does and AFAIK your bus type is supposed to
> > represent your API contract to your drivers.
> >
> > Since the bus type is ABI, 'mdev' is really all about vfio I guess?
> >
> > Maybe mdev should grow by factoring the special GUID life cycle stuff
> > into a helper library that can make it simpler to build proper API
> > specific buses using that lifecycle model? i.e. the virtio I saw
> > proposed should probably be a mdev-virtio bus type providing this new
> > virtio API contract using a 'struct mdev_virtio'?
>
> I see, the bus:API contract is more clear when we're talking about physical
> buses and physical devices following a hardware specification.
> But if we take PCI for example, each PCI device has its own internal API that
> operates on the bus API. PCI bus drivers match devices based on vendor and
> device ID, which defines that internal API, not the bus API. The bus API is pretty
> thin when we're talking virtual devices and virtual buses though. The bus "API"
> is essentially that lifecycle management, so I'm having a bit of a hard time
> differentiating this from saying "hey, that PCI bus is nice, but we can't have
> drivers using their own API on the same bus, so can we move the config space,
> reset, hotplug, etc, stuff into helpers and come up with an (ex.) mlx5_bus
> instead?" Essentially for virtual devices, we're dictating a bus per device type,
> whereas it seemed like a reasonable idea at the time to create a common
> virtual device bus, but maybe it went into the weeds when trying to figure out
> how device drivers match to devices on that bus and actually interact with
> them.
>
> > I only looked briefly but mdev seems like an unusual way to use the
> > driver core. *generally* I would expect that if a driver wants to
> > provide a foo_device (on a foo bus, providing the foo API contract) it
> > looks very broadly like:
> >
> > struct foo_device {
> > struct device dev;
> > const struct foo_ops *ops;
> > };
> > struct my_foo_device {
> > struct foo_device fdev;
> > };
> >
> > foo_device_register(&mydev->fdev);
> >
If I understood Greg's direction on using a bus, and Jason's suggestion of the 'mdev-virtio' example:
the user has one of the three use cases I described in the cover letter,
i.e. create a sub device and configure it;
once it is configured,
map it to the right bus driver based on the use case:
1. mdev-vfio (no demux business)
2. virtio (new bus)
3. mlx5_bus (new bus)
So we should be creating 3 different buses, instead of the mdev bus being a de-multiplexer for them?
And hence, depending on the device flavour specified, create the device on the right bus?
For example,
$ devlink create subdev pci/0000:05:00.0 flavour virtio name foo subdev_id 1
$ devlink create subdev pci/0000:05:00.0 flavour mdev <uuid> subdev_id 2
$ devlink create subdev pci/0000:05:00.0 flavour mlx5 id 1 subdev_id 3
$ devlink subdev pci/0000:05:00.0/<subdev_id> config <params>
$ echo <respective_device_id> > <sysfs_path>/bind
Should we also implement power management callbacks on all of the above 3 buses?
And should mlx5_bus be abstracted out into a more generic virtual bus (vdev bus?) so that multiple vendors can reuse it?