From: Jason Gunthorpe <jgg@ziepe.ca>
To: Jakub Kicinski <jakub.kicinski@netronome.com>
Cc: Parav Pandit <parav@mellanox.com>, Jiri Pirko <jiri@resnulli.us>,
	David M <david.m.ertman@intel.com>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Saeed Mahameed <saeedm@mellanox.com>,
	"kwankhede@nvidia.com" <kwankhede@nvidia.com>,
	"leon@kernel.org" <leon@kernel.org>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	Jiri Pirko <jiri@mellanox.com>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	Or Gerlitz <gerlitz.or@gmail.com>
Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
Date: Fri, 8 Nov 2019 16:12:53 -0400
Message-ID: <20191108201253.GE10956@ziepe.ca> (raw)
In-Reply-To: <20191108111238.578f44f1@cakuba>

On Fri, Nov 08, 2019 at 11:12:38AM -0800, Jakub Kicinski wrote:
> On Fri, 8 Nov 2019 15:40:22 +0000, Parav Pandit wrote:
> > > The new intel driver has been having a very similar discussion about how to
> > > model their 'multi function device', ie to bind RDMA and other drivers to a
> > > shared PCI function, and I think that discussion settled on adding a new bus?
> > >
> > > Really these things are all very similar, it would be nice to have a clear
> > > methodology on how to use the device core if a single PCI device is split by
> > > software into multiple different functional units and attached to different
> > > driver instances.
> > >
> > > Currently there is a lot of hacking in this area.. And a consistent scheme
> > > might resolve the ugliness with the dma_ops wrappers.
> > >
> > > We already have the 'mfd' stuff to support splitting platform devices, maybe
> > > we need to create a 'pci-mfd' to support splitting PCI devices?
> > > >
> > > > I'm not really clear how mfd and mdev relate, I always thought mdev was
> > > > strongly linked to vfio.
> > >
> > > Mdev at the beginning was strongly linked to vfio, but as I mentioned
> > > above it is addressing more use cases.
> > >
> > > I observed that discussion, but was not sure of extending mdev further.
> > >
> > > One way for Intel drivers to do it is after series [9],
> > > where the PCI driver says MDEV_CLASS_ID_I40_FOO, the
> > > RDMA driver mdev_register_driver() matches on it and does the probe().
> >
> > Yup, FWIW to me the benefit of reusing mdevs for the Intel case vs
> > muddying the purpose of mdevs is not a clear trade off.

IMHO, mdev has a mdev_parent_ops structure clearly intended to link it
to vfio, so using a mdev for something not related to vfio seems like a
poor choice.

I suppose this series is the start and we will eventually see mlx5's
mdev_parent_ops filled in to support vfio - but *right now* this looks
identical to the problem most of the RDMA capable net drivers have
splitting into a 'core' and a 'function'.

> IMHO MFD should be of more natural use for Intel, since it's about
> providing different functionality rather than virtual slices of the
> same device.

I don't think the 'different functionality' should matter much.

Generally these multi-function drivers are built around some common
'core' language like queues, interrupts, BAR space, etc., and then these
common things can be specialized into netdev, rdma, scsi, etc. So we see
a general rough design with a core layer managing the raw HW and then
drivers on top of that (including netdev) using that API.

The actual layering doesn't come through in the driver model; generally
people put all the core stuff in with the netdev and then try to shuffle
the netdev around as the 'handle' for that core API.
These SFs are pretty similar in that the core physical driver continues
to provide some software API support to the SF children (at least for
mlx it is a small API).

For instance mdev has no generic way to learn the BAR struct resources,
so there is some extra API around the side that does this - in this
series it is done by hackily co-opting the drvdata to something owned by
the struct device instead of the device_driver, and using that to access
the API surface on 'struct mlx5_sf *', which includes the BAR info and
so forth.

This is probably the main difference from MFD. At least the few drivers
I looked at did not try to expose a SW API from the 'core' to the
'part'; everything was the usual generic driver resource stuff.

Jason