Linux-RDMA Archive on lore.kernel.org
From: "Ertman, David M" <david.m.ertman@intel.com>
To: Jason Gunthorpe <jgg@ziepe.ca>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
Cc: "Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"dledford@redhat.com" <dledford@redhat.com>,
	"Ismail, Mustafa" <mustafa.ismail@intel.com>,
	"Patil, Kiran" <kiran.patil@intel.com>,
	"lee.jones@linaro.org" <lee.jones@linaro.org>
Subject: RE: [RFC 01/20] ice: Initialize and register multi-function device to provide RDMA
Date: Thu, 24 Oct 2019 22:25:36 +0000
Message-ID: <2B0E3F215D1AB84DA946C8BEE234CCC97B2E1D29@ORSMSX101.amr.corp.intel.com>
In-Reply-To: <20191024191037.GC23952@ziepe.ca>

> -----Original Message-----
> From: Jason Gunthorpe [mailto:jgg@ziepe.ca]
> Sent: Thursday, October 24, 2019 12:11 PM
> To: gregkh@linuxfoundation.org
> Cc: Ertman, David M <david.m.ertman@intel.com>; Nguyen, Anthony L
> <anthony.l.nguyen@intel.com>; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@intel.com>; netdev@vger.kernel.org; linux-
> rdma@vger.kernel.org; dledford@redhat.com; Ismail, Mustafa
> <mustafa.ismail@intel.com>; Patil, Kiran <kiran.patil@intel.com>
> Subject: Re: [RFC 01/20] ice: Initialize and register multi-function device to
> provide RDMA
> 
> On Thu, Oct 24, 2019 at 02:56:59PM -0400, gregkh@linuxfoundation.org wrote:
> > On Wed, Oct 23, 2019 at 03:01:09PM -0300, Jason Gunthorpe wrote:
> > > On Wed, Oct 23, 2019 at 05:55:38PM +0000, Ertman, David M wrote:
> > > > > Did any resolution happen here? Dave, do you know what to do to
> > > > > get Greg's approval?
> > > > >
> > > > > Jason
> > > >
> > > > This was the last communication that I saw on this topic.  I was
> > > > taking Greg's silence as "Oh ok, that works" :)  I hope I was not being too
> optimistic!
> > > >
> > > > If there is any outstanding issue I am not aware of it, but please
> > > > let me know if I am out of the loop!
> > > >
> > > > Greg, if you have any other concerns or questions I would be happy to
> address them!
> > >
> > > I was hoping to hear Greg say that taking a pci_device, feeding it
> > > to the multi-function-device stuff to split it to a bunch of
> > > platform_device's is OK, or that mfd should be changed somehow..
> >
> > Again, platform devices are ONLY for actual platform devices.  A PCI
> > device is NOT a platform device, sorry.
> 
> To be fair to David, IIRC, you did suggest mfd as the solution here some months
> ago, but I think you also said it might need some fixing
> :)
> 
> > If MFD needs to be changed to handle non-platform devices, fine, but
> > maybe what you really need to do here is make your own "bus" of
> > individual devices and have drivers for them, as you can't have a
> > "normal" PCI driver for these.
> 
> It does feel like MFD is the cleaner model here otherwise we'd have each
> driver making its own custom buses for its multi-function capability..
> 
> David, do you see some path to fix mfd to not use platform devices?
> 
> Maybe it needs a MFD bus type and a 'struct mfd_device' ?
> 
> I guess I'll drop these patches until it is sorted.
> 
> Jason


The original submission of the RDMA driver had separate drivers to
interact with the ice and i40e LAN drivers.  There were only about 2,000
lines of code different between them, so a request was (rightly) made
to unify them into a single RDMA driver.

Our original submission for IIDC had a "software bus" that the ice driver
created.  The problem is that, now that the RDMA driver is unified across
the ice and i40e drivers, each LAN driver would need to create its own
bus.  So we cannot have module dependencies for the irdma driver, as we
don't know which hardware the user will have installed in the system or
which drivers will be loaded in what order.  As new hardware is supported
(presumably by the same irdma driver), this will only get more complicated.
For instance, if the ice driver loads, then irdma, then i40e, irdma will
have no notice that i40e created a new bus that it needs to register with.
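
As a rough sketch of how MFD sidesteps that ordering problem (hypothetical
function and cell names; the real IIDC interface differs), each LAN driver
can register the same well-known cell from its PCI probe path:

```c
#include <linux/mfd/core.h>
#include <linux/pci.h>

static const struct mfd_cell ice_rdma_cell = {
	.name = "irdma",	/* well-known name the unified irdma driver binds to */
};

/* Hypothetical helper called from the LAN driver's PCI probe. */
static int ice_register_rdma(struct pci_dev *pdev)
{
	/*
	 * The cell becomes a platform_device child of the PCI device.
	 * The driver core handles matching, so it does not matter
	 * whether irdma loaded before or after this call.
	 */
	return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
			       &ice_rdma_cell, 1, NULL, 0, NULL);
}
```

With this, no LAN driver owns a private bus; registration order between
ice, i40e, and irdma stops mattering.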

Our original solution to this problem used netdev notifiers, which met with
resistance and the statement that the bus infrastructure was the proper way
to approach the interaction between the LAN driver and its peer.  That did
turn out to be a much more elegant way to approach the issue.

Direct access to the platform bus was unacceptable, and Greg suggested the
MFD sub-system as the solution.  Behind the scenes, the MFD sub-system uses
the platform bus as the base for its functionality, since it is a purely
software construct that is convenient and meets MFD's needs.  The question
then is: if the MFD sub-system uses the platform bus for all of its
background functionality, is the platform bus really only for platform
devices?  It seems the kernel is already using the platform bus as a
generic software-based bus, and it fills that role efficiently.
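
On the peer side, the consequence is that a single platform_driver whose
name matches the cell covers every LAN driver that registers it, whatever
the load order.  Again, a hedged sketch rather than the actual irdma probe
code:

```c
#include <linux/module.h>
#include <linux/platform_device.h>

static int irdma_probe(struct platform_device *pdev)
{
	/*
	 * Called whenever ice, i40e, or a future LAN driver adds an
	 * "irdma" MFD cell -- before or after this module loaded.
	 * The parent is the registering PCI device.
	 */
	dev_info(&pdev->dev, "bound to %s\n", dev_name(pdev->dev.parent));
	return 0;
}

static int irdma_remove(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver irdma_driver = {
	.probe	= irdma_probe,
	.remove	= irdma_remove,
	.driver	= {
		.name = "irdma",	/* must match the mfd_cell name */
	},
};
module_platform_driver(irdma_driver);

MODULE_LICENSE("GPL");
```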

Dave E.

