netdev.vger.kernel.org archive mirror
From: Tomas Winkler <tomasw@gmail.com>
To: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
Cc: "Ertman, David M" <david.m.ertman@intel.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"dledford@redhat.com" <dledford@redhat.com>,
	"Ismail, Mustafa" <mustafa.ismail@intel.com>,
	"Patil, Kiran" <kiran.patil@intel.com>,
	"lee.jones@linaro.org" <lee.jones@linaro.org>
Subject: Re: [RFC 01/20] ice: Initialize and register multi-function device to provide RDMA
Date: Thu, 31 Oct 2019 08:42:06 +0100
Message-ID: <CA+i0qc4pcxT6L9G-RGL6VYGDTYXZ4PSyw=sDgq8_+=jbs1E83A@mail.gmail.com>
In-Reply-To: <20191026185338.GA804892@kroah.com>

> > > -----Original Message-----
> > > From: gregkh@linuxfoundation.org [mailto:gregkh@linuxfoundation.org]
> > > Sent: Thursday, October 24, 2019 6:31 PM
> > > To: Ertman, David M <david.m.ertman@intel.com>
> > > Cc: Jason Gunthorpe <jgg@ziepe.ca>; Nguyen, Anthony L
> > > <anthony.l.nguyen@intel.com>; Kirsher, Jeffrey T
> > > <jeffrey.t.kirsher@intel.com>; netdev@vger.kernel.org; linux-
> > > rdma@vger.kernel.org; dledford@redhat.com; Ismail, Mustafa
> > > <mustafa.ismail@intel.com>; Patil, Kiran <kiran.patil@intel.com>;
> > > lee.jones@linaro.org
> > > Subject: Re: [RFC 01/20] ice: Initialize and register multi-function device to
> > > provide RDMA
> > >
> > > On Thu, Oct 24, 2019 at 10:25:36PM +0000, Ertman, David M wrote:
> > > > The direct access of the platform bus was unacceptable, and the MFD
> > > > sub-system was suggested by Greg as the solution.  The MFD sub-system
> > > > uses the platform bus in the background as a base to perform its
> > > > functions, since it is a purely software construct that is handy and
> > > > fulfills its needs.  The question then is: if the MFD sub-system is
> > > > using the platform bus for all of its background functionality, is
> > > > the platform bus really only for platform devices?
> > >
> > > Yes, how many times do I have to keep saying this?
> > >
> > > The platform bus should ONLY be used for devices that are actually platform
> > > devices and can not be discovered any other way and are not on any other type
> > > of bus.
> > >
> > > If you try to add platform devices for a PCI device, I am going to continue to
> > > complain.  I keep saying this and am getting tired.
> > >
> > > Now yes, MFD does do "fun" things here, and that should probably be fixed up
> > > one of these days.  But I still don't see why a real bus would not work for you.
> > >
> > > greg "platform devices are dead, long live the platform device" k-h
> >
> > I'm sorry, the last thing I want to do is to annoy you! I just need to
> > figure out where to go from here.  Please, don't take anything I say as
> > argumentative.
> >
> > I don't understand what you mean by "a real bus".  The irdma driver does
> > not have access to any physical bus.  It utilizes resources provided by
> > the PCI LAN drivers, but to receive those resources it needs a mechanism
> > to "hook up" with the PCI drivers.  The only way it can locate them is
> > to register a driver function with a software-based bus of some kind and
> > have the bus match it to a compatible entity to achieve that hook-up.
> >
> > The PCI LAN driver has a function that controls the PCI hardware, and we
> > want it to present an entity for the RDMA driver to connect to.
> >
> > To move forward, we are thinking of the following design proposal:
> >
> > We could add a new module to the kernel named generic_bus.ko.  This would
> > create a new generic software bus and a set of APIs that would allow for
> > adding and removing simple generic virtual devices and drivers, not as
> > an MFD cell or a platform device.  The power management events (suspend,
> > resume, shutdown) would also be handled by the generic_bus
> > infrastructure.  We would use this for matching by having the irdma
> > driver register with this generic bus and hook up to virtual devices
> > added by the different PCI LAN drivers.
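> >
> > As a rough sketch of the sort of API we have in mind (every name here
> > is hypothetical, since generic_bus.ko does not exist yet):
> >
> > 	/* PCI LAN driver side: expose an RDMA-capable entity. */
> > 	struct generic_device gdev = {
> > 		.name = "intel_rdma",	/* placeholder match name */
> > 	};
> > 	int err = generic_device_register(&gdev);
> >
> > 	/* irdma driver side: the generic bus matches on name and
> > 	 * calls probe() with the device, regardless of load order. */
> > 	struct generic_driver gdrv = {
> > 		.name	= "intel_rdma",
> > 		.probe	= irdma_probe,
> > 		.remove	= irdma_remove,
> > 	};
> > 	err = generic_driver_register(&gdrv);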
> >
> > Pros:
> > 1) Avoids attaching anything to the platform bus
> > 2) Avoids having each PCI LAN driver create its own software bus
> > 3) Provides a common matching ground for generic devices and drivers,
> > eliminating the problems caused by load order (all depend on generic_bus.ko)
> > 4) Usable by any other entity that wants a lightweight matching system
> > or information exchange mechanism
> >
> > Cons:
> > 1) Duplicates part of the platform bus functionality
> > 2) Adds a new software bus to the kernel architecture
> >
> > Is this path forward acceptable?
>
> Yes, that is much better.  But how about calling it a "virtual bus"?
> It's not really virtualization, but we already have virtual devices
> today: look in sysfs at the devices that are created without being
> associated with any specific bus.  So this could take those over quite
> nicely!  Look at how /sys/devices/virtual/ works for specifics; you
> could create a new virtual bus with a specific "name" and then add
> devices to that bus directly.
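>
> Something like this, roughly (only a sketch of the driver-core side;
> the bus name and the match policy are up to you):
>
> 	static int virtual_bus_match(struct device *dev,
> 				     struct device_driver *drv)
> 	{
> 		/* simplest policy: bind on exact name match */
> 		return !strcmp(dev_name(dev), drv->name);
> 	}
>
> 	static struct bus_type virtual_bus = {
> 		.name	= "virtual",
> 		.match	= virtual_bus_match,
> 	};
>
> 	/* bus_register(&virtual_bus) at module init; a device then
> 	 * sets dev->bus = &virtual_bus before device_register(dev). */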
>
> thanks,
If I'm not mistaken, the virtual devices today have no parent and may
not have a bus, so there is no enumeration and hence no binding to a
driver.  That is not the case here: the parent is the PCI device, and
we need to bind to a driver.
Code-wise, the platform bus already contains all the functionality such
a virtual bus would need, for example helpers for adding the resources
inherited from the parent PCI device (MMIO and IRQ).  Is the issue just
the name of the bus and the associated sysfs?  In that case, wouldn't
the platform bus be a special case of the virtual bus?
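To illustrate, a minimal sketch using the existing platform API (the
"intel_rdma" name and the resource values are placeholders, not actual
driver code):

	struct resource res[] = {
		DEFINE_RES_MEM(rdma_bar_start, rdma_bar_len),
		DEFINE_RES_IRQ(rdma_irq),
	};
	struct platform_device_info pinfo = {
		.parent	 = &pci_dev->dev, /* inherit from the PCI device */
		.name	 = "intel_rdma",
		.id	 = PLATFORM_DEVID_AUTO,
		.res	 = res,
		.num_res = ARRAY_SIZE(res),
	};
	struct platform_device *rdma_pdev =
		platform_device_register_full(&pinfo);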
Thanks
Tomas


Thread overview: 72+ messages
2019-09-26 16:44 [RFC 00/20] Intel RDMA/IDC Driver series Jeff Kirsher
2019-09-26 16:45 ` [RFC 01/20] ice: Initialize and register multi-function device to provide RDMA Jeff Kirsher
2019-09-26 18:05   ` Greg KH
2019-09-26 23:39     ` Nguyen, Anthony L
2019-09-27  5:13       ` gregkh
2019-09-27 18:03         ` Ertman, David M
2019-10-23 17:44           ` Jason Gunthorpe
2019-10-23 17:55             ` Ertman, David M
2019-10-23 18:01               ` Jason Gunthorpe
2019-10-24 18:56                 ` gregkh
2019-10-24 19:10                   ` Jason Gunthorpe
2019-10-24 22:25                     ` Ertman, David M
2019-10-25  1:30                       ` gregkh
2019-10-25 22:27                         ` Ertman, David M
2019-10-26 18:53                           ` gregkh
2019-10-31  7:42                             ` Tomas Winkler [this message]
2019-09-26 16:45 ` [RFC 02/20] ice: Implement peer communications Jeff Kirsher
2019-09-26 16:45 ` [RFC 03/20] i40e: Register multi-function device to provide RDMA Jeff Kirsher
2019-09-26 16:45 ` [RFC 04/20] RDMA/irdma: Add driver framework definitions Jeff Kirsher
2019-09-26 16:55   ` Jason Gunthorpe
2019-09-26 18:02     ` gregkh
2019-09-26 18:04       ` Jason Gunthorpe
2019-09-26 18:10         ` Saleem, Shiraz
2019-09-26 17:30   ` Leon Romanovsky
2019-09-26 19:51     ` Saleem, Shiraz
2019-10-04 20:12     ` Jeff Kirsher
2019-10-04 23:45       ` Jason Gunthorpe
2019-10-05  0:46         ` Jeff Kirsher
2019-10-05  6:28           ` Leon Romanovsky
2019-10-05  7:08             ` gregkh
2019-10-05 22:01           ` Jason Gunthorpe
2019-09-26 16:45 ` [RFC 05/20] RDMA/irdma: Implement device initialization definitions Jeff Kirsher
2019-09-26 16:45 ` [RFC 06/20] RDMA/irdma: Implement HW Admin Queue OPs Jeff Kirsher
2019-09-26 16:45 ` [RFC 07/20] RDMA/irdma: Add HMC backing store setup functions Jeff Kirsher
2019-09-26 16:45 ` [RFC 08/20] RDMA/irdma: Add privileged UDA queue implementation Jeff Kirsher
2019-09-26 16:45 ` [RFC 09/20] RDMA/irdma: Add QoS definitions Jeff Kirsher
2019-09-26 16:45 ` [RFC 10/20] RDMA/irdma: Add connection manager Jeff Kirsher
2019-09-26 16:45 ` [RFC 11/20] RDMA/irdma: Add PBLE resource manager Jeff Kirsher
2019-09-26 16:45 ` [RFC 12/20] RDMA/irdma: Implement device supported verb APIs Jeff Kirsher
2019-09-26 17:37   ` Leon Romanovsky
2019-09-26 17:40     ` Jason Gunthorpe
2019-09-26 19:50       ` Saleem, Shiraz
2019-09-26 19:49     ` Saleem, Shiraz
2019-09-27  4:50       ` Leon Romanovsky
2019-09-27 14:28         ` Saleem, Shiraz
2019-09-28  6:00           ` Leon Romanovsky
2019-09-30 14:14             ` Saleem, Shiraz
2019-09-26 16:45 ` [RFC 13/20] RDMA/irdma: Add RoCEv2 UD OP support Jeff Kirsher
2019-09-26 16:45 ` [RFC 14/20] RDMA/irdma: Add user/kernel shared libraries Jeff Kirsher
2019-09-26 16:45 ` [RFC 15/20] RDMA/irdma: Add miscellaneous utility definitions Jeff Kirsher
2019-09-26 17:49   ` Leon Romanovsky
2019-09-26 19:49     ` Saleem, Shiraz
2019-09-27  4:46       ` Leon Romanovsky
2019-09-27 14:28         ` Saleem, Shiraz
2019-09-27 18:23           ` gregkh
2019-09-28  5:53             ` Leon Romanovsky
2019-09-26 16:45 ` [RFC 16/20] RDMA/irdma: Add dynamic tracing for CM Jeff Kirsher
2019-09-26 16:45 ` [RFC 17/20] RDMA/irdma: Add ABI definitions Jeff Kirsher
2019-09-26 16:45 ` [RFC 18/20] RDMA/irdma: Update MAINTAINERS file Jeff Kirsher
2019-09-26 16:45 ` [RFC 19/20] RDMA/irdma: Add Kconfig and Makefile Jeff Kirsher
2019-09-26 16:45 ` [RFC 20/20] RDMA/i40iw: Mark i40iw as deprecated Jeff Kirsher
2019-09-26 17:40   ` Leon Romanovsky
2019-09-26 19:49     ` Saleem, Shiraz
2019-09-26 19:55       ` gregkh
2019-09-27 14:28         ` Saleem, Shiraz
2019-09-27 20:18           ` Doug Ledford
2019-09-27 20:17         ` Doug Ledford
2019-09-28  5:55           ` Leon Romanovsky
2019-10-02 21:15             ` Dennis Dalessandro
2019-10-03  8:23               ` Leon Romanovsky
2019-09-29  9:28 ` [RFC 00/20] Intel RDMA/IDC Driver series Or Gerlitz
2019-09-30 15:46   ` Jeff Kirsher
