linux-rdma.vger.kernel.org archive mirror
* Re: [net-next 0/3][pull request] Intel Wired LAN Driver Updates 2019-07-03
       [not found] <20190704021252.15534-1-jeffrey.t.kirsher@intel.com>
@ 2019-07-04 12:15 ` Jason Gunthorpe
       [not found] ` <20190704021252.15534-2-jeffrey.t.kirsher@intel.com>
  1 sibling, 0 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2019-07-04 12:15 UTC (permalink / raw)
  To: Jeff Kirsher
  Cc: davem, dledford, netdev, linux-rdma, nhorman, sassmann,
	mustafa.ismail, shiraz.saleem, david.m.ertman

On Wed, Jul 03, 2019 at 07:12:49PM -0700, Jeff Kirsher wrote:
> This series contains updates to the i40e and ice drivers only and is required
> for a series of changes being submitted to the RDMA maintainer/tree.
> Vice versa, the Intel RDMA driver patches could not be applied to
> net-next due to dependencies on the changes currently in the for-next
> branch of the rdma git tree.

RDMA driver code is not to be applied to the net-next tree. We've
learned this causes too many workflow problems.

You must co-ordinate your driver with a shared git tree as Mellanox is
doing, or wait for alternating kernel releases.

I'm not sure what to do with this PR; it is far too late in the cycle
to submit a new driver, so most likely net should not go ahead with
this prep work until this new driver model scheme is properly
reviewed.

> The following are changes since commit a51df9f8da43e8bf9e508143630849b7d696e053:
>   gve: fix -ENOMEM null check on a page allocation
> and are available in the git repository at:
>   git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue 100GbE

And if you expect anything to be shared with rdma you need to produce
pull requests that are based on some sensible -rc tag.

Jason

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
       [not found] ` <20190704021252.15534-2-jeffrey.t.kirsher@intel.com>
@ 2019-07-04 12:16   ` Jason Gunthorpe
  2019-07-04 12:29     ` Greg KH
  0 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2019-07-04 12:16 UTC (permalink / raw)
  To: Jeff Kirsher, Greg KH
  Cc: davem, dledford, Tony Nguyen, netdev, linux-rdma, nhorman,
	sassmann, poswald, mustafa.ismail, shiraz.saleem, Dave Ertman,
	Andrew Bowers

On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> From: Tony Nguyen <anthony.l.nguyen@intel.com>
> 
> The RDMA block does not advertise on the PCI bus or any other bus.
> Thus the ice driver needs to provide access to the RDMA hardware block
> via a virtual bus; utilize the platform bus to provide this access.
> 
> This patch initializes the driver to support RDMA as well as creates
> and registers a platform device for the RDMA driver to register to. At
> this point the driver is fully initialized to register a platform
> driver, however, can not yet register as the ops have not been
> implemented.

I think you need Greg's ack on all this driver stuff - particularly
that a platform_device is OK.

Jason
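
(For reference, the pattern the patch description sketches, a netdev PF driver
registering a child platform device for an RDMA driver to bind to, would look
roughly like the following. This is a hedged illustration: the "ice-rdma" name,
the peer-info struct, and the helper are hypothetical, not the actual ice code.)

```c
#include <linux/platform_device.h>

/* Hypothetical per-function data handed to the RDMA sub-driver. */
struct demo_peer_info {
	void __iomem *hw_addr;
	int irq;
};

/* Register a child platform device from the netdev PF driver; an RDMA
 * platform_driver would then bind by matching on the "ice-rdma" name. */
static struct platform_device *
demo_register_rdma_pdev(struct device *parent, struct demo_peer_info *info)
{
	struct platform_device_info pinfo = {
		.parent    = parent,		/* the PCI device */
		.name      = "ice-rdma",	/* illustrative name */
		.id        = PLATFORM_DEVID_AUTO,
		.data      = info,
		.size_data = sizeof(*info),
	};

	return platform_device_register_full(&pinfo);
}
```

This is precisely the usage the rest of the thread pushes back on.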


* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-04 12:16   ` [net-next 1/3] ice: Initialize and register platform device to provide RDMA Jason Gunthorpe
@ 2019-07-04 12:29     ` Greg KH
  2019-07-04 12:37       ` Jason Gunthorpe
  0 siblings, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-07-04 12:29 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Jeff Kirsher, davem, dledford, Tony Nguyen, netdev, linux-rdma,
	nhorman, sassmann, poswald, mustafa.ismail, shiraz.saleem,
	Dave Ertman, Andrew Bowers

On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > 
> > The RDMA block does not advertise on the PCI bus or any other bus.
> > Thus the ice driver needs to provide access to the RDMA hardware block
> > via a virtual bus; utilize the platform bus to provide this access.
> > 
> > This patch initializes the driver to support RDMA as well as creates
> > and registers a platform device for the RDMA driver to register to. At
> > this point the driver is fully initialized to register a platform
> > driver, however, can not yet register as the ops have not been
> > implemented.
> 
> I think you need Greg's ack on all this driver stuff - particularly
> that a platform_device is OK.

A platform_device is almost NEVER ok.

Don't abuse it, make a real device on a real bus.  If you don't have a
real bus and just need to create a device to hang other things off of,
then use the virtual one, that's what it is there for.

thanks,

greg k-h
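
(The "virtual one" Greg refers to is the /sys/devices/virtual hierarchy: a
class device created with a NULL parent lands there instead of masquerading as
platform hardware. A minimal sketch under that reading, with made-up names;
note class_create() takes a module owner argument in kernels of this vintage.)

```c
#include <linux/device.h>
#include <linux/kdev_t.h>
#include <linux/module.h>

static struct class *demo_class;
static struct device *demo_dev;

static int __init demo_init(void)
{
	demo_class = class_create(THIS_MODULE, "demo");
	if (IS_ERR(demo_class))
		return PTR_ERR(demo_class);

	/* NULL parent + a class: appears under /sys/devices/virtual/demo/ */
	demo_dev = device_create(demo_class, NULL, MKDEV(0, 0), NULL, "demo0");
	if (IS_ERR(demo_dev)) {
		class_destroy(demo_class);
		return PTR_ERR(demo_dev);
	}
	return 0;
}

static void __exit demo_exit(void)
{
	device_destroy(demo_class, MKDEV(0, 0));
	class_destroy(demo_class);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```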


* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-04 12:29     ` Greg KH
@ 2019-07-04 12:37       ` Jason Gunthorpe
  2019-07-04 12:42         ` Greg KH
  0 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2019-07-04 12:37 UTC (permalink / raw)
  To: Greg KH
  Cc: Jeff Kirsher, davem, dledford, Tony Nguyen, netdev, linux-rdma,
	nhorman, sassmann, poswald, mustafa.ismail, shiraz.saleem,
	Dave Ertman, Andrew Bowers

On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > 
> > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > Thus the ice driver needs to provide access to the RDMA hardware block
> > > via a virtual bus; utilize the platform bus to provide this access.
> > > 
> > > This patch initializes the driver to support RDMA as well as creates
> > > and registers a platform device for the RDMA driver to register to. At
> > > this point the driver is fully initialized to register a platform
> > > driver, however, can not yet register as the ops have not been
> > > implemented.
> > 
> > I think you need Greg's ack on all this driver stuff - particularly
> > that a platform_device is OK.
> 
> A platform_device is almost NEVER ok.
> 
> Don't abuse it, make a real device on a real bus.  If you don't have a
> real bus and just need to create a device to hang other things off of,
> then use the virtual one, that's what it is there for.

Ideally I'd like to see all the RDMA drivers that connect to ethernet
drivers use some similar scheme.

Should it be some generic virtual bus?

This is for a PCI device that plugs into multiple subsystems in the
kernel, ie it has net driver functionality, rdma functionality, some
even have SCSI functionality

Jason


* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-04 12:37       ` Jason Gunthorpe
@ 2019-07-04 12:42         ` Greg KH
  2019-07-04 12:48           ` Jason Gunthorpe
  0 siblings, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-07-04 12:42 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Jeff Kirsher, davem, dledford, Tony Nguyen, netdev, linux-rdma,
	nhorman, sassmann, poswald, mustafa.ismail, shiraz.saleem,
	Dave Ertman, Andrew Bowers

On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > 
> > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > Thus the ice driver needs to provide access to the RDMA hardware block
> > > > via a virtual bus; utilize the platform bus to provide this access.
> > > > 
> > > > This patch initializes the driver to support RDMA as well as creates
> > > > and registers a platform device for the RDMA driver to register to. At
> > > > this point the driver is fully initialized to register a platform
> > > > driver, however, can not yet register as the ops have not been
> > > > implemented.
> > > 
> > > I think you need Greg's ack on all this driver stuff - particularly
> > > that a platform_device is OK.
> > 
> > A platform_device is almost NEVER ok.
> > 
> > Don't abuse it, make a real device on a real bus.  If you don't have a
> > real bus and just need to create a device to hang other things off of,
> > then use the virtual one, that's what it is there for.
> 
> Ideally I'd like to see all the RDMA drivers that connect to ethernet
> drivers use some similar scheme.

Why?  They should be attached to a "real" device, why make any up?

> Should it be some generic virtual bus?

There is a generic virtual bus today.

> This is for a PCI device that plugs into multiple subsystems in the
> kernel, ie it has net driver functionality, rdma functionality, some
> even have SCSI functionality

Sounds like a MFD device, why aren't you using that functionality
instead?

thanks,

greg k-h
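
(A sketch of what Greg is suggesting: the MFD core splits one parent device
into named "cells", each of which becomes a child device that a separate
sub-driver binds to. The cell names and probe body here are illustrative, not
a real driver.)

```c
#include <linux/mfd/core.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

/* One cell per function of the device; names are made up. */
static const struct mfd_cell demo_cells[] = {
	{ .name = "demo-eth"  },	/* bound by a netdev sub-driver */
	{ .name = "demo-rdma" },	/* bound by an RDMA sub-driver  */
};

static int demo_pci_probe(struct pci_dev *pdev,
			  const struct pci_device_id *id)
{
	int ret = pcim_enable_device(pdev);

	if (ret)
		return ret;

	/* mfd_add_devices() registers one child device per cell under
	 * &pdev->dev; each sub-driver matches on its cell name. */
	return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO, demo_cells,
			       ARRAY_SIZE(demo_cells), NULL, 0, NULL);
}
```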


* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-04 12:42         ` Greg KH
@ 2019-07-04 12:48           ` Jason Gunthorpe
  2019-07-04 13:46             ` Greg KH
  0 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2019-07-04 12:48 UTC (permalink / raw)
  To: Greg KH
  Cc: Jeff Kirsher, davem, dledford, Tony Nguyen, netdev, linux-rdma,
	nhorman, sassmann, poswald, mustafa.ismail, shiraz.saleem,
	Dave Ertman, Andrew Bowers

On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > > 
> > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > Thus the ice driver needs to provide access to the RDMA hardware block
> > > > > via a virtual bus; utilize the platform bus to provide this access.
> > > > > 
> > > > > This patch initializes the driver to support RDMA as well as creates
> > > > > and registers a platform device for the RDMA driver to register to. At
> > > > > this point the driver is fully initialized to register a platform
> > > > > driver, however, can not yet register as the ops have not been
> > > > > implemented.
> > > > 
> > > > I think you need Greg's ack on all this driver stuff - particularly
> > > > that a platform_device is OK.
> > > 
> > > A platform_device is almost NEVER ok.
> > > 
> > > Don't abuse it, make a real device on a real bus.  If you don't have a
> > > real bus and just need to create a device to hang other things off of,
> > > then use the virtual one, that's what it is there for.
> > 
> > Ideally I'd like to see all the RDMA drivers that connect to ethernet
> > drivers use some similar scheme.
> 
> Why?  They should be attached to a "real" device, why make any up?

? A "real" device, like struct pci_device, can only bind to one
driver. How can we bind it concurrently to net, rdma, scsi, etc?

> > This is for a PCI device that plugs into multiple subsystems in the
> > kernel, ie it has net driver functionality, rdma functionality, some
> > even have SCSI functionality
> 
> Sounds like a MFD device, why aren't you using that functionality
> instead?

This was also my advice, but in another email Jeff says:

  MFD architecture was also considered, and we selected the simpler
  platform model. Supporting a MFD architecture would require an
  additional MFD core driver, individual platform netdev, RDMA function
  drivers, and stripping a large portion of the netdev drivers into
  MFD core. The sub-devices registered by MFD core for function
  drivers are indeed platform devices.  

Thanks,
Jason


* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-04 12:48           ` Jason Gunthorpe
@ 2019-07-04 13:46             ` Greg KH
  2019-07-04 13:53               ` Jason Gunthorpe
  2019-07-05 16:33               ` Saleem, Shiraz
  0 siblings, 2 replies; 11+ messages in thread
From: Greg KH @ 2019-07-04 13:46 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Jeff Kirsher, davem, dledford, Tony Nguyen, netdev, linux-rdma,
	nhorman, sassmann, poswald, mustafa.ismail, shiraz.saleem,
	Dave Ertman, Andrew Bowers

On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > > > 
> > > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > > Thus the ice driver needs to provide access to the RDMA hardware block
> > > > > > via a virtual bus; utilize the platform bus to provide this access.
> > > > > > 
> > > > > > This patch initializes the driver to support RDMA as well as creates
> > > > > > and registers a platform device for the RDMA driver to register to. At
> > > > > > this point the driver is fully initialized to register a platform
> > > > > > driver, however, can not yet register as the ops have not been
> > > > > > implemented.
> > > > > 
> > > > > I think you need Greg's ack on all this driver stuff - particularly
> > > > > that a platform_device is OK.
> > > > 
> > > > A platform_device is almost NEVER ok.
> > > > 
> > > > Don't abuse it, make a real device on a real bus.  If you don't have a
> > > > real bus and just need to create a device to hang other things off of,
> > > > then use the virtual one, that's what it is there for.
> > > 
> > > Ideally I'd like to see all the RDMA drivers that connect to ethernet
> > > drivers use some similar scheme.
> > 
> > Why?  They should be attached to a "real" device, why make any up?
> 
> ? A "real" device, like struct pci_device, can only bind to one
> driver. How can we bind it concurrently to net, rdma, scsi, etc?

MFD was designed for this very problem.

> > > This is for a PCI device that plugs into multiple subsystems in the
> > > kernel, ie it has net driver functionality, rdma functionality, some
> > > even have SCSI functionality
> > 
> > Sounds like a MFD device, why aren't you using that functionality
> > instead?
> 
> This was also my advice, but in another email Jeff says:
> 
>   MFD architecture was also considered, and we selected the simpler
>   platform model. Supporting a MFD architecture would require an
>   additional MFD core driver, individual platform netdev, RDMA function
>   drivers, and stripping a large portion of the netdev drivers into
>   MFD core. The sub-devices registered by MFD core for function
>   drivers are indeed platform devices.  

So, "mfd is too hard, let's abuse a platform device" is ok?

People have been wanting to do MFD drivers for PCI devices for a long
time, it's about time someone actually did the work for it, I bet it
will not be all that complex if tiny embedded drivers can do it :)

thanks,

greg k-h


* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-04 13:46             ` Greg KH
@ 2019-07-04 13:53               ` Jason Gunthorpe
  2019-07-05 16:33               ` Saleem, Shiraz
  1 sibling, 0 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2019-07-04 13:53 UTC (permalink / raw)
  To: Greg KH
  Cc: Jeff Kirsher, davem, dledford, Tony Nguyen, netdev, linux-rdma,
	nhorman, sassmann, poswald, mustafa.ismail, shiraz.saleem,
	Dave Ertman, Andrew Bowers

On Thu, Jul 04, 2019 at 03:46:12PM +0200, Greg KH wrote:
> On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> > On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > > > > 
> > > > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > > > Thus the ice driver needs to provide access to the RDMA hardware block
> > > > > > > via a virtual bus; utilize the platform bus to provide this access.
> > > > > > > 
> > > > > > > This patch initializes the driver to support RDMA as well as creates
> > > > > > > and registers a platform device for the RDMA driver to register to. At
> > > > > > > this point the driver is fully initialized to register a platform
> > > > > > > driver, however, can not yet register as the ops have not been
> > > > > > > implemented.
> > > > > > 
> > > > > > I think you need Greg's ack on all this driver stuff - particularly
> > > > > > that a platform_device is OK.
> > > > > 
> > > > > A platform_device is almost NEVER ok.
> > > > > 
> > > > > Don't abuse it, make a real device on a real bus.  If you don't have a
> > > > > real bus and just need to create a device to hang other things off of,
> > > > > then use the virtual one, that's what it is there for.
> > > > 
> > > > Ideally I'd like to see all the RDMA drivers that connect to ethernet
> > > > drivers use some similar scheme.
> > > 
> > > Why?  They should be attached to a "real" device, why make any up?
> > 
> > ? A "real" device, like struct pci_device, can only bind to one
> > driver. How can we bind it concurrently to net, rdma, scsi, etc?
> 
> MFD was designed for this very problem.
> 
> > > > This is for a PCI device that plugs into multiple subsystems in the
> > > > kernel, ie it has net driver functionality, rdma functionality, some
> > > > even have SCSI functionality
> > > 
> > > Sounds like a MFD device, why aren't you using that functionality
> > > instead?
> > 
> > This was also my advice, but in another email Jeff says:
> > 
> >   MFD architecture was also considered, and we selected the simpler
> >   platform model. Supporting a MFD architecture would require an
> >   additional MFD core driver, individual platform netdev, RDMA function
> >   drivers, and stripping a large portion of the netdev drivers into
> >   MFD core. The sub-devices registered by MFD core for function
> >   drivers are indeed platform devices.  
> 
> So, "mfd is too hard, let's abuse a platform device" is ok?
> 
> People have been wanting to do MFD drivers for PCI devices for a long
> time, it's about time someone actually did the work for it, I bet it
> will not be all that complex if tiny embedded drivers can do it :)

Okay, sounds like a NAK to me. I'll drop these patches from the RDMA
patchworks and Jeff can work through the MFD stuff first.

Jason


* RE: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-04 13:46             ` Greg KH
  2019-07-04 13:53               ` Jason Gunthorpe
@ 2019-07-05 16:33               ` Saleem, Shiraz
  2019-07-06  8:25                 ` Greg KH
  1 sibling, 1 reply; 11+ messages in thread
From: Saleem, Shiraz @ 2019-07-05 16:33 UTC (permalink / raw)
  To: Greg KH, Jason Gunthorpe
  Cc: Kirsher, Jeffrey T, davem, dledford, Nguyen, Anthony L, netdev,
	linux-rdma, nhorman, sassmann, poswald, Ismail, Mustafa, Ertman,
	David M, Bowers, AndrewX

> Subject: Re: [net-next 1/3] ice: Initialize and register platform device to provide
> RDMA
> 
> On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> > On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > > > >
> > > > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > > > Thus the ice driver needs to provide access to the RDMA
> > > > > > > hardware block via a virtual bus; utilize the platform bus to provide this
> access.
> > > > > > >
> > > > > > > This patch initializes the driver to support RDMA as well as
> > > > > > > creates and registers a platform device for the RDMA driver
> > > > > > > to register to. At this point the driver is fully
> > > > > > > initialized to register a platform driver, however, can not
> > > > > > > yet register as the ops have not been implemented.
> > > > > >
> > > > > > I think you need Greg's ack on all this driver stuff -
> > > > > > particularly that a platform_device is OK.
> > > > >
> > > > > A platform_device is almost NEVER ok.
> > > > >
> > > > > Don't abuse it, make a real device on a real bus.  If you don't
> > > > > have a real bus and just need to create a device to hang other
> > > > > things off of, then use the virtual one, that's what it is there for.
> > > >
> > > > Ideally I'd like to see all the RDMA drivers that connect to
> > > > ethernet drivers use some similar scheme.
> > >
> > > Why?  They should be attached to a "real" device, why make any up?
> >
> > ? A "real" device, like struct pci_device, can only bind to one
> > driver. How can we bind it concurrently to net, rdma, scsi, etc?
> 
> MFD was designed for this very problem.
> 
> > > > This is for a PCI device that plugs into multiple subsystems in
> > > > the kernel, ie it has net driver functionality, rdma
> > > > functionality, some even have SCSI functionality
> > >
> > > Sounds like a MFD device, why aren't you using that functionality
> > > instead?
> >
> > This was also my advice, but in another email Jeff says:
> >
> >   MFD architecture was also considered, and we selected the simpler
> >   platform model. Supporting a MFD architecture would require an
> >   additional MFD core driver, individual platform netdev, RDMA function
> >   drivers, and stripping a large portion of the netdev drivers into
> >   MFD core. The sub-devices registered by MFD core for function
> >   drivers are indeed platform devices.
> 
> So, "mfd is too hard, let's abuse a platform device" is ok?
> 
> People have been wanting to do MFD drivers for PCI devices for a long time, it's
> about time someone actually did the work for it, I bet it will not be all that complex
> if tiny embedded drivers can do it :)
> 
Hi Greg - Thanks for your feedback!

We currently have two PCI-function netdev drivers in the kernel (i40e & ice) that support
RDMA-capable devices (x722 & e810). Our objective is to add a single unified RDMA driver
(as this is a subsystem-specific requirement) which needs to access HW resources from the
netdev PF drivers. Attaching platform devices from the netdev drivers to the platform bus,
and having a single RDMA platform driver bind to them and access these resources, seemed
like a simple approach to realize our objective. But it seems attaching platform devices is
wrong. I would like to understand why.

Are platform sub-devices only to be added from an MFD core driver? I am also wondering
whether the MFD architecture would allow for realizing a single RDMA driver, and whether
we need an MFD core driver for each device (x722 & e810) or whether it can be a single driver.

Shiraz


* Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-05 16:33               ` Saleem, Shiraz
@ 2019-07-06  8:25                 ` Greg KH
  2019-07-06 16:03                   ` Saleem, Shiraz
  0 siblings, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-07-06  8:25 UTC (permalink / raw)
  To: Saleem, Shiraz
  Cc: Jason Gunthorpe, Kirsher, Jeffrey T, davem, dledford, Nguyen,
	Anthony L, netdev, linux-rdma, nhorman, sassmann, poswald,
	Ismail, Mustafa, Ertman, David M, Bowers, AndrewX

On Fri, Jul 05, 2019 at 04:33:07PM +0000, Saleem, Shiraz wrote:
> > Subject: Re: [net-next 1/3] ice: Initialize and register platform device to provide
> > RDMA
> > 
> > On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> > > On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > > > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > > > > >
> > > > > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > > > > Thus the ice driver needs to provide access to the RDMA
> > > > > > > > hardware block via a virtual bus; utilize the platform bus to provide this
> > access.
> > > > > > > >
> > > > > > > > This patch initializes the driver to support RDMA as well as
> > > > > > > > creates and registers a platform device for the RDMA driver
> > > > > > > > to register to. At this point the driver is fully
> > > > > > > > initialized to register a platform driver, however, can not
> > > > > > > > yet register as the ops have not been implemented.
> > > > > > >
> > > > > > > I think you need Greg's ack on all this driver stuff -
> > > > > > > particularly that a platform_device is OK.
> > > > > >
> > > > > > A platform_device is almost NEVER ok.
> > > > > >
> > > > > > Don't abuse it, make a real device on a real bus.  If you don't
> > > > > > have a real bus and just need to create a device to hang other
> > > > > > things off of, then use the virtual one, that's what it is there for.
> > > > >
> > > > > Ideally I'd like to see all the RDMA drivers that connect to
> > > > > ethernet drivers use some similar scheme.
> > > >
> > > > Why?  They should be attached to a "real" device, why make any up?
> > >
> > > ? A "real" device, like struct pci_device, can only bind to one
> > > driver. How can we bind it concurrently to net, rdma, scsi, etc?
> > 
> > MFD was designed for this very problem.
> > 
> > > > > This is for a PCI device that plugs into multiple subsystems in
> > > > > the kernel, ie it has net driver functionality, rdma
> > > > > functionality, some even have SCSI functionality
> > > >
> > > > Sounds like a MFD device, why aren't you using that functionality
> > > > instead?
> > >
> > > This was also my advice, but in another email Jeff says:
> > >
> > >   MFD architecture was also considered, and we selected the simpler
> > >   platform model. Supporting a MFD architecture would require an
> > >   additional MFD core driver, individual platform netdev, RDMA function
> > >   drivers, and stripping a large portion of the netdev drivers into
> > >   MFD core. The sub-devices registered by MFD core for function
> > >   drivers are indeed platform devices.
> > 
> > So, "mfd is too hard, let's abuse a platform device" is ok?
> > 
> > People have been wanting to do MFD drivers for PCI devices for a long time, it's
> > about time someone actually did the work for it, I bet it will not be all that complex
> > if tiny embedded drivers can do it :)
> > 
> Hi Greg - Thanks for your feedback!
> 
> We currently have 2 PCI function netdev drivers in the kernel (i40e & ice) that support devices (x722 & e810)
> which are RDMA capable. Our objective is to add a single unified RDMA driver
> (as this a subsystem specific requirement) which needs to access HW resources from the
> netdev PF drivers. Attaching platform devices from the netdev drivers to the platform bus
> and having a single RDMA platform driver bind to them and access these resources seemed
> like a simple approach to realize our objective. But seems like attaching platform devices is
> wrong. I would like to understand why. 

Because that is NOT what a platform device is for.

It was originally created for those types of devices that live on the
"platform" bus, i.e. things that are hard-wired and you just "know" are
in your system because they are located at specific locations.  We used
to generate them from board files, and then when we got DT, we started
creating them from the resources DT provides describing where the
devices are located.

They are NOT to be abused and used whenever someone wants to put them
somewhere in the "middle" of the device tree because they feel like they
are easier to use instead of creating a real bus and drivers.

Yes, they do get abused, and I need to sweep the tree again and fix up
all of the places where this has crept back in.  But now that I know you
are thinking of doing this, I'll keep saying to NOT do it for your use
case either :)

> Are platform sub devices only to be added from an MFD core driver? I
> am also wondering if MFD arch.  would allow for realizing a single
> RDMA driver and whether we need an MFD core driver for each device,
> x722 & e810 or whether it can be a single driver.

I do not know the details of how MFD works; please ask those developers
for specifics.  If MFD doesn't work, then create a tiny virtual bus and
make sub-devices out of that.  If you need a "generic" way to do this
for PCI devices, then please create that, as you are not the only one
that keeps wanting this; for some reason PCI hardware vendors don't
like dividing up their devices in ways that would have made it much
simpler to create individual devices (probably saves some gates and
firmware complexity on the device).

thanks,

greg k-h
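
(The "tiny virtual bus" idea amounts to registering a bus_type of your own and
pairing sub-devices with sub-drivers on it. A minimal, illustrative sketch; the
bus name and the match-by-name rule are made up. A generic form of this idea is
essentially what later landed in the kernel as the auxiliary bus.)

```c
#include <linux/device.h>
#include <linux/module.h>
#include <linux/string.h>

/* A minimal bus: sub-devices and sub-drivers pair up by name. */
static int demo_bus_match(struct device *dev, struct device_driver *drv)
{
	/* Match purely on name; a real bus might compare IDs instead. */
	return !strcmp(dev_name(dev), drv->name);
}

static struct bus_type demo_bus = {
	.name  = "demo",
	.match = demo_bus_match,
};

static int __init demo_bus_init(void)
{
	/* Creates /sys/bus/demo/; devices register with dev->bus = &demo_bus. */
	return bus_register(&demo_bus);
}

static void __exit demo_bus_exit(void)
{
	bus_unregister(&demo_bus);
}

module_init(demo_bus_init);
module_exit(demo_bus_exit);
MODULE_LICENSE("GPL");
```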


* RE: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
  2019-07-06  8:25                 ` Greg KH
@ 2019-07-06 16:03                   ` Saleem, Shiraz
  0 siblings, 0 replies; 11+ messages in thread
From: Saleem, Shiraz @ 2019-07-06 16:03 UTC (permalink / raw)
  To: Greg KH
  Cc: Jason Gunthorpe, Kirsher, Jeffrey T, davem, dledford, Nguyen,
	Anthony L, netdev, linux-rdma, nhorman, sassmann, poswald,
	Ismail, Mustafa, Ertman, David M, Bowers, AndrewX

> Subject: Re: [net-next 1/3] ice: Initialize and register platform device to provide
> RDMA
> 
> On Fri, Jul 05, 2019 at 04:33:07PM +0000, Saleem, Shiraz wrote:
> > > Subject: Re: [net-next 1/3] ice: Initialize and register platform
> > > device to provide RDMA
> > >
> > > On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> > > > On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > > > > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > > > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > > > > > >
> > > > > > > > > The RDMA block does not advertise on the PCI bus or any other
> bus.
> > > > > > > > > Thus the ice driver needs to provide access to the RDMA
> > > > > > > > > hardware block via a virtual bus; utilize the platform
> > > > > > > > > bus to provide this
> > > access.
> > > > > > > > >
> > > > > > > > > This patch initializes the driver to support RDMA as
> > > > > > > > > well as creates and registers a platform device for the
> > > > > > > > > RDMA driver to register to. At this point the driver is
> > > > > > > > > fully initialized to register a platform driver,
> > > > > > > > > however, can not yet register as the ops have not been implemented.
> > > > > > > >
> > > > > > > > I think you need Greg's ack on all this driver stuff -
> > > > > > > > particularly that a platform_device is OK.
> > > > > > >
> > > > > > > A platform_device is almost NEVER ok.
> > > > > > >
> > > > > > > Don't abuse it, make a real device on a real bus.  If you
> > > > > > > don't have a real bus and just need to create a device to
> > > > > > > hang other things off of, then use the virtual one, that's what it is there for.
> > > > > >
> > > > > > Ideally I'd like to see all the RDMA drivers that connect to
> > > > > > ethernet drivers use some similar scheme.
> > > > >
> > > > > Why?  They should be attached to a "real" device, why make any up?
> > > >
> > > > ? A "real" device, like struct pci_device, can only bind to one
> > > > driver. How can we bind it concurrently to net, rdma, scsi, etc?
> > >
> > > MFD was designed for this very problem.
> > >
> > > > > > This is for a PCI device that plugs into multiple subsystems
> > > > > > in the kernel, ie it has net driver functionality, rdma
> > > > > > functionality, some even have SCSI functionality
> > > > >
> > > > > Sounds like a MFD device, why aren't you using that
> > > > > functionality instead?
> > > >
> > > > This was also my advice, but in another email Jeff says:
> > > >
> > > >   MFD architecture was also considered, and we selected the simpler
> > > >   platform model. Supporting a MFD architecture would require an
> > > >   additional MFD core driver, individual platform netdev, RDMA function
> > > >   drivers, and stripping a large portion of the netdev drivers into
> > > >   MFD core. The sub-devices registered by MFD core for function
> > > >   drivers are indeed platform devices.
> > >
> > > So, "mfd is too hard, let's abuse a platform device" is ok?
> > >
> > > People have been wanting to do MFD drivers for PCI devices for a
> > > long time, it's about time someone actually did the work for it, I
> > > bet it will not be all that complex if tiny embedded drivers can do
> > > it :)
> > >
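[For context, the MFD approach referenced above would look very roughly like the sketch below: a single PCI driver probes the physical function and spawns per-subsystem sub-devices as MFD cells. The cell names, driver name, and overall layout here are hypothetical illustrations, not the actual i40e/ice implementation; real code would also pass BAR resources and IRQs to the cells.]

```c
/*
 * Hypothetical sketch of an MFD-style split for an RDMA-capable NIC:
 * one PCI probe registers "net" and "rdma" cells, and MFD core creates
 * a platform_device per cell, parented to the PCI device, for the
 * matching function drivers to bind to.
 */
#include <linux/mfd/core.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

static const struct mfd_cell example_nic_cells[] = {
	{ .name = "example-nic-net"  },	/* bound by a "example-nic-net" platform driver  */
	{ .name = "example-nic-rdma" },	/* bound by a "example-nic-rdma" platform driver */
};

static int example_nic_probe(struct pci_dev *pdev,
			     const struct pci_device_id *id)
{
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* Registers one platform_device per cell under &pdev->dev. */
	return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
			       example_nic_cells,
			       ARRAY_SIZE(example_nic_cells),
			       NULL, 0, NULL);
}
```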
> > Hi Greg - Thanks for your feedback!
> >
> > We currently have 2 PCI function netdev drivers in the kernel (i40e &
> > ice) that support devices (x722 & e810) which are RDMA capable. Our
> > objective is to add a single unified RDMA driver (as this a subsystem
> > specific requirement) which needs to access HW resources from the
> > netdev PF drivers. Attaching platform devices from the netdev drivers
> > to the platform bus and having a single RDMA platform driver bind to
> > them and access these resources seemed like a simple approach to realize our
> > objective. But it seems like attaching platform devices is wrong. I would like
> > to understand why.
> 
> Because that is NOT what a platform device is for.
> 
> It was originally created for those types of devices that live on the "platform" bus,
> i.e. things that are hard-wired and you just "know" are in your system because they
> are located at specific locations.  We used to generate them from board files, and
> then when we got DT, we started creating them from the resources that DT
> describes for those devices.
> 
> They are NOT to be abused and used whenever someone wants to put them
> somewhere in the "middle" of the device tree because they feel like they are easier
> to use instead of creating a real bus and drivers.
> 
> Yes, they do get abused, and I need to sweep the tree again and fix up all of the
> places where this has crept back in.  But now that I know you are thinking of doing
> this, I'll keep saying to NOT do it for your use case either :)

Thanks Greg for the explanation.
And yes, we went by some example usages in the tree.
> 
> > Are platform sub-devices only to be added from an MFD core driver? I
> > am also wondering if the MFD architecture would allow for realizing a
> > single RDMA driver, and whether we need an MFD core driver for each
> > device (x722 & e810) or whether it can be a single driver.
> 
> I do not know the details of how MFD works; please ask those developers for
> specifics.  If MFD doesn't work, then create a tiny virtual bus and make
> sub-devices out of that.  If you need a "generic" way to do this for PCI devices, then
> please create that as you are not the only one that keeps wanting this, as for some
> reason PCI hardware vendors don't like dividing up their devices in ways that
> would have made it much simpler to create individual devices (probably saves
> some gates and firmware complexity on the device).
> 
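[The "tiny virtual bus" option described above could be sketched roughly as below. All identifiers are invented for illustration; a real implementation would also define uevent/probe hooks and a device type for the sub-devices hung off this bus.]

```c
/*
 * Hypothetical minimal virtual bus: a custom bus_type that sub-devices
 * are registered on, with driver matching done by name, much as the
 * platform bus does.
 */
#include <linux/device.h>
#include <linux/module.h>
#include <linux/string.h>

static int example_vbus_match(struct device *dev, struct device_driver *drv)
{
	/* Match purely on name; a real bus might match on an ID table. */
	return !strcmp(dev_name(dev), drv->name);
}

static struct bus_type example_vbus = {
	.name  = "example_vbus",
	.match = example_vbus_match,
};

static int __init example_vbus_init(void)
{
	return bus_register(&example_vbus);
}
module_init(example_vbus_init);

static void __exit example_vbus_exit(void)
{
	bus_unregister(&example_vbus);
}
module_exit(example_vbus_exit);

MODULE_LICENSE("GPL");
```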

Thank you for laying out options. We will review internally and get back.

Shiraz

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2019-07-06 16:03 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20190704021252.15534-1-jeffrey.t.kirsher@intel.com>
2019-07-04 12:15 ` [net-next 0/3][pull request] Intel Wired LAN ver Updates 2019-07-03 Jason Gunthorpe
     [not found] ` <20190704021252.15534-2-jeffrey.t.kirsher@intel.com>
2019-07-04 12:16   ` [net-next 1/3] ice: Initialize and register platform device to provide RDMA Jason Gunthorpe
2019-07-04 12:29     ` Greg KH
2019-07-04 12:37       ` Jason Gunthorpe
2019-07-04 12:42         ` Greg KH
2019-07-04 12:48           ` Jason Gunthorpe
2019-07-04 13:46             ` Greg KH
2019-07-04 13:53               ` Jason Gunthorpe
2019-07-05 16:33               ` Saleem, Shiraz
2019-07-06  8:25                 ` Greg KH
2019-07-06 16:03                   ` Saleem, Shiraz
