kernelnewbies.kernelnewbies.org archive mirror
* Source code organization
@ 2022-10-17 18:00 Billie Alsup (balsup)
  2022-10-20  8:01 ` Greg KH
  0 siblings, 1 reply; 6+ messages in thread
From: Billie Alsup (balsup) @ 2022-10-17 18:00 UTC (permalink / raw)
  To: kernelnewbies

I have a set of drivers that I would like to upstream.  These are primarily
MFD-style drivers supporting Cisco-8000 FPGAs.  The devices can be
instantiated through multiple top level drivers, which provide the access
mechanism for the MFD driver.  For example, an FPGA can be accessed via
PCI, or via i2c, or via SLPC, or via P2PM (a point-to-point interface).  We
currently build these drivers out of tree, under a single directory drivers/cisco.
I note that existing drivers are spread out across the kernel tree, sometimes
by bus (pci, i2c), sometimes by function (gpio, net/mdio, spi), and sometimes
under the generic mfd.

I would like to start the upstream review process for our drivers, but first want
to get recommendations on the source code layout.  Is it permissible to keep
a top level directory such as drivers/cisco to organize our code?  It is not only
the source code that is affected, but also provides a central place for Kconfig
entries.  Our FPGAs have multiple logical blocks, each of which is handled by
a different MFD driver, e.g. i2c controllers, gpio controllers, spi controllers, mdio
controllers.  There can be multiple instances of each block as well (so multiple
MFD devices are instantiated for each driver).  And of course, there can be
multiple FPGAs in a system, each with different combinations of logical blocks.  

The drivers themselves are pretty specific to our FPGAs, so it makes sense
to have a Kconfig hardware-platform symbol automatically select the applicable
set of MFD drivers (and top-level bus drivers).
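For example (symbol and driver names here are hypothetical, just to illustrate the idea):

```kconfig
# Hypothetical sketch of a per-platform symbol pulling in its drivers
config CISCO_8000_PLATFORM
	tristate "Cisco-8000 FPGA platform support"
	select MFD_CORE
	select CISCO_FPGA_GPIO
	select CISCO_FPGA_I2C
	help
	  Enables the top-level bus drivers and the MFD cell drivers
	  used by the FPGAs on Cisco-8000 platforms.
```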

Would a source layout putting all the code under drivers/cisco be acceptable in
this case, or do I need to move things around and spread out the Kconfig entries
across directories? I note that a single drivers/cisco would simplify any related
modifications to MAINTAINERS as well.


_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


* Re: Source code organization
  2022-10-17 18:00 Source code organization Billie Alsup (balsup)
@ 2022-10-20  8:01 ` Greg KH
  2022-10-21  1:51   ` Billie Alsup (balsup)
  0 siblings, 1 reply; 6+ messages in thread
From: Greg KH @ 2022-10-20  8:01 UTC (permalink / raw)
  To: Billie Alsup (balsup); +Cc: kernelnewbies

On Mon, Oct 17, 2022 at 06:00:40PM +0000, Billie Alsup (balsup) wrote:
> I have a set of drivers that I would like to upstream.  These are primarily
> MFD style drivers supporting Cisco-8000 FPGAs.  The devices can be
> instantiated through multiple top level drivers, which provide the access
> mechanism for the MFD driver.  For example, an FPGA can be accessed via
> PCI, or via i2c, or via SLPC, or via P2PM (a point-to-point interface).  We
> currently build these drivers out of tree, under a single directory drivers/cisco.
> I note that existing drivers are spread out across the kernel tree, sometimes
> by bus (pci, i2c), sometimes by function (gpio, net/mdio, spi), and sometimes
> under the generic mfd.

Yes, it's not always easy to find the common pattern, but usually it is:
	- is this a bus driver that allows bus access to this hardware
	  (pci, gpio, i2c)?  If so, add to drivers/pci|gpio|i2c
	- is this a userspace-facing driver that implements a user
	  "class" of interactions (input, HID, block, etc.)?  If so, add
	  to drivers/input|hid|block
	- is this something else?  Then pick a place and submit a patch
	  and people will tell you if you got it wrong :)

> I would like to start the upstream review process for our drivers, but first want
> to get recommendations on the source code layout.  Is it permissible to keep
> a top level directory such as drivers/cisco to organize our code?  It is not only
> the source code that is affected, but also provides a central place for Kconfig
> entries.  Our FPGAs have multiple logical blocks, each of which is handled by
> a different MFD driver, e.g. i2c controllers, gpio controllers, spi controllers, mdio
> controllers.  There can be multiple instances of each block as well (so multiple
> MFD devices are instantiated for each driver).  And of course, there can be
> multiple FPGAs in a system, each with different combinations of logical blocks.  
> 
> The drivers themselves are pretty specific to our FPGAs, thus it makes sense to
> use Kconfig to select a hardware platform to automatically select the set of MFD
> drivers (and top level bus drivers) that would apply. 
> 
> Would a source layout putting all the code under drivers/cisco be acceptable in
> this case, or do I need to move things around and spread out the Kconfig entries
> across directories? I note that a single drivers/cisco would simplify any related
> modifications to MAINTAINERS as well.

No, we do not have "vendor" directories like this, as that's almost
always not what you want over time anyway.  Just scatter your drivers
around the tree based on the type and the subsystem interactions you
need to have.

But step back: why are you creating MFD devices at all?  Why not make
"real" devices in your hardware representation, since you control the FPGA
and DT definitions?  That would make things much easier: if you put the
devices on a discoverable bus, the drivers can be autoloaded by the kernel
as they are discovered.

Do that, then you can put your fpga interface in drivers/fpga :)

thanks,

greg k-h



* Re: Source code organization
  2022-10-20  8:01 ` Greg KH
@ 2022-10-21  1:51   ` Billie Alsup (balsup)
  2022-10-21  4:54     ` Greg KH
  0 siblings, 1 reply; 6+ messages in thread
From: Billie Alsup (balsup) @ 2022-10-21  1:51 UTC (permalink / raw)
  To: Greg KH; +Cc: kernelnewbies


> From: Greg KH <greg@kroah.com>
>        - is this something else?  Then pick a place and submit a patch
>          and people will tell you if you got it wrong :)

I think this is going to be my strategy, except, per a separate recommendation,
I will put it in drivers/platform/cisco, similar to the existing chrome, goldfish,
mellanox, and surface directories already under drivers/platform.  These drivers really
are Cisco hardware specific, so I'd like to keep them grouped together,
if reviewers will allow it!

> But step back, why are you creating MFD devices anyway?  Why not make
> "real" devices in your hardware representation as you control the FPGA
> and DT definitions, right?

Currently the FPGAs themselves are discovered (for the pci bus), or explicitly
instantiated (for i2c, whether via user-mode script, device tree, or
ACPI configuration).  The slpc and p2pm MFD drivers also "discover"
the peer FPGA. Are you suggesting that I create a new bus driver
for a Cisco FPGA?  I was under the impression that the MFD framework
was explicitly meant for this type of application.  I do dynamically
create the array of struct mfd_cell passed to devm_mfd_add_devices
(versus having an fpga specific driver), which achieves the same 
purpose as a bus driver (I think).  It's just that I thought the MFD framework
was meant for this type of application, and that's what I went with.
The drivers themselves have been written over the last several years
(starting with use in ONIE and SONiC, and now being used with FBOSS).  It
is only now that I have been authorized to make the source public.  Changing
to a bus driver top layer would seem to be a very big change.  There
are 30 or so drivers being used today with Cisco FPGAs, and
that number will only grow over time.
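Roughly what the current top layer does, reduced to a sketch (names and the
discovery step are simplified here, not the actual code):

```c
#include <linux/mfd/core.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

/* Simplified sketch: one mfd_cell per IP block found while walking the
 * FPGA; the real code derives the cell name from the block type. */
static int cisco_fpga_register_blocks(struct device *dev, int nblocks)
{
	struct mfd_cell *cells;
	int i;

	cells = devm_kcalloc(dev, nblocks, sizeof(*cells), GFP_KERNEL);
	if (!cells)
		return -ENOMEM;

	for (i = 0; i < nblocks; i++) {
		cells[i].name = "cisco-fpga-gpio"; /* per-block type, illustrative */
		cells[i].id = i;                   /* multiple instances per block */
	}

	/* Each cell is instantiated as a child platform device */
	return devm_mfd_add_devices(dev, PLATFORM_DEVID_AUTO, cells, nblocks,
				    NULL, 0, NULL);
}
```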

> Do that, then you can put your fpga interface in drivers/fpga :)

I will have to look into this more.  I don't think I can make such a
drastic change at this time, but it might be something we can put
on the roadmap if that is the right approach architecturally.  The code
is organized with a separation of layers already, so it
should be doable.  

Thanks for your feedback!



* Re: Source code organization
  2022-10-21  1:51   ` Billie Alsup (balsup)
@ 2022-10-21  4:54     ` Greg KH
  2022-10-21 15:45       ` Billie Alsup (balsup)
  0 siblings, 1 reply; 6+ messages in thread
From: Greg KH @ 2022-10-21  4:54 UTC (permalink / raw)
  To: Billie Alsup (balsup); +Cc: kernelnewbies

On Fri, Oct 21, 2022 at 01:51:48AM +0000, Billie Alsup (balsup) wrote:
> 
> > From: Greg KH <greg@kroah.com>
> >        - is this something else?  Then pick a place and submit a patch
> >          and people will tell you if you got it wrong :)
> 
> I think this is going to be my strategy, except per a separate recommendation,
> I will put it in drivers/platform/cisco, similar to the existing chrome, goldfish,
> mellanox, and surface already under drivers/platform.  These drivers really
> are Cisco hardware specific, so I'd like to keep them grouped together,
> if reviewers will allow it!

Platform drivers are the "catch all" for cases where you have to talk to tiny
hardware-specific devices.  That's normally not an i2c controller.

> > But step back, why are you creating MFD devices anyway?  Why not make
> > "real" devices in your hardware representation as you control the FPGA
> > and DT definitions, right?
> 
> Currently the FPGAs themselves are discovered (for pci bus), or explicitly
> instantiated (for i2c, whether via user mode script, device tree, or
> ACPI configuration).

So you are creating platform devices for a dynamic device that is on
another bus?  Please no, that is an abuse of the platform device code.
That's the biggest reason NOT to use MFD for your code.

Just make normal drivers for these, no need for MFD at all, why do you
want to use that api here?

> The slpc and p2pm MFD drivers also "discover"
> the peer FPGA. Are you suggesting that I create a new bus driver
> for a Cisco FPGA?

If it is a bus, then yes, you should.  That's a very simple thing to do.
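Something like this is all it takes to get a bus registered (the bus name and
the match rule here are only illustrative):

```c
#include <linux/device.h>
#include <linux/module.h>
#include <linux/string.h>

/* Minimal custom bus; a real implementation would match on something
 * like an IP-block id rather than the device name. */
static int cisco_fpga_match(struct device *dev, struct device_driver *drv)
{
	return strcmp(dev_name(dev), drv->name) == 0;
}

static struct bus_type cisco_fpga_bus = {
	.name  = "cisco-fpga",
	.match = cisco_fpga_match,
};

static int __init cisco_fpga_bus_init(void)
{
	return bus_register(&cisco_fpga_bus);
}
module_init(cisco_fpga_bus_init);

static void __exit cisco_fpga_bus_exit(void)
{
	bus_unregister(&cisco_fpga_bus);
}
module_exit(cisco_fpga_bus_exit);

MODULE_LICENSE("GPL");
```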

> I was under the impression that the MFD framework was explicitly meant
> for this type of application.

Nope!  But it has been abused for situations like this.  There are
simpler ways to do this now, including the auxiliary bus code, which
might be what you need to use here instead.

> I do dynamically
> create the array of struct mfd_cell passed to devm_mfd_add_devices
> (versus having an fpga specific driver), which achieves the same 
> purpose as a bus driver (I think).

Close, but you are making fake platform devices for a dynamically discovered
device, which is not ok.  Please use a real bus for this, either your
own, or the aux bus code.
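For the aux bus route, the parent creates one auxiliary_device per function
it exposes, along these lines (the "gpio" name is illustrative; drivers match
it as "<parent modname>.gpio"):

```c
#include <linux/auxiliary_bus.h>
#include <linux/slab.h>

/* Release callback frees the device once the last reference drops */
static void cisco_aux_release(struct device *dev)
{
	kfree(container_of(dev, struct auxiliary_device, dev));
}

static int cisco_add_aux_child(struct device *parent, int id)
{
	struct auxiliary_device *adev;
	int ret;

	adev = kzalloc(sizeof(*adev), GFP_KERNEL);
	if (!adev)
		return -ENOMEM;

	adev->name = "gpio";		/* illustrative function name */
	adev->id = id;			/* distinguishes multiple instances */
	adev->dev.parent = parent;
	adev->dev.release = cisco_aux_release;

	ret = auxiliary_device_init(adev);
	if (ret) {
		kfree(adev);
		return ret;
	}
	ret = auxiliary_device_add(adev);
	if (ret)
		auxiliary_device_uninit(adev);
	return ret;
}
```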

> It's just that I thought MFD framework
> was meant for this type of application, and that's what I went with.
> The drivers themselves have been written over the last several years
> (starting with use in ONIE and SONiC, and now being used with FBOSS).  It
> is only now that I have been authorized to make the source public.  Changing
> to a bus driver top layer would seem to be a very big change.  There
> are 30 or so drivers being used today with Cisco FPGAs, and
> that number will only grow over time.

Changing the bus layer should not be a big deal, and just because the
code is old doesn't mean it doesn't have to be fixed up properly to get it
merged into the tree.  That's the best reason why it needs to be fixed up:
you shouldn't be using APIs and interfaces that are not designed for
what you are doing :)

Do you have a pointer to the source for these?  That might make it
easier to discuss than general concepts here.

thanks,

greg k-h



* Re: Source code organization
  2022-10-21  4:54     ` Greg KH
@ 2022-10-21 15:45       ` Billie Alsup (balsup)
  2024-04-23  4:10         ` auxiliary bus/drivers feature set versus platform/mfd Billie Alsup (balsup)
  0 siblings, 1 reply; 6+ messages in thread
From: Billie Alsup (balsup) @ 2022-10-21 15:45 UTC (permalink / raw)
  To: Greg KH; +Cc: kernelnewbies

> From: Greg KH <greg@kroah.com>

> Platform drivers are the "catch all" for where you have to talk to tiny
> hardware-specific devices.  That's normally not an i2c controller.

I always thought of these as hardware specific devices, as the FPGA images
are hardware specific.  Yes, the top level is currently a "real" driver,
and the MFD drivers are all platform drivers (as required by the MFD
infrastructure).  Certainly we integrate with standard subsystems in
those drivers (off the top of my head: i2c, gpio, spi, mdio, led, watchdog,
and reboot notifier).

> So you are creating platform devices for a dynamic device that is on
> another bus?  Please no, that is an abuse of the platform device code.
> That's the biggest reason NOT to use MFD for your code.

I was not aware of the alternatives.  I inherited the first set of drivers from
a proof of concept implementation, and just went with it.  Originally,
all of this code was in user space, with no integration with standard
kernel infrastructure.  Movement to standard kernel integration started
with the 4.9.168 kernel in the SONiC 201811 branch.

> Just make normal drivers for these, no need for MFD at all, why do you
> want to use that api here?

I will need to read up on the auxiliary bus code and see what it takes to
move to it, or another bus driver implementation.

> Do you have a pointer to the source for these?  That might make it
> easier to discuss than general concepts here.

The first subset of drivers (meant for out-of-tree builds for OpenBMC)
is publicly available at github.com/cisco-open/cisco-8000-kernel-modules.
This set accesses the FPGA through an i2c bus, and provides only a subset
of drivers that are explicitly needed by OpenBMC.  Note that the same
FPGA will be accessed via PCI by an X86 processor in the same chassis
running a different kernel instance from the OpenBMC ARM instance.

The top level in this case is cisco-fpga-bmc.ko.  There is a "library" of
shared functions in libcisco.ko, including the code to walk the FPGA,
discover the IP blocks, and set up the struct mfd_cell array (other common
code shared between the blocks includes routines to integrate with
the reboot notifier, and some arbitration code between the ARM
and X86 processors when accessing the same i2c IP block).  The IP blocks
being used with OpenBMC include:

cisco-fpga-gpio.ko
cisco-fpga-i2c.ko 
cisco-fpga-i2c-ext.ko
cisco-fpga-info.ko
cisco-fpga-msd.ko
cisco-fpga-xil.ko

Note that there are two different i2c implementations provided in this
repository, and two more will eventually be published here.  Each has a
different register specification and different capabilities.  For example,
one type is meant for small SMBus accesses, as might be the case with a
sensor.  Another is meant for larger reads and writes, such as with an
eeprom.  Another provides automatic polling of transceivers and quick
access to the shadow data read from them, and so on.

Walking the FPGA to set up the struct mfd_cell array is done in

https://github.com/cisco-open/cisco-8000-kernel-modules/blob/main/drivers/cisco/mfd.c

One other thing to note, which may be a bit weird, is how we set up regmap.
As previously indicated, each IP block (with its own MFD-instantiated
platform driver) can be accessed in multiple ways, e.g. PCI, I2C, SLPC, and
P2PM.  The parent driver provides a function for the child device to create
a new child-specific regmap.  As you will see in

https://github.com/cisco-open/cisco-8000-kernel-modules/blob/main/drivers/cisco/cisco-fpga-bmc.c

there are read/write functions that are used in the child's regmap
registration.  The same is done for PCI, SLPC, and P2PM, although those
drivers are not in this repository yet.
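
In sketch form, the pattern is the standard regmap one with custom accessors
(the transfer bodies are stubbed out here; the real functions forward to the
parent's transport):

```c
#include <linux/regmap.h>

/* Sketch of a child-specific regmap with parent-provided accessors.
 * The bodies are stubs; the real ones forward to PCI/i2c/SLPC/P2PM. */
static int cisco_child_reg_read(void *context, unsigned int reg,
				unsigned int *val)
{
	*val = 0;	/* stub: real code issues a transfer via the parent */
	return 0;
}

static int cisco_child_reg_write(void *context, unsigned int reg,
				 unsigned int val)
{
	return 0;	/* stub */
}

static const struct regmap_config cisco_child_regmap_cfg = {
	.reg_bits  = 32,
	.val_bits  = 32,
	.reg_read  = cisco_child_reg_read,
	.reg_write = cisco_child_reg_write,
};

/* In the child's probe, with a NULL bus, regmap uses the callbacks above:
 *   struct regmap *map =
 *	devm_regmap_init(dev, NULL, ctx, &cisco_child_regmap_cfg);
 */
```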


* auxiliary bus/drivers feature set versus platform/mfd
  2022-10-21 15:45       ` Billie Alsup (balsup)
@ 2024-04-23  4:10         ` Billie Alsup (balsup)
  0 siblings, 0 replies; 6+ messages in thread
From: Billie Alsup (balsup) @ 2024-04-23  4:10 UTC (permalink / raw)
  To: kernelnewbies

I'm looking at switching from platform drivers (via the MFD subsystem) to
auxiliary bus/device drivers.  There is a lot of functionality available for
platform devices and the MFD subsystem.  I'm wondering if there are any
plans to enhance auxiliary devices to share some of these features?

Some examples include:

     platform_get_resource
     platform_get_irq{_optional}
     mfd_acpi_add_device
     PLATFORM_DEVID_NONE
     PLATFORM_DEVID_AUTO
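
For contrast, the workaround I see today is to carry the resources in a
container struct around the auxiliary_device, instead of using the platform
helpers above (the struct and field names here are hypothetical):

```c
#include <linux/auxiliary_bus.h>
#include <linux/container_of.h>

/* Hypothetical container: the parent fills in resources before adding
 * the auxiliary device, since there is no platform_get_resource here. */
struct cisco_aux_dev {
	struct auxiliary_device adev;
	void __iomem *regs;	/* filled in by the parent */
	int irq;		/* likewise; no platform_get_irq equivalent */
};

static int cisco_gpio_aux_probe(struct auxiliary_device *adev,
				const struct auxiliary_device_id *id)
{
	struct cisco_aux_dev *cdev =
		container_of(adev, struct cisco_aux_dev, adev);

	/* use cdev->regs and cdev->irq directly */
	return 0;
}
```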


