From: Rob Herring <robh@kernel.org>
To: Frank Rowand <frowand.list@gmail.com>
Cc: clement.leger@bootlin.com, Lizhi Hou <lizhi.hou@amd.com>,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, helgaas@kernel.org,
	max.zhen@amd.com, sonal.santan@amd.com, larry.liu@amd.com,
	brian.xu@amd.com, stefano.stabellini@xilinx.com, trix@redhat.com,
	Thomas Petazzoni <thomas.petazzoni@bootlin.com>,
	Alexandre Belloni <alexandre.belloni@bootlin.com>,
	"Steen.Hegelund@microchip.com" <Steen.Hegelund@microchip.com>,
	"Horatiu.Vultur@microchip.com" <Horatiu.Vultur@microchip.com>,
	"Allan.Nielsen@microchip.com" <Allan.Nielsen@microchip.com>,
	Vincent Whitchurch <vincent.whitchurch@axis.com>
Subject: Re: [PATCH V7 0/3] Generate device tree node for pci devices
Date: Mon, 6 Mar 2023 18:52:42 -0600
Message-ID: <CAL_JsqL_ER32ys-yW_7-QKLjEmKK8StOeM5yvH2ChuvX++fe5Q@mail.gmail.com>
In-Reply-To: <1886b888-a0e8-b1ee-c48a-ddbc8b5b0c63@gmail.com>

On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/6/23 02:35, clement.leger@bootlin.com wrote:
> > On 2023-03-04 00:42, Frank Rowand wrote:
> >> On 2/27/23 04:31, Clément Léger wrote:
> >>> On Mon, 27 Feb 2023 00:51:29 -0600,
> >>> Frank Rowand <frowand.list@gmail.com> wrote:
> >>>
> >>>> On 1/19/23 21:02, Lizhi Hou wrote:
> >>>>> This patch series introduces OF overlay support for PCI devices,
> >>>>> which primarily addresses two use cases. First, it provides a
> >>>>> data-driven method to describe hardware peripherals that are present
> >>>>> in a PCI endpoint and hence can be accessed by the PCI host. Second,
> >>>>> it allows reuse of an OF-compatible driver -- often used in SoC
> >>>>> platforms -- in a PCI host based system.
> >>>>>
> >>>>> There are two series of devices that rely on these patches:
> >>>>>
> >>>>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
> >>>>>   2) Microchip LAN9662 Ethernet Controller
> >>>>>
> >>>>>      Please see:
> >>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> >>>>>
> >>>>
> >>>>
> >>>>> Normally, the PCI core discovers PCI devices and their BARs using the
> >>>>> PCI enumeration process. However, the process does not provide a way to
> >>>>> discover the hardware peripherals that are present in a PCI device, and
> >>>>> which can be accessed through the PCI BARs. Also, the enumeration process
> >>>>
> >>>> I'm confused.  The PCI Configuration Header Registers should describe
> >>>> the hardware on the PCI card.
> >>>>
> >>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
> >>>> themselves, so I would like to analyze that case separately), does the
> >>>> second device, "Microchip LAN9662 Ethernet Controller" properly
> >>>> implement the PCI Configuration Header Registers?  What additional
> >>>> information is needed that is not provided in those registers?
> >>>
> >>> Hi Frank,
> >>>
> >>> I guess Lizhi wanted to say that it does not provide a way to describe
> >>> all the "platform" devices that are exposed by this PCI device, which
> >>> is of course the whole point of the work we are doing right now. But
> >>> all the BARs are correctly described by the LAN9662 PCI card.
> >>>
> >>> Clément
> >>
> >> I remain confused.
> >>
> >> [RFC 00/10] add support for fwnode in i2c mux system and sfp
> >> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
> >>
> >>   references a PCIe driver:
> >>   [2] https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
> >>
> >> So there is a PCIe driver that works.
> >>
> >> However, the RFC patch series was proposing adding fwnode support to the
> >> driver.  My first surface reading (just part of that one email, not the
> >> entire series or the replies yet), notes:
> >>
> >>   ... However, when plugged in a PCIe slot (on a x86), there is no
> >>   device-tree support and the peripherals that are present must be
> >>   described in some other way.
> >>
> >> I am assuming that the peripherals are what you mentioned above as
> >> '"platform" devices'.  This is where my current confusion lies.  Are the
> >> "platform" devices accessed via the PCI bus or is there some other
> >> electrical connection between the host system and the PCIe card?
> >
> > Hi Frank,
> >
> > The platform devices exposed by this PCIe card are available via some
> > BAR using PCI memory mapped areas, so it's totally standard PCI stuff.
> >
> >>
> >> If the "platform" devices are accessed via the PCI bus, then I would
> >> expect them
> >> to be described by PCI configuration header registers.  Are the PCI
> >> configuration
> >> registers to describe the "platform" devices not present?
> >
> > I'm not sure I understand what you mean here. PCI configuration headers
> > only provide some basic registers that identify the PCI device
> > (vendor/product) and describe the memory areas it exposes (BARs). They do
> > not provide a "list" of the peripherals exposed by the device, only some
> > BARs that can be mapped in order to access them.
>
> Yes, "identify the PCI device (vendor/product) and some memory areas".
> The driver for the (vendor/product) 'knows' what peripherals are exposed
> by the device and where within the BAR to find the registers for each
> of the devices.
>
> A normal PCI driver would contain this information.  If I understand the
> proposal of this patch series, of_pci_make_dev_node() adds a node to
> the devicetree when invoked via a PCI quirk for certain specific
> vendor/product cards.  This node must exist before the flattened device
> tree (FDT) overlay for the card can be loaded.  The driver for the card
> will get the overlay FDT from the card and load it into the kernel.
> The driver will then use the information in the devicetree describing the
> card, instead of information from the card's PCI configuration headers.
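
For reference, the mechanism described above boils down to roughly the
sketch below -- a rough illustration assuming the of_pci_make_dev_node()
interface proposed in this series, with the vendor/device IDs standing in
for whatever card actually needs a node:

  #include <linux/pci.h>  /* DECLARE_PCI_FIXUP_FINAL(), PCI_VENDOR_ID_XILINX */

  /* Helper added by this series; prototype assumed from its use as a fixup hook. */
  void of_pci_make_dev_node(struct pci_dev *pdev);

  /*
   * Illustrative quirk: have the PCI core create a device tree node for this
   * vendor/device during enumeration, so that a DT overlay describing the
   * card's downstream devices can later be applied under that node.
   */
  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5020, of_pci_make_dev_node);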

How would all the sub-devices be defined by the PCI config space, other
than having a VID/PID imply *everything*? That's the same as the pre-DT
world where the ARM machine ID number (from RMK's registry) implied
everything. These days, we can have an entire SoC exposed behind a PCI
BAR, which I think is pretty much Clement's use case. Putting an SoC
behind a PCI BAR is no more discoverable than a "normal" SoC.

>
> The intent is to be able to re-use devicetree based drivers instead of
> having the driver be a native PCI driver.

Not instead of. There's the PCI driver for the FPGA or SoC bus with
multiple unrelated devices behind it. The PCI driver is just a bus
driver much like we have for various custom SoC bus drivers.

> This goes against historical Linux practice.  The idea of taking a driver
> from another environment (e.g. Windows, HP Unix, Sun Unix, IBM Unix, etc.)
> and adding a shim layer to translate between Linux and the other
> environment has been rejected.  Ironically, in this case, the other
> environment is Linux (more specifically the Linux OF implementation).

I don't see how your example relates to this in any way whatsoever.
We're talking about different discovery mechanisms, not different
driver models/environments.

> Even though the other environment is Linux, this is still adding a
> shim layer to translate between that other environment and the native
> Linux PCI environment for which the driver would normally be written.
>
> In other words, this is not acceptable.  Normal alternatives would be
> something like
> (1) add the PCI awareness to the existing drivers,

The downstream devices don't have their own PCI config space, so that
won't work. PCI drivers expect a device with PCI config space, and devices
and drivers are always 1:1, so we couldn't share the config space among
multiple drivers. For non-discoverable devices like these, our choices are
DT, ACPI, or s/w nodes (aka platform_data 2.0).
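
To make the s/w nodes option concrete, a bus-style PCI driver could register
one of its downstream devices along these lines (a rough sketch: the child
name, property, and BAR offset are made up, and this is not code from this
series):

  #include <linux/err.h>
  #include <linux/ioport.h>
  #include <linux/pci.h>
  #include <linux/platform_device.h>
  #include <linux/property.h>

  static int demo_bus_probe(struct pci_dev *pdev, const struct pci_device_id *id)
  {
          /* Properties the child driver reads back through the fwnode API. */
          static const struct property_entry demo_props[] = {
                  PROPERTY_ENTRY_U32("clock-frequency", 50000000),
                  { }
          };
          /* Carve one sub-device's register window out of BAR 0. */
          struct resource res = DEFINE_RES_MEM(pci_resource_start(pdev, 0) + 0x1000,
                                               0x100);
          struct platform_device_info info = {
                  .parent     = &pdev->dev,
                  .name       = "demo-uart",          /* hypothetical child device */
                  .id         = PLATFORM_DEVID_AUTO,
                  .res        = &res,                 /* copied on registration */
                  .num_res    = 1,
                  .properties = demo_props,           /* the "platform_data 2.0" bit */
          };
          int ret;

          ret = pcim_enable_device(pdev);
          if (ret)
                  return ret;

          return PTR_ERR_OR_ZERO(platform_device_register_full(&info));
  }

Everything there is ordinary driver-model plumbing; the point is just that
the description of the children has to come from somewhere other than PCI
config space, whether that is DT, ACPI, or properties hard-coded in a
driver like this.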

> (2) split the devicetree aware and PCI aware portions of the driver
> to common code that would be invoked from separate devicetree and PCI
> drivers,

That only makes sense for something that is a single driver. Not the
case here. For the FPGA, the devices are not known up front.

> (3) write entirely separate devicetree and PCI drivers, or

For the same reason as 1, that simply won't work.

> (4) some other creative solution.
>
> Am I mis-interpretting or misunderstanding anything crucial here?

Yes...

We now have 3 different use cases all needing the same thing. The
3rd[1] is the recent test infrastructure change to have test devices
added. They all have non-discoverable devices downstream of a PCI
device. We need a solution here.

Rob

[1] https://lore.kernel.org/lkml/20230120-simple-mfd-pci-v1-1-c46b3d6601ef@axis.com/
