From: David Daney <ddaney@caviumnetworks.com>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: Rob Herring <robh@kernel.org>, David Daney <ddaney.cavm@gmail.com>, Bjorn Helgaas <bhelgaas@google.com>, "linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>, Will Deacon <will.deacon@arm.com>, "linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>, Pawel Moll <pawel.moll@arm.com>, Mark Rutland <mark.rutland@arm.com>, Ian Campbell <ijc+devicetree@hellion.org.uk>, Kumar Gala <galak@codeaurora.org>, "devicetree@vger.kernel.org" <devicetree@vger.kernel.org>, "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, David Daney <david.daney@cavium.com>
Subject: Re: [PATCH v5 3/3] pci, pci-thunder-ecam: Add driver for ThunderX-pass1 on-chip devices
Date: Mon, 8 Feb 2016 15:31:54 -0800
Message-ID: <56B9256A.5040106@caviumnetworks.com> (raw)
In-Reply-To: <20160208232430.GA1353@localhost>

On 02/08/2016 03:24 PM, Bjorn Helgaas wrote:
> On Mon, Feb 08, 2016 at 02:41:41PM -0800, David Daney wrote:
>> On 02/08/2016 02:12 PM, Bjorn Helgaas wrote:
>>> On Mon, Feb 08, 2016 at 01:39:21PM -0800, David Daney wrote:
>>>> On 02/08/2016 01:12 PM, Rob Herring wrote:
>>>>> On Mon, Feb 8, 2016 at 2:47 PM, David Daney <ddaney@caviumnetworks.com> wrote:
>>>>>> On 02/08/2016 11:56 AM, Rob Herring wrote:
>>>>>>> On Fri, Feb 05, 2016 at 03:41:15PM -0800, David Daney wrote:
>>>>>>>> From: David Daney <david.daney@cavium.com>
>>>>>>>> +Properties of the host controller node that differ from
>>>>>>>> +host-generic-pci.txt:
>>>>>>>> +
>>>>>>>> +- compatible : Must be "cavium,pci-host-thunder-ecam"
>>>>>>>> +
>>>>>>>> +Example:
>>>>>>>> +
>>>>>>>> +	pci@84b0,00000000 {
>> ...
>>>>>>> and the node name should be "pcie".
>>>>>>
>>>>>> Why pcie?
>>>>>>
>>>>>> There are no PCIe devices or buses reachable from this type of root complex.
>>>>>> There are however many PCI devices.
>> ...
>
>>>> Really, it is a bit of a gray area here as we don't have any bridges
>>>> to PCIe buses and there are multiple devices residing on each bus,
>>>> so from that point of view it cannot be PCIe.  There are, however,
>>>> devices that implement the PCI Express Capability structure, so does
>>>> that make it PCIe?  It is not clear what the specifications demand
>>>> here.
>>>
>>> The PCI core doesn't care about the node name in the device tree.  But
>>> it *does* care about some details of PCI/PCIe topology.  We consider
>>> anything with a PCIe capability to be PCIe.  For example,
>>>
>>>   - pci_cfg_space_size() thinks PCIe devices have 4K of config space
>>>
>>>   - only_one_child() thinks a PCIe bus, i.e., a link, only has a
>>>     single device on it
>>>
>>>   - a PCIe device should have a PCIe Root Port or PCIe Downstream Port
>>>     upstream from it (we did remove some of these restrictions with
>>>     b35b1df5e6c2 ("PCI: Tolerate hierarchies with no Root Port"), but
>>>     it's possible we didn't get them all)
>>>
>>> I assume your system conforms to expectations like these; I'm just
>>> pointing them out because you mentioned buses with multiple devices on
>>> them, which is definitely something one doesn't expect in PCIe.
>>
>> The topology we have is currently working with the kernel's core PCI
>> code.  I don't really want to get into discussing what the
>> definition of PCIe is.  We have multiple devices (more than 32) on a
>> single bus, and they have PCI Express and ARI Capabilities.  Is that
>> PCIe?  I don't know.
>
> I don't need to know the details of your topology.  As long as it
> conforms to the PCIe spec, it should be fine.  If it *doesn't* conform
> to the spec, but things currently seem to work, that's less fine,
> because a future Linux change is liable to break something for you.
>
> I was a little concerned about your statement that "there are multiple
> devices residing on each bus, so from that point of view it cannot be
> PCIe."
> That made it sound like you're doing something outside the
> spec.  If you're just using regular multi-function devices or ARI,
> then I don't see any issue (or any reason to say it can't be PCIe).

OK, I will make it "pcie@...."

Really, ARI is the only reason.  But since ARI is defined in the PCI
Express specification, pcie it is.

I will send revised patches today.

David Daney

>
> Bjorn
>