From: Thierry Reding <thierry.reding@gmail.com>
To: Georgi Djakov <georgi.djakov@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>,
Jon Hunter <jonathanh@nvidia.com>,
Dmitry Osipenko <digetx@gmail.com>,
linux-tegra@vger.kernel.org, devicetree@vger.kernel.org
Subject: Re: [RFC 2/2] dt-bindings: firmware: tegra186-bpmp: Document interconnects property
Date: Mon, 20 Jan 2020 16:06:05 +0100
Message-ID: <20200120150605.GA712203@ulmo>
In-Reply-To: <7aefac6c-092c-b5a6-2fa6-e283d2147fc3@linaro.org>
On Fri, Jan 17, 2020 at 05:23:43PM +0200, Georgi Djakov wrote:
> Hi Thierry,
>
> Thanks for the patch!
>
> On 1/14/20 20:15, Thierry Reding wrote:
> > From: Thierry Reding <treding@nvidia.com>
> >
> > Document the interconnects property that is used to describe the paths
> > between the BPMP and system memory.
> >
> > Signed-off-by: Thierry Reding <treding@nvidia.com>
> > ---
> > Rob, Georgi,
> >
> > after the initial RFC that I did for adding interconnect properties on
> > Tegra, I realized that the description wasn't complete. This is an
> > attempt at a more accurate description, but unfortunately I'm not sure
> > if it's even correct in terms of the interconnect bindings.
> >
> > The problem here is that on Tegra, each device has multiple paths to
> > system memory, and I have no good idea on what to pick as the default.
> > They are all basically the same path, but each provides extra controls
> > to configure the "interconnect".
>
> Are these multiple paths between a device and system memory used simultaneously
> for load-balancing, or who makes the decision about which path would be used?

It varies. The vast majority of these paths are read/write pairs, which
can be configured separately. There are also cases where multiple paths
are used for load-balancing, and I don't think there's any direct
software control over which path will be used.

A third class is where you have one device but two read/write pairs:
one tied to a microcontroller that's part of the device, and another
used for DMA to/from the device.

In the latter case, the microcontroller's memory client interfaces will
often be used to read firmware, and once the microcontroller has booted
up, the DMA memory client interfaces will be used to read/write bulk
data (frame buffers, etc.) in system memory.

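To make that split a bit more concrete, here's how I'd annotate the
BPMP example from the patch below (the comments are only illustrative,
to show which memory client I'd expect to belong to which pair):

    bpmp {
        interconnects = /* microcontroller read/write pair */
                        <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPR>,
                        <&mc TEGRA186_MEMORY_CLIENT_BPMPW &emc>,
                        /* DMA read/write pair */
                        <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPDMAR>,
                        <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW &emc>;
    };
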
> Is this based on the client/stream ID that you mentioned previously?

These are all what are called memory client IDs, which identify the
corresponding interface to the memory controller. Stream IDs are
slightly higher-level and typically identify the "module" that uses
the SMMU. Generally a stream ID is mapped to one or more memory client
IDs.

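So for the BPMP, again as a rough sketch using the values from the
example below, the relationship is one stream ID to several memory
client IDs:

    bpmp {
        /* one stream ID identifying the BPMP towards the SMMU */
        iommus = <&smmu TEGRA186_SID_BPMP>;

        /*
         * ... which corresponds to several memory client IDs, one
         * per interface to the memory controller
         */
        interconnects = <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPR>,
                        <&mc TEGRA186_MEMORY_CLIENT_BPMPW &emc>,
                        <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPDMAR>,
                        <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW &emc>;
    };
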
> Looking at the binding below, it seems to me like there are different
> master/slave pairs between MC and EMC and each link is used for
> unidirectional traffic only. In terms of the interconnect API, both read
> and write paths have the same direction.

I'm not sure I understand what you mean by this last sentence. Are you
saying that each path, in terms of the interconnect API, is always a
bidirectional link?

> Is the EMC really an interconnect provider or is it just a slave port? Can
> we scale both EMC and MC independently?

The EMC is the only one of the two where we can scale the frequency,
but the MC has various knobs that can be used to fine-tune arbitration,
set maximum latency, etc.

I vaguely recall Dmitry mentioning that the EMC in early generations of
Tegra used to have controls for individual memory clients, but I don't
see that in more recent generations.

Thierry

> > Any ideas on how to resolve this? Let me know if the DT bindings and
> > example don't make things clear enough.
> >
> > Thierry
> >
> > .../firmware/nvidia,tegra186-bpmp.yaml | 59 +++++++++++++++++++
> > 1 file changed, 59 insertions(+)
> >
> > diff --git a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
> > index dabf1c1aec2f..d40fcd836e90 100644
> > --- a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
> > +++ b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
> > @@ -43,6 +43,24 @@ properties:
> >        - enum:
> >            - nvidia,tegra186-bpmp
> >
> > +  interconnects:
> > +    $ref: /schemas/types.yaml#/definitions/phandle-array
> > +    description: A list of phandle and specifier pairs that describe the
> > +      interconnect paths to and from the BPMP.
> > +
> > +  interconnect-names:
> > +    $ref: /schemas/types.yaml#/definitions/non-unique-string-array
> > +    description: One string for each pair of phandle and specifier in the
> > +      "interconnects" property.
> > +    # XXX We need at least one of these to be named dma-mem so that the core
> > +    # will set the DMA mask based on the DMA parent, but all of these go to
> > +    # system memory eventually.
> > +    items:
> > +      - const: dma-mem
> > +      - const: dma-mem
> > +      - const: dma-mem
> > +      - const: dma-mem
> > +
> >    iommus:
> >      $ref: /schemas/types.yaml#/definitions/phandle-array
> >      description: |
> > @@ -152,8 +170,43 @@ additionalProperties: false
> >
> >  examples:
> >    - |
> > +    #include <dt-bindings/clock/tegra186-clock.h>
> >      #include <dt-bindings/interrupt-controller/arm-gic.h>
> >      #include <dt-bindings/mailbox/tegra186-hsp.h>
> > +    #include <dt-bindings/memory/tegra186-mc.h>
> > +
> > +    mc: memory-controller@2c00000 {
> > +        compatible = "nvidia,tegra186-mc";
> > +        reg = <0x02c00000 0xb0000>;
> > +        interrupts = <GIC_SPI 223 IRQ_TYPE_LEVEL_HIGH>;
> > +        status = "disabled";
> > +
> > +        #interconnect-cells = <1>;
> > +        #address-cells = <2>;
> > +        #size-cells = <2>;
> > +
> > +        ranges = <0x02c00000 0x0 0x02c00000 0x0 0xb0000>;
> > +
> > +        /*
> > +         * Memory clients have access to all 40 bits that the memory
> > +         * controller can address.
> > +         */
> > +        dma-ranges = <0x0 0x0 0x0 0x100 0x0>;
> > +
> > +        #memory-controller-cells = <0>;
> > +
> > +        emc: external-memory-controller@2c60000 {
> > +            compatible = "nvidia,tegra186-emc";
> > +            reg = <0x0 0x02c60000 0x0 0x50000>;
> > +            interrupts = <GIC_SPI 224 IRQ_TYPE_LEVEL_HIGH>;
> > +            clocks = <&bpmp TEGRA186_CLK_EMC>;
> > +            clock-names = "emc";
> > +
> > +            #interconnect-cells = <0>;
> > +
> > +            nvidia,bpmp = <&bpmp>;
> > +        };
> > +    };
> >
> >      hsp_top0: hsp@3c00000 {
> >          compatible = "nvidia,tegra186-hsp";
> > @@ -187,6 +240,12 @@ examples:
> >
> >      bpmp {
> >          compatible = "nvidia,tegra186-bpmp";
> > +        interconnects = <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPR>,
> > +                        <&mc TEGRA186_MEMORY_CLIENT_BPMPW &emc>,
> > +                        <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPDMAR>,
> > +                        <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW &emc>;
> > +        interconnect-names = "dma-mem", "dma-mem", "dma-mem", "dma-mem";
> > +
> >          iommus = <&smmu TEGRA186_SID_BPMP>;
> >          mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_BPMP>;
> >          shmem = <&cpu_bpmp_tx &cpu_bpmp_rx>;
> >