From: Thierry Reding <thierry.reding@gmail.com>
To: Dmitry Osipenko <digetx@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>,
	Peter De Schrijver <pdeschrijver@nvidia.com>,
	Mikko Perttunen <mperttunen@nvidia.com>,
	Georgi Djakov <georgi.djakov@linaro.org>,
	Rob Herring <robh+dt@kernel.org>,
	linux-tegra@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	devicetree@vger.kernel.org
Subject: Re: [PATCH v1 08/29] dt-bindings: interconnect: tegra: Add initial IDs
Date: Mon, 25 Nov 2019 12:32:18 +0100
Message-ID: <20191125113218.GK1409040@ulmo>
In-Reply-To: <f0f36176-8070-08a6-a61f-77221d916f04@gmail.com>

On Thu, Nov 21, 2019 at 08:14:35PM +0300, Dmitry Osipenko wrote:
> 19.11.2019 19:56, Dmitry Osipenko пишет:
> > 19.11.2019 09:25, Thierry Reding пишет:
> >> On Mon, Nov 18, 2019 at 11:02:26PM +0300, Dmitry Osipenko wrote:
> >>> Define interconnect IDs for memory controller (MC), external memory
> >>> controller (EMC), external memory (EMEM) and memory clients of display
> >>> controllers (DC).
> >>>
> >>> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
> >>> ---
> >>>  include/dt-bindings/interconnect/tegra-icc.h | 11 +++++++++++
> >>>  1 file changed, 11 insertions(+)
> >>>  create mode 100644 include/dt-bindings/interconnect/tegra-icc.h
> > 
> > 
> > Hello Thierry,
> > 
> >> There was a bit of discussion regarding this for a recent patch that I
> >> was working on, see:
> >>
> >> 	http://patchwork.ozlabs.org/project/linux-tegra/list/?series=140318
> > 
> > Thank you very much for the link.
> > 
> >> I'd rather not use an additional set of definitions for this. The memory
> >> controller already has a set of native IDs for memory clients that I
> >> think we can reuse for this.
> > 
> > I missed that it's fine to have multiple ICC connections defined
> > per path; at a quick glance it looks like it should indeed be fine
> > to re-use the MC IDs.
> 
> Well, it is not quite correct to have multiple connections per path.
> 
> Please take a look at the interconnect binding and core.c:
> 
>   1. there should be one src->dst connection per path
>   2. each connection should consist of one source node and one
>      destination node
> 
> >> I've only added these client IDs for Tegra194 because that's where we
> >> need it to actually describe a specific hardware quirk, but I can come
> >> up with the equivalent for older chips as well.
> > 
> > Older Tegra SoCs have hardware units connected to the MC through the
> > AHB bus, USB for example. These units do not have MC client IDs, and
> > there is no MC ID defined for the AHB bus either, but it probably
> > won't be a problem to define IDs for them if that becomes necessary.
> > 
> 
> Since the interconnect binding requires both source and destination
> nodes to be defined for a path, the MC IDs alone are not enough to
> describe an interconnect path, because these IDs only represent the
> source nodes. The destination node should be either EMC or EMEM.

This doesn't really map well to Tegra. The source of the path is always
the device and the destination is always the memory controller. We can
also have multiple paths between a device and the memory controller. The
typical case is to have at least a read and a write path, but there are
a number of devices that have multiple read and/or multiple write paths
to the memory controller.

Or perhaps I'm looking at this the wrong way, and what we really ought
to describe is the paths with MC sitting in the middle. So it'd be
something like:

	MC ID --- source ---> MC --- destination ---> EMC

for write paths and:

	EMC --- source ---> MC --- destination ---> MC ID

for read paths. I have no idea what would be a good connection ID for
EMC, since I don't think MC really differentiates at that level. Perhaps
#interconnect-cells = <0> for EMC would be appropriate.
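
As a rough sketch (the node names and unit addresses below are made up,
only the #interconnect-cells values matter), the provider nodes could
then look something like this:

	mc: memory-controller@2c00000 {
		...
		/* one cell selects the memory client (MC ID) */
		#interconnect-cells = <1>;
	};

	emc: external-memory-controller@2c60000 {
		...
		/* EMC is a single endpoint, so no ID is needed */
		#interconnect-cells = <0>;
	};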

This would make the bindings look more like this, taking a random sample
from the above series:

	ethernet@2490000 {
		...
		interconnects = <&emc &mc TEGRA194_MEMORY_CLIENT_EQOSR>,
				<&mc TEGRA194_MEMORY_CLIENT_EQOSW &emc>;
		interconnect-names = "dma-mem", "dma-mem";
		...
	};

In words, the above would mean that for the ethernet device there is one
path (a read slave interface) where data flows from the EMC through the
MC to the device with memory client ID TEGRA194_MEMORY_CLIENT_EQOSR. The
second path (a write slave interface) describes data flowing from the
device (with memory client ID TEGRA194_MEMORY_CLIENT_EQOSW) through the
MC and towards the EMC.

Irrespective of the above, I think we definitely need to keep separate
IDs for read and write paths because each of them has separate controls
for arbitration and latency allowance. So each of those may need to be
separately configurable.
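
If we go that way, the consumer probably also wants distinct
interconnect-names so that it can look up and configure each path
individually, e.g. (the "read"/"write" names here are only placeholders
to keep the two paths distinguishable, not an established convention):

	ethernet@2490000 {
		...
		interconnects = <&emc &mc TEGRA194_MEMORY_CLIENT_EQOSR>,
				<&mc TEGRA194_MEMORY_CLIENT_EQOSW &emc>;
		interconnect-names = "read", "write";
		...
	};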

Does that make sense?

Thierry

Thread overview: 46+ messages
2019-11-18 20:02 [PATCH v1 00/29] Introduce memory interconnect for NVIDIA Tegra SoCs Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 01/29] dt-bindings: memory: tegra20: mc: Document new interconnect property Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 02/29] dt-bindings: memory: tegra20: emc: " Dmitry Osipenko
2019-11-19  6:21   ` Thierry Reding
2019-11-19 16:57     ` Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 03/29] dt-bindings: memory: tegra30: mc: " Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 04/29] dt-bindings: memory: tegra30: emc: " Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 05/29] dt-bindings: memory: tegra124: mc: " Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 06/29] dt-bindings: memory: tegra124: emc: " Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 07/29] dt-bindings: host1x: Document new interconnect properties Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 08/29] dt-bindings: interconnect: tegra: Add initial IDs Dmitry Osipenko
2019-11-19  6:25   ` Thierry Reding
2019-11-19 16:56     ` Dmitry Osipenko
2019-11-21 17:14       ` Dmitry Osipenko
2019-11-25 11:32         ` Thierry Reding [this message]
2019-11-28 20:06           ` Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 09/29] ARM: tegra: Add interconnect properties to Tegra20 device-tree Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 10/29] ARM: tegra: Add interconnect properties to Tegra30 device-tree Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 11/29] ARM: tegra: Add interconnect properties to Tegra124 device-tree Dmitry Osipenko
2019-11-19  6:27   ` Thierry Reding
2019-11-18 20:02 ` [PATCH v1 12/29] interconnect: Add memory interconnection providers for NVIDIA Tegra SoCs Dmitry Osipenko
2019-11-19  6:30   ` Thierry Reding
2019-11-19 16:58     ` Dmitry Osipenko
2019-11-21 17:33       ` Dmitry Osipenko
2019-11-19  6:31   ` Thierry Reding
2019-11-19 16:59     ` Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 13/29] memory: tegra: Register as interconnect provider Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 14/29] memory: tegra: Add interconnect nodes for Terga20 display controllers Dmitry Osipenko
2019-11-19  6:34   ` Thierry Reding
2019-11-18 20:02 ` [PATCH v1 15/29] memory: tegra: Add interconnect nodes for Terga30 " Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 16/29] memory: tegra: Add interconnect nodes for Terga124 " Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 17/29] memory: tegra20-emc: Use devm_platform_ioremap_resource Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 18/29] memory: tegra20-emc: Continue probing if timings/IRQ are missing in device-tree Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 19/29] memory: tegra20-emc: Register as interconnect provider Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 20/29] memory: tegra30-emc: Continue probing if timings are missing in device-tree Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 21/29] memory: tegra30-emc: Register as interconnect provider Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 22/29] memory: tegra124-emc: Use devm_platform_ioremap_resource Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 23/29] memory: tegra124-emc: Register as interconnect provider Dmitry Osipenko
2019-11-19 16:57   ` Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 24/29] drm/tegra: dc: Use devm_platform_ioremap_resource Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 25/29] drm/tegra: dc: Release PM and RGB output when client's registration fails Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 26/29] drm/tegra: dc: Support memory bandwidth management Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 27/29] ARM: tegra: Enable interconnect API in tegra_defconfig Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 28/29] ARM: multi_v7_defconfig: Enable NVIDIA Tegra interconnect providers Dmitry Osipenko
2019-11-18 20:02 ` [PATCH v1 29/29] MAINTAINERS: Add maintainers for NVIDIA Tegra interconnect drivers Dmitry Osipenko
2019-11-19  6:19 ` [PATCH v1 00/29] Introduce memory interconnect for NVIDIA Tegra SoCs Thierry Reding
