From: Linus Walleij <linus.walleij@linaro.org>
To: Ben Levinsky <ben.levinsky@xilinx.com>, Catalin Marinas <catalin.marinas@arm.com>
Cc: ed.mooring@xilinx.com, sunnyliangjy@gmail.com, Punit Agrawal <punit1.agrawal@toshiba.co.jp>, stefanos@xilinx.com, michals@xilinx.com, michael.auchter@ni.com, "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS" <devicetree@vger.kernel.org>, Mathieu Poirier <mathieu.poirier@linaro.org>, linux-remoteproc@vger.kernel.org, "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, Rob Herring <robh+dt@kernel.org>, Linux ARM <linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v18 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings
Date: Thu, 8 Oct 2020 14:37:14 +0200
Message-ID: <CACRpkdb1x=U28VWZGDJh6gJSzaqeNxx0m+WtnUQZJKGvXjvXYQ@mail.gmail.com>
In-Reply-To: <20201005160614.3749-5-ben.levinsky@xilinx.com>

Hi Ben,

thanks for your patch! I noticed this today and took some interest
because in the past I was involved in implementing the support for
TCM memory on ARM32.

On Mon, Oct 5, 2020 at 6:06 PM Ben Levinsky <ben.levinsky@xilinx.com> wrote:

> Add binding for ZynqMP R5 OpenAMP.
>
> Represent the RPU domain resources in one device node. Each RPU
> processor is a subnode of the top RPU domain node.
>
> Signed-off-by: Jason Wu <j.wu@xilinx.com>
> Signed-off-by: Wendy Liang <jliang@xilinx.com>
> Signed-off-by: Michal Simek <michal.simek@xilinx.com>
> Signed-off-by: Ben Levinsky <ben.levinsky@xilinx.com>

(...)

> +title: Xilinx R5 remote processor controller bindings
> +
> +description:
> +  This document defines the binding for the remoteproc component that loads and
> +  boots firmwares on the Xilinx Zynqmp and Versal family chipset.

... firmwares for the on-board Cortex R5 of the ZynqMP ... (etc)

> +
> +  Note that the Linux has global addressing view of the R5-related memory (TCM)
> +  so the absolute address ranges are provided in TCM reg's.
Please do not refer to Linux in bindings, they are also for other
operating systems. Isn't that spelled out "Tightly Coupled Memory"?
(Please expand the acronym.)

I had a hard time parsing this description, do you mean:

"The Tightly Coupled Memory (an on-chip SRAM) used by the Cortex R5
is double-ported and visible in both the physical memory space of
the Cortex R5 and the memory space of the main ZynqMP processor
cluster. This is visible in the address space of the ZynqMP
processor at the address indicated here."

That would make sense, but please confirm/update.

> +  memory-region:
> +    description:
> +      collection of memory carveouts used for elf-loading and inter-processor
> +      communication. each carveout in this case should be in DDR, not
> +      chip-specific memory. In Xilinx case, this is TCM, OCM, BRAM, etc.
> +    $ref: /schemas/types.yaml#/definitions/phandle-array

This is nice, you're reusing the infrastructure we already have for
these carveouts, good design!

> +  meta-memory-regions:
> +    description:
> +      collection of memories that are not present in the top level memory
> +      nodes' mapping. For example, R5s' TCM banks. These banks are needed
> +      for R5 firmware meta data such as the R5 firmware's heap and stack.
> +      To be more precise, this is on-chip reserved SRAM regions, e.g. TCM,
> +      BRAM, OCM, etc.
> +    $ref: /schemas/types.yaml#/definitions/phandle-array

Is this in the memory space of the main CPU cluster? It sure looks
like that.

> +    /*
> +     * Below nodes are required if using TCM to load R5 firmware
> +     * if not, then either do not provide nodes are label as disabled in
> +     * status property
> +     */
> +    tcm0a: tcm_0a@ffe00000 {
> +        reg = <0xffe00000 0x10000>;
> +        pnode-id = <0xf>;
> +        no-map;
> +        status = "okay";
> +        phandle = <0x40>;
> +    };
> +    tcm0b: tcm_1a@ffe20000 {
> +        reg = <0xffe20000 0x10000>;
> +        pnode-id = <0x10>;
> +        no-map;
> +        status = "okay";
> +        phandle = <0x41>;
> +    };

All right, so this looks suspicious to me.
Please explain what we are seeing in those reg entries. Is this the
address seen by the main CPU cluster? Does it mean that the main CPU
sees the memory of the R5 as "some kind of TCM", and that TCM is
physically mapped at 0xffe00000 (ITCM) and 0xffe20000 (DTCM)?

If the first is ITCM and the second DTCM, that is pretty important
to point out, since this reflects the Harvard architecture
properties of these two memory areas.

The phandle = thing I do not understand at all, but maybe there is
generic documentation for it that I've missed?

Last time I checked (which was on ARM32), the physical address of
the ITCM and DTCM could be changed at runtime with CP15
instructions. I might be wrong about this, but if that (or something
similar) is still the case, you can't just hardcode these addresses
here: the CPU can move that physical address somewhere else. See the
code in arch/arm/kernel/tcm.c. It appears the ARM64 Linux kernel
does not have any TCM handling today, but that could change.

So is this just regular ARM TCM memory (as seen by the main ARM64
cluster)? If this is the case, you should probably add back the
compatible string and add a separate device tree binding for TCM
memories, along the lines of:

compatible = "arm,itcm";
compatible = "arm,dtcm";

The reg address should then ideally be interpreted by the ARM64
kernel and assigned to the I/DTCM.

I'm paging Catalin on this because I do not know if ARM64 really has
[I|D]TCM or if this is some invention of Xilinx's.

Yours,
Linus Walleij
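As an illustration, nodes following that suggestion could look
something like the sketch below. Note this is only a sketch: the
"arm,itcm"/"arm,dtcm" compatibles are the proposal made in this
review and do not exist as accepted bindings, and the addresses and
sizes are copied from the example in the patch under discussion.

```
reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* Hypothetical: "arm,itcm"/"arm,dtcm" are proposed, not merged */
        tcm0a: tcm@ffe00000 {
                compatible = "arm,itcm";
                reg = <0x0 0xffe00000 0x0 0x10000>;
                no-map;
        };

        tcm0b: tcm@ffe20000 {
                compatible = "arm,dtcm";
                reg = <0x0 0xffe20000 0x0 0x10000>;
                no-map;
        };
};
```

With such a binding the remoteproc node would reference the TCM
banks by phandle (e.g. in meta-memory-regions) instead of carrying
raw pnode-id/phandle numbers in the nodes themselves.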