From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
To: Sumit Gupta <sumitg@nvidia.com>,
treding@nvidia.com, dmitry.osipenko@collabora.com,
viresh.kumar@linaro.org, rafael@kernel.org, jonathanh@nvidia.com,
robh+dt@kernel.org, lpieralisi@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-tegra@vger.kernel.org,
linux-pm@vger.kernel.org, devicetree@vger.kernel.org,
linux-pci@vger.kernel.org, mmaddireddy@nvidia.com, kw@linux.com,
bhelgaas@google.com, vidyas@nvidia.com, sanjayc@nvidia.com,
ksitaraman@nvidia.com, ishah@nvidia.com, bbasu@nvidia.com
Subject: Re: [Patch v2 0/9] Tegra234 Memory interconnect support
Date: Mon, 6 Mar 2023 16:05:22 +0100 [thread overview]
Message-ID: <c8cf2435-8b18-7af7-c751-267021142f5a@linaro.org> (raw)
In-Reply-To: <20230220140559.28289-1-sumitg@nvidia.com>
On 20/02/2023 15:05, Sumit Gupta wrote:
> This patch series adds memory interconnect support for the Tegra234 SoC.
> It is used to dynamically scale the DRAM frequency according to the
> bandwidth requests from different Memory Controller (MC) clients.
> MC clients use the ICC framework's icc_set_bw() API to dynamically
> request DRAM bandwidth (BW). Along the path, the request is routed
> from the MC to the EMC driver. The MC driver passes the request info,
> such as the client ID, type, and frequency, to the BPMP-FW, which
> sets the final DRAM frequency considering all existing requests.
>
> MC and EMC are the ICC providers. The nodes in the path for a request are:
> Client[1-n] -> MC -> EMC -> EMEM/DRAM
>
> The patch series also adds interconnect support in the following client
> drivers:
> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For this,
>    a per-cluster OPP table is added and used in the CPUFREQ driver to
>    request the minimum BW corresponding to the given CPU frequency
>    from the OPP table of the given cluster.
> 2) PCIe driver, to request the BW required for different modes.
No dependencies or ordering are mentioned, so I assume I am free to take
the memory controller bits.
Best regards,
Krzysztof
Thread overview: 23+ messages
2023-02-20 14:05 [Patch v2 0/9] Tegra234 Memory interconnect support Sumit Gupta
2023-02-20 14:05 ` [Patch v2 1/9] firmware: tegra: add function to get BPMP data Sumit Gupta
2023-02-20 14:05 ` [Patch v2 2/9] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
2023-03-19 15:19 ` Krzysztof Kozlowski
2023-02-20 14:05 ` [Patch v2 3/9] memory: tegra: add mc clients for Tegra234 Sumit Gupta
2023-03-19 15:19 ` Krzysztof Kozlowski
2023-02-20 14:05 ` [Patch v2 4/9] memory: tegra: add software mc clients in Tegra234 Sumit Gupta
2023-03-19 15:19 ` Krzysztof Kozlowski
2023-02-20 14:05 ` [Patch v2 5/9] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
2023-02-20 14:05 ` [Patch v2 6/9] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
2023-02-20 14:05 ` [Patch v2 7/9] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
2023-02-22 4:03 ` Viresh Kumar
2023-02-23 9:36 ` Sumit Gupta
2023-02-27 12:44 ` Thierry Reding
2023-02-28 1:18 ` Viresh Kumar
2023-02-20 14:05 ` [Patch v2 8/9] memory: tegra: make cpu cluster bw request a multiple of mc channels Sumit Gupta
2023-03-19 15:20 ` Krzysztof Kozlowski
2023-02-20 14:05 ` [Patch v2 9/9] PCI: tegra194: add interconnect support in Tegra234 Sumit Gupta
2023-03-06 15:05 ` Krzysztof Kozlowski [this message]
2023-03-06 15:07 ` [Patch v2 0/9] Tegra234 Memory interconnect support Krzysztof Kozlowski
2023-03-06 20:43 ` Sumit Gupta
2023-03-07 9:21 ` Krzysztof Kozlowski
2023-03-06 20:19 ` Sumit Gupta