From: Sumit Gupta <sumitg@nvidia.com>
To: <treding@nvidia.com>, <krzysztof.kozlowski@linaro.org>,
<dmitry.osipenko@collabora.com>, <viresh.kumar@linaro.org>,
<rafael@kernel.org>, <jonathanh@nvidia.com>, <robh+dt@kernel.org>,
<lpieralisi@kernel.org>
Cc: <linux-kernel@vger.kernel.org>, <linux-tegra@vger.kernel.org>,
<linux-pm@vger.kernel.org>, <devicetree@vger.kernel.org>,
<linux-pci@vger.kernel.org>, <mmaddireddy@nvidia.com>,
<kw@linux.com>, <bhelgaas@google.com>, <vidyas@nvidia.com>,
<sanjayc@nvidia.com>, <ksitaraman@nvidia.com>, <ishah@nvidia.com>,
<bbasu@nvidia.com>, <sumitg@nvidia.com>
Subject: [Patch v3 00/11] Tegra234 Memory interconnect support
Date: Mon, 20 Mar 2023 23:54:30 +0530
Message-ID: <20230320182441.11904-1-sumitg@nvidia.com>
Hi Krzysztof,
Thank you for the ACK on the memory patches [2-5 & 8].
I have rebased the patch series on the latest linux-next and added two more
'memory' patches in v3. Please review and ACK if they look fine.
[Patch v3 10/11] memory: tegra: handle no BWMGR MRQ support in BPMP
[Patch v3 11/11] memory: tegra186-emc: fix interconnect registration race
Hi All,
Requesting ACKs on the below remaining patches; please consider them for
merging in "6.4".
- Thierry:
"Memory Interconnect base support" patches are dependent on the bpmp patch.
[Patch v3 01/11] firmware: tegra: add function to get BPMP data
- Rafael & Viresh: For the CPUFREQ MC Client patches.
[Patch v3 06/11] arm64: tegra: Add cpu OPP tables and interconnects property
[Patch v3 07/11] cpufreq: tegra194: add OPP support and set bandwidth
- Lorenzo, Bjorn & Krzysztof Wilczyński: For the PCIe MC client patch.
[Patch v3 09/11] PCI: tegra194: add interconnect support in Tegra234
Thank you,
Sumit Gupta
============
This patch series adds memory interconnect support for Tegra234 SoC.
It is used to dynamically scale DRAM Frequency as per the bandwidth
requests from different Memory Controller (MC) clients.
MC clients use the ICC framework's icc_set_bw() API to dynamically request
DRAM bandwidth (BW). Depending on the path, a request is routed from the
MC to the EMC driver. The MC driver passes the request info (client ID,
type, and requested bandwidth) to the BPMP-FW, which sets the final DRAM
frequency considering all existing requests.
MC and EMC are the ICC providers. Nodes in path for a request will be:
Client[1-n] -> MC -> EMC -> EMEM/DRAM
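To illustrate the flow above, a minimal sketch of how an MC client driver
might request DRAM bandwidth through the ICC framework (the "dram" path
name and the helper itself are hypothetical, not from this series):

static int client_request_dram_bw(struct device *dev, u32 peak_kbps)
{
	struct icc_path *path;
	int err;

	/* "dram" is a hypothetical interconnect name from the client's
	 * DT node; the path resolves to client -> MC -> EMC -> DRAM */
	path = devm_of_icc_get(dev, "dram");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* avg_bw and peak_bw in kBps; the MC provider forwards the
	 * aggregated request towards the EMC and BPMP-FW */
	err = icc_set_bw(path, peak_kbps, peak_kbps);
	if (err)
		dev_err(dev, "failed to set DRAM bandwidth: %d\n", err);

	return err;
}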
The patch series also adds interconnect support in the below client drivers:
1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For this,
per-cluster OPP tables are added; the CPUFREQ driver uses them to request
the minimum bandwidth corresponding to the given CPU frequency from the
OPP table of the given cluster.
2) PCIe driver, to request the bandwidth required for different operating
modes.
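For case 1), the CPUFREQ side can be sketched roughly as below; the helper
name is hypothetical, but dev_pm_opp_find_freq_ceil() and
dev_pm_opp_set_opp() are the OPP core APIs that apply an OPP entry's
bandwidth (interconnect) requirement along with the frequency:

static int set_cpu_freq_and_bw(struct device *cpu_dev, unsigned long freq_hz)
{
	struct dev_pm_opp *opp;
	int err;

	/* find the nearest OPP at or above the requested frequency */
	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	/* dev_pm_opp_set_opp() also sets the bandwidth listed in the
	 * OPP entry (e.g. via opp-peak-kBps) on the interconnect path */
	err = dev_pm_opp_set_opp(cpu_dev, opp);
	dev_pm_opp_put(opp);

	return err;
}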
---
v2[2] -> v3:
- in 'patch 7', set 'icc_dram_bw_scaling' to false if the set_opp call fails,
to avoid flooding the UART with 'Failed to set bw' messages.
- added 'patch 10' to handle an old BPMP-FW that does not support the BWMGR MRQ.
- added 'patch 11' to fix the interconnect registration race in tegra186-emc.
ref patch link in linux next:
[https://lore.kernel.org/all/20230306075651.2449-21-johan+linaro@kernel.org/]
v1[1] -> v2:
- moved BW setting to tegra234_mc_icc_set() from EMC driver.
- moved sw clients to the 'tegra_mc_clients' table.
- point 'node->data' to the entry within 'tegra_mc_clients'.
- removed 'struct tegra_icc_node' and get client info using 'node->data'.
- changed error handling in and around tegra_emc_interconnect_init().
- moved 'tegra-icc.h' from 'include/soc/tegra' to 'include/linux'.
- added interconnect support to PCIE driver in 'Patch 9'.
- merged 'Patch 9 & 10' from [1] to get num_channels and use it.
- merged 'Patch 2 & 3' from [1] to add ISO and NISO clients.
- added 'Acked-by' of Krzysztof from 'Patch 05/10' of [1].
- Removed 'Patch 7' from [1] as that is merged now.
Sumit Gupta (11):
firmware: tegra: add function to get BPMP data
memory: tegra: add interconnect support for DRAM scaling in Tegra234
memory: tegra: add mc clients for Tegra234
memory: tegra: add software mc clients in Tegra234
dt-bindings: tegra: add icc ids for dummy MC clients
arm64: tegra: Add cpu OPP tables and interconnects property
cpufreq: tegra194: add OPP support and set bandwidth
memory: tegra: make cpu cluster bw request a multiple of mc channels
PCI: tegra194: add interconnect support in Tegra234
memory: tegra: handle no BWMGR MRQ support in BPMP
memory: tegra186-emc: fix interconnect registration race
arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 ++++++++++
drivers/cpufreq/tegra194-cpufreq.c | 156 +++++-
drivers/firmware/tegra/bpmp.c | 38 ++
drivers/memory/tegra/mc.c | 24 +
drivers/memory/tegra/mc.h | 1 +
drivers/memory/tegra/tegra186-emc.c | 118 ++++
drivers/memory/tegra/tegra234.c | 599 ++++++++++++++++++++-
drivers/pci/controller/dwc/pcie-tegra194.c | 40 +-
include/dt-bindings/memory/tegra234-mc.h | 5 +
include/linux/tegra-icc.h | 65 +++
include/soc/tegra/bpmp.h | 5 +
include/soc/tegra/mc.h | 8 +-
12 files changed, 1312 insertions(+), 23 deletions(-)
create mode 100644 include/linux/tegra-icc.h
[1] https://lore.kernel.org/lkml/20221220160240.27494-1-sumitg@nvidia.com/T/
[2] https://lore.kernel.org/linux-tegra/20230220140559.28289-1-sumitg@nvidia.com/
--
2.17.1