From: Sibi Sankar <sibis@codeaurora.org>
To: Georgi Djakov <georgi.djakov@linaro.org>
Cc: linux-pm@vger.kernel.org, devicetree@vger.kernel.org,
bjorn.andersson@linaro.org, robh+dt@kernel.org, mka@chromium.org,
dianders@chromium.org, linux-kernel@vger.kernel.org,
linux-kernel-owner@vger.kernel.org
Subject: Re: [PATCH 4/6] arm64: dts: qcom: sdm845: Increase the number of interconnect cells
Date: Mon, 27 Jul 2020 16:28:35 +0530
Message-ID: <3c8c4aae7697d9d5a052b9dfd1ea0cf4@codeaurora.org>
In-Reply-To: <20200723130942.28491-5-georgi.djakov@linaro.org>
On 2020-07-23 18:39, Georgi Djakov wrote:
> Increase the number of interconnect-cells, as now we can include
> the tag information. The consumers can specify the path tag as an
> additional argument to the endpoints.
Tested-by: Sibi Sankar <sibis@codeaurora.org>
Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
https://patchwork.kernel.org/patch/11655409/
I'll replace the tag IDs with the macros once ^^ lands.
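
For reference, once those bindings land, the first hunk below could use the
macros instead of magic numbers, e.g. (macro names assumed from the pending
bindings patch above; the "3" here would correspond to
QCOM_ICC_TAG_ACTIVE_ONLY if those definitions go in as posted):

  interconnects = <&gladiator_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ACTIVE_ONLY
                   &mem_noc SLAVE_EBI1 QCOM_ICC_TAG_ACTIVE_ONLY>,
                  <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;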
>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
> arch/arm64/boot/dts/qcom/sdm845.dtsi | 44 ++++++++++++++--------------
> 1 file changed, 22 insertions(+), 22 deletions(-)
>
> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> index e506793407d8..94f5d27f2927 100644
> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> @@ -200,7 +200,7 @@ &LITTLE_CPU_SLEEP_1
> dynamic-power-coefficient = <100>;
> qcom,freq-domain = <&cpufreq_hw 0>;
> operating-points-v2 = <&cpu0_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_0>;
> @@ -225,7 +225,7 @@ &LITTLE_CPU_SLEEP_1
> dynamic-power-coefficient = <100>;
> qcom,freq-domain = <&cpufreq_hw 0>;
> operating-points-v2 = <&cpu0_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_100>;
> @@ -247,7 +247,7 @@ &LITTLE_CPU_SLEEP_1
> dynamic-power-coefficient = <100>;
> qcom,freq-domain = <&cpufreq_hw 0>;
> operating-points-v2 = <&cpu0_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_200>;
> @@ -269,7 +269,7 @@ &LITTLE_CPU_SLEEP_1
> dynamic-power-coefficient = <100>;
> qcom,freq-domain = <&cpufreq_hw 0>;
> operating-points-v2 = <&cpu0_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_300>;
> @@ -291,7 +291,7 @@ &BIG_CPU_SLEEP_1
> dynamic-power-coefficient = <396>;
> qcom,freq-domain = <&cpufreq_hw 1>;
> operating-points-v2 = <&cpu4_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_400>;
> @@ -313,7 +313,7 @@ &BIG_CPU_SLEEP_1
> dynamic-power-coefficient = <396>;
> qcom,freq-domain = <&cpufreq_hw 1>;
> operating-points-v2 = <&cpu4_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_500>;
> @@ -335,7 +335,7 @@ &BIG_CPU_SLEEP_1
> dynamic-power-coefficient = <396>;
> qcom,freq-domain = <&cpufreq_hw 1>;
> operating-points-v2 = <&cpu4_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_600>;
> @@ -357,7 +357,7 @@ &BIG_CPU_SLEEP_1
> dynamic-power-coefficient = <396>;
> qcom,freq-domain = <&cpufreq_hw 1>;
> operating-points-v2 = <&cpu4_opp_table>;
> - interconnects = <&gladiator_noc MASTER_APPSS_PROC &mem_noc SLAVE_EBI1>,
> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
> <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
> #cooling-cells = <2>;
> next-level-cache = <&L2_700>;
> @@ -2011,49 +2011,49 @@ pcie1_lane: lanes@1c06200 {
> mem_noc: interconnect@1380000 {
> compatible = "qcom,sdm845-mem-noc";
> reg = <0 0x01380000 0 0x27200>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
>
> dc_noc: interconnect@14e0000 {
> compatible = "qcom,sdm845-dc-noc";
> reg = <0 0x014e0000 0 0x400>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
>
> config_noc: interconnect@1500000 {
> compatible = "qcom,sdm845-config-noc";
> reg = <0 0x01500000 0 0x5080>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
>
> system_noc: interconnect@1620000 {
> compatible = "qcom,sdm845-system-noc";
> reg = <0 0x01620000 0 0x18080>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
>
> aggre1_noc: interconnect@16e0000 {
> compatible = "qcom,sdm845-aggre1-noc";
> reg = <0 0x016e0000 0 0x15080>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
>
> aggre2_noc: interconnect@1700000 {
> compatible = "qcom,sdm845-aggre2-noc";
> reg = <0 0x01700000 0 0x1f300>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
>
> mmss_noc: interconnect@1740000 {
> compatible = "qcom,sdm845-mmss-noc";
> reg = <0 0x01740000 0 0x1c100>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
>
> @@ -2156,8 +2156,8 @@ ipa: ipa@1e40000 {
> clocks = <&rpmhcc RPMH_IPA_CLK>;
> clock-names = "core";
>
> - interconnects = <&aggre2_noc MASTER_IPA &mem_noc SLAVE_EBI1>,
> - <&aggre2_noc MASTER_IPA &system_noc SLAVE_IMEM>,
> + interconnects = <&aggre2_noc MASTER_IPA 0 &mem_noc SLAVE_EBI1 0>,
> + <&aggre2_noc MASTER_IPA 0 &system_noc SLAVE_IMEM 0>,
> <&gladiator_noc MASTER_APPSS_PROC &config_noc SLAVE_IPA_CFG>;
> interconnect-names = "memory",
> "imem",
> @@ -3561,8 +3561,8 @@ usb_1: usb@a6f8800 {
>
> resets = <&gcc GCC_USB30_PRIM_BCR>;
>
> - interconnects = <&aggre2_noc MASTER_USB3_0 &mem_noc SLAVE_EBI1>,
> - <&gladiator_noc MASTER_APPSS_PROC &config_noc SLAVE_USB3_0>;
> + interconnects = <&aggre2_noc MASTER_USB3_0 0 &mem_noc SLAVE_EBI1 0>,
> + <&gladiator_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_0 0>;
> interconnect-names = "usb-ddr", "apps-usb";
>
> usb_1_dwc3: dwc3@a600000 {
> @@ -3609,8 +3609,8 @@ usb_2: usb@a8f8800 {
>
> resets = <&gcc GCC_USB30_SEC_BCR>;
>
> - interconnects = <&aggre2_noc MASTER_USB3_1 &mem_noc SLAVE_EBI1>,
> - <&gladiator_noc MASTER_APPSS_PROC &config_noc SLAVE_USB3_1>;
> + interconnects = <&aggre2_noc MASTER_USB3_1 0 &mem_noc SLAVE_EBI1 0>,
> + <&gladiator_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_1 0>;
> interconnect-names = "usb-ddr", "apps-usb";
>
> usb_2_dwc3: dwc3@a800000 {
> @@ -4306,7 +4306,7 @@ lpasscc: clock-controller@17014000 {
> gladiator_noc: interconnect@17900000 {
> compatible = "qcom,sdm845-gladiator-noc";
> reg = <0 0x17900000 0 0xd080>;
> - #interconnect-cells = <1>;
> + #interconnect-cells = <2>;
> qcom,bcm-voters = <&apps_bcm_voter>;
> };
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.
Thread overview: 17+ messages
2020-07-23 13:09 [PATCH 0/6] interconnect: Introduce xlate_extended() Georgi Djakov
2020-07-23 13:09 ` [PATCH 1/6] interconnect: Introduce xlate_extended() callback Georgi Djakov
2020-07-27 10:51 ` Sibi Sankar
2020-07-27 20:10 ` Matthias Kaehlcke
2020-07-23 13:09 ` [PATCH 2/6] interconnect: qcom: Implement xlate_extended() to parse tags Georgi Djakov
2020-07-27 14:47 ` Sibi Sankar
2020-07-27 20:19 ` Matthias Kaehlcke
2020-07-23 13:09 ` [PATCH 3/6] interconnect: qcom: sdm845: Replace xlate with xlate_extended Georgi Djakov
2020-07-27 10:54 ` Sibi Sankar
2020-07-27 20:51 ` Matthias Kaehlcke
2020-07-23 13:09 ` [PATCH 4/6] arm64: dts: qcom: sdm845: Increase the number of interconnect cells Georgi Djakov
2020-07-27 10:58 ` Sibi Sankar [this message]
2020-07-27 20:55 ` Matthias Kaehlcke
2020-07-23 13:09 ` [PATCH 5/6] interconnect: qcom: sc7180: Replace xlate with xlate_extended Georgi Djakov
2020-07-27 20:58 ` Matthias Kaehlcke
2020-07-23 13:09 ` [PATCH 6/6] arm64: dts: qcom: sc7180: Increase the number of interconnect cells Georgi Djakov
2020-07-27 21:06 ` Matthias Kaehlcke