From: Ulf Hansson <ulf.hansson@linaro.org>
To: Maulik Shah <quic_mkshah@quicinc.com>
Cc: bjorn.andersson@linaro.org, linux-arm-msm@vger.kernel.org,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
	rafael@kernel.org, daniel.lezcano@linaro.org,
	quic_lsrao@quicinc.com, quic_rjendra@quicinc.com,
	devicetree@vger.kernel.org
Subject: Re: [PATCH 02/10] arm64: dts: qcom: sm8250: Add cpuidle states
Date: Fri, 14 Jan 2022 13:30:20 +0100	[thread overview]
Message-ID: <CAPDyKFr3_RJXE7_6YQzgusRNhsZ1F8CBG5iV27ZTspGacwo6uA@mail.gmail.com> (raw)
In-Reply-To: <1641749107-31979-3-git-send-email-quic_mkshah@quicinc.com>

On Sun, 9 Jan 2022 at 18:25, Maulik Shah <quic_mkshah@quicinc.com> wrote:
>
> This change adds the cpuidle and cluster idle states and attaches the CPU devices to their PSCI power domains.
>
> Cc: devicetree@vger.kernel.org
> Signed-off-by: Maulik Shah <quic_mkshah@quicinc.com>

Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>

Kind regards
Uffe


> ---
>  arch/arm64/boot/dts/qcom/sm8250.dtsi | 105 +++++++++++++++++++++++++++++++++++
>  1 file changed, 105 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
> index 5617a46..077d0ab 100644
> --- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
> @@ -98,6 +98,8 @@
>                         capacity-dmips-mhz = <448>;
>                         dynamic-power-coefficient = <205>;
>                         next-level-cache = <&L2_0>;
> +                       power-domains = <&CPU_PD0>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 0>;
>                         operating-points-v2 = <&cpu0_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -120,6 +122,8 @@
>                         capacity-dmips-mhz = <448>;
>                         dynamic-power-coefficient = <205>;
>                         next-level-cache = <&L2_100>;
> +                       power-domains = <&CPU_PD1>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 0>;
>                         operating-points-v2 = <&cpu0_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -139,6 +143,8 @@
>                         capacity-dmips-mhz = <448>;
>                         dynamic-power-coefficient = <205>;
>                         next-level-cache = <&L2_200>;
> +                       power-domains = <&CPU_PD2>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 0>;
>                         operating-points-v2 = <&cpu0_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -158,6 +164,8 @@
>                         capacity-dmips-mhz = <448>;
>                         dynamic-power-coefficient = <205>;
>                         next-level-cache = <&L2_300>;
> +                       power-domains = <&CPU_PD3>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 0>;
>                         operating-points-v2 = <&cpu0_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -177,6 +185,8 @@
>                         capacity-dmips-mhz = <1024>;
>                         dynamic-power-coefficient = <379>;
>                         next-level-cache = <&L2_400>;
> +                       power-domains = <&CPU_PD4>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 1>;
>                         operating-points-v2 = <&cpu4_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -196,6 +206,8 @@
>                         capacity-dmips-mhz = <1024>;
>                         dynamic-power-coefficient = <379>;
>                         next-level-cache = <&L2_500>;
> +                       power-domains = <&CPU_PD5>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 1>;
>                         operating-points-v2 = <&cpu4_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -216,6 +228,8 @@
>                         capacity-dmips-mhz = <1024>;
>                         dynamic-power-coefficient = <379>;
>                         next-level-cache = <&L2_600>;
> +                       power-domains = <&CPU_PD6>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 1>;
>                         operating-points-v2 = <&cpu4_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -235,6 +249,8 @@
>                         capacity-dmips-mhz = <1024>;
>                         dynamic-power-coefficient = <444>;
>                         next-level-cache = <&L2_700>;
> +                       power-domains = <&CPU_PD7>;
> +                       power-domain-names = "psci";
>                         qcom,freq-domain = <&cpufreq_hw 2>;
>                         operating-points-v2 = <&cpu7_opp_table>;
>                         interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
> @@ -281,6 +297,42 @@
>                                 };
>                         };
>                 };
> +
> +               idle-states {
> +                       entry-method = "psci";
> +
> +                       LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
> +                               compatible = "arm,idle-state";
> +                               idle-state-name = "silver-rail-power-collapse";
> +                               arm,psci-suspend-param = <0x40000004>;
> +                               entry-latency-us = <360>;
> +                               exit-latency-us = <531>;
> +                               min-residency-us = <3934>;
> +                               local-timer-stop;
> +                       };
> +
> +                       BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
> +                               compatible = "arm,idle-state";
> +                               idle-state-name = "gold-rail-power-collapse";
> +                               arm,psci-suspend-param = <0x40000004>;
> +                               entry-latency-us = <702>;
> +                               exit-latency-us = <1061>;
> +                               min-residency-us = <4488>;
> +                               local-timer-stop;
> +                       };
> +               };
> +
> +               domain-idle-states {
> +                       CLUSTER_SLEEP_0: cluster-sleep-0 {
> +                               compatible = "domain-idle-state";
> +                               idle-state-name = "cluster-llcc-off";
> +                               arm,psci-suspend-param = <0x4100c244>;
> +                               entry-latency-us = <3264>;
> +                               exit-latency-us = <6562>;
> +                               min-residency-us = <9987>;
> +                               local-timer-stop;
> +                       };
> +               };
>         };
>
>         cpu0_opp_table: cpu0_opp_table {
> @@ -594,6 +646,59 @@
>         psci {
>                 compatible = "arm,psci-1.0";
>                 method = "smc";
> +
> +               CPU_PD0: cpu0 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +               };
> +
> +               CPU_PD1: cpu1 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +               };
> +
> +               CPU_PD2: cpu2 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +               };
> +
> +               CPU_PD3: cpu3 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +               };
> +
> +               CPU_PD4: cpu4 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +               };
> +
> +               CPU_PD5: cpu5 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +               };
> +
> +               CPU_PD6: cpu6 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +               };
> +
> +               CPU_PD7: cpu7 {
> +                       #power-domain-cells = <0>;
> +                       power-domains = <&CLUSTER_PD>;
> +                       domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +               };
> +
> +               CLUSTER_PD: cpu-cluster0 {
> +                       #power-domain-cells = <0>;
> +                       domain-idle-states = <&CLUSTER_SLEEP_0>;
> +               };
>         };
>
>         reserved-memory {
> --
> 2.7.4
>
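
For quick reference, the hierarchy described by the patch reduces to the
condensed sketch below (illustration only; every label is taken verbatim
from the patch above, with the latency numbers and the remaining CPU and
domain nodes elided):

    cpus {
            CPU0: cpu@0 {
                    /* Each CPU consumes its own PSCI power domain;
                     * CPU1..CPU7 reference CPU_PD1..CPU_PD7 the same way. */
                    power-domains = <&CPU_PD0>;
                    power-domain-names = "psci";
            };
    };

    psci {
            compatible = "arm,psci-1.0";
            method = "smc";

            /* Per-CPU domain: the little cores (cpu0-cpu3) use
             * LITTLE_CPU_SLEEP_0 and the big/prime cores (cpu4-cpu7)
             * use BIG_CPU_SLEEP_0 as their deepest per-core state. */
            CPU_PD0: cpu0 {
                    #power-domain-cells = <0>;
                    power-domains = <&CLUSTER_PD>;
                    domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
            };

            /* Parent domain shared by all eight per-CPU domains; its
             * idle state (CLUSTER_SLEEP_0, "cluster-llcc-off") powers
             * down the cluster and LLCC. */
            CLUSTER_PD: cpu-cluster0 {
                    #power-domain-cells = <0>;
                    domain-idle-states = <&CLUSTER_SLEEP_0>;
            };
    };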

Thread overview: 25+ messages
2022-01-09 17:24 [PATCH 00/10] Add APSS RSC to Cluster power domain Maulik Shah
2022-01-09 17:24 ` [PATCH 01/10] arm64: dts: qcom: sm8150: Correct TCS configuration for apps rsc Maulik Shah
2022-01-09 17:24 ` [PATCH 02/10] arm64: dts: qcom: sm8250: Add cpuidle states Maulik Shah
2022-01-14 12:30   ` Ulf Hansson [this message]
2022-01-09 17:25 ` [PATCH 03/10] arm64: dts: qcom: sm8350: Correct TCS configuration for apps rsc Maulik Shah
2022-01-09 17:25 ` [PATCH 04/10] arm64: dts: qcom: sm8450: Update cpuidle states parameters Maulik Shah
2022-01-14 12:30   ` Ulf Hansson
2022-01-17  8:12     ` Maulik Shah
2022-01-09 17:25 ` [PATCH 05/10] dt-bindings: soc: qcom: Update devicetree binding document for rpmh-rsc Maulik Shah
2022-01-14 12:31   ` Ulf Hansson
2022-01-21 23:06   ` Rob Herring
2022-01-09 17:25 ` [PATCH 06/10] soc: qcom: rpmh-rsc: Attach RSC to cluster PM domain Maulik Shah
2022-01-14 12:30   ` Ulf Hansson
2022-01-09 17:25 ` [PATCH 07/10] arm64: dts: qcom: Add power-domains property for apps_rsc Maulik Shah
2022-01-14 12:33   ` Ulf Hansson
2022-01-09 17:25 ` [PATCH 08/10] PM: domains: Store the closest hrtimer event of the domain CPUs Maulik Shah
2022-01-14 13:38   ` Ulf Hansson
2022-01-25 18:49   ` Ulf Hansson
2022-01-09 17:25 ` [PATCH 09/10] soc: qcom: rpmh-rsc: Save base address of drv Maulik Shah
2022-01-14 12:35   ` Ulf Hansson
2022-01-09 17:25 ` [PATCH 10/10] soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup Maulik Shah
2022-01-14 13:34   ` Ulf Hansson
2022-02-01  5:19 ` (subset) [PATCH 00/10] Add APSS RSC to Cluster power domain Bjorn Andersson
2022-03-29  9:55   ` Amit Pundir
2022-04-25 16:56     ` Amit Pundir
