linux-kernel.vger.kernel.org archive mirror
* [PATCH v8 0/3] Invoke rpmh_flush for non OSI targets
@ 2020-02-27  8:56 Maulik Shah
  2020-02-27  8:56 ` [PATCH v8 1/3] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Maulik Shah @ 2020-02-27  8:56 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah

Changes in v8:
- Address Stephen's comments on changes 2 and 3
- Add Reviewed-by from Stephen on change 1

Changes in v7:
- Address Srinivas's comments to update commit text
- Add Reviewed-by from Srinivas

Changes in v6:
- Drop changes 1 & 2 from v5 as they have already landed in the maintainer tree
- Drop changes 3 & 4 from v5 as there is currently no user for a power domain in RSC
- Rename the subject accordingly since the power domain changes are dropped
- Rebase other changes on top of next-20200221

Changes in v5:
- Add Rob's Acked-by on the dt-bindings change
- Drop firmware psci change
- Update cpuidle stats in dtsi to follow PC mode
- Include change from [4] to update dirty flag when data is updated
- Add change to invoke rpmh_flush when caches are dirty

Changes in v4:
- Add change to allow hierarchical topology in PC mode
- Drop hierarchical domain idle states converter from v3
- Merge sc7180 dtsi change to add low power modes

Changes in v3:
- Address Rob's comment on dt property value
- Address Stephen's comments on rpmh-rsc driver change
- Include sc7180 cpuidle low power mode changes from [1]
- Include hierarchical domain idle states converter change from [2]

Changes in v2:
- Add Stephen's Reviewed-by to the first three patches
- Address Stephen's comments on the fourth patch
- Include changes to connect rpmh domain to cpuidle and genpds

The Resource State Coordinator (RSC) is responsible for powering off or
lowering the CPU subsystem's requirements on associated hardware such as
buses, clocks, and regulators when all CPUs and the cluster are powered down.

The RSC power domain uses the last-man activities provided by the genpd
framework, based on Ulf Hansson's patch series [3], when the cluster of CPUs
enters its deepest idle state. As part of domain power-off, RSC can lower
resource state requirements by flushing the cached sleep and wake state votes
for the various resources.

[1] https://patchwork.kernel.org/patch/11218965
[2] https://patchwork.kernel.org/patch/10941671
[3] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=222355
[4] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=236503

Maulik Shah (3):
  arm64: dts: qcom: sc7180: Add cpuidle low power states
  soc: qcom: rpmh: Update dirty flag only when data changes
  soc: qcom: rpmh: Invoke rpmh_flush for dirty caches

 arch/arm64/boot/dts/qcom/sc7180.dtsi | 78 ++++++++++++++++++++++++++++++++++++
 drivers/soc/qcom/rpmh.c              | 27 ++++++++++---
 2 files changed, 100 insertions(+), 5 deletions(-)

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v8 1/3] arm64: dts: qcom: sc7180: Add cpuidle low power states
  2020-02-27  8:56 [PATCH v8 0/3] Invoke rpmh_flush for non OSI targets Maulik Shah
@ 2020-02-27  8:56 ` Maulik Shah
  2020-02-27  8:56 ` [PATCH v8 2/3] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
  2020-02-27  8:56 ` [PATCH v8 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
  2 siblings, 0 replies; 6+ messages in thread
From: Maulik Shah @ 2020-02-27  8:56 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah, devicetree

Add device tree bindings for cpuidle low power states for the CPU devices.

Cc: devicetree@vger.kernel.org
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
---
 arch/arm64/boot/dts/qcom/sc7180.dtsi | 78 ++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
index cc5a94f..2941a7e 100644
--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
@@ -86,6 +86,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x0>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_0>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 0>;
@@ -103,6 +106,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x100>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_100>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 0>;
@@ -117,6 +123,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x200>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_200>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 0>;
@@ -131,6 +140,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x300>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_300>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 0>;
@@ -145,6 +157,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x400>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_400>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 0>;
@@ -159,6 +174,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x500>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_500>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 0>;
@@ -173,6 +191,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x600>;
 			enable-method = "psci";
+			cpu-idle-states = <&BIG_CPU_SLEEP_0
+					   &BIG_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_600>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 1>;
@@ -187,6 +208,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x700>;
 			enable-method = "psci";
+			cpu-idle-states = <&BIG_CPU_SLEEP_0
+					   &BIG_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			next-level-cache = <&L2_700>;
 			#cooling-cells = <2>;
 			qcom,freq-domain = <&cpufreq_hw 1>;
@@ -195,6 +219,60 @@
 				next-level-cache = <&L3_0>;
 			};
 		};
+
+		idle-states {
+			entry-method = "psci";
+
+			LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "little-power-down";
+				arm,psci-suspend-param = <0x40000003>;
+				entry-latency-us = <549>;
+				exit-latency-us = <901>;
+				min-residency-us = <1774>;
+				local-timer-stop;
+			};
+
+			LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
+				compatible = "arm,idle-state";
+				idle-state-name = "little-rail-power-down";
+				arm,psci-suspend-param = <0x40000004>;
+				entry-latency-us = <702>;
+				exit-latency-us = <915>;
+				min-residency-us = <4001>;
+				local-timer-stop;
+			};
+
+			BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "big-power-down";
+				arm,psci-suspend-param = <0x40000003>;
+				entry-latency-us = <523>;
+				exit-latency-us = <1244>;
+				min-residency-us = <2207>;
+				local-timer-stop;
+			};
+
+			BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
+				compatible = "arm,idle-state";
+				idle-state-name = "big-rail-power-down";
+				arm,psci-suspend-param = <0x40000004>;
+				entry-latency-us = <526>;
+				exit-latency-us = <1854>;
+				min-residency-us = <5555>;
+				local-timer-stop;
+			};
+
+			CLUSTER_SLEEP_0: cluster-sleep-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "cluster-power-down";
+				arm,psci-suspend-param = <0x40003444>;
+				entry-latency-us = <3263>;
+				exit-latency-us = <6562>;
+				min-residency-us = <9926>;
+				local-timer-stop;
+			};
+		};
 	};
 
 	memory@80000000 {
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v8 2/3] soc: qcom: rpmh: Update dirty flag only when data changes
  2020-02-27  8:56 [PATCH v8 0/3] Invoke rpmh_flush for non OSI targets Maulik Shah
  2020-02-27  8:56 ` [PATCH v8 1/3] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
@ 2020-02-27  8:56 ` Maulik Shah
  2020-02-27 18:18   ` Evan Green
  2020-02-27  8:56 ` [PATCH v8 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
  2 siblings, 1 reply; 6+ messages in thread
From: Maulik Shah @ 2020-02-27  8:56 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah

Currently the rpmh ctrlr dirty flag is set in all cases, regardless of
whether the data has actually changed or not. Update the dirty flag only
when the data changes to a new value.

Also move the dirty flag updates to happen within the cache_lock, and
remove the unnecessary INIT_LIST_HEAD() call and the default case from
the switch statement.

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
---
 drivers/soc/qcom/rpmh.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index eb0ded0..3f5d9eb 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -133,26 +133,30 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
 
 	req->addr = cmd->addr;
 	req->sleep_val = req->wake_val = UINT_MAX;
-	INIT_LIST_HEAD(&req->list);
 	list_add_tail(&req->list, &ctrlr->cache);
 
 existing:
 	switch (state) {
 	case RPMH_ACTIVE_ONLY_STATE:
-		if (req->sleep_val != UINT_MAX)
+		if (req->sleep_val != UINT_MAX) {
 			req->wake_val = cmd->data;
+			ctrlr->dirty = true;
+		}
 		break;
 	case RPMH_WAKE_ONLY_STATE:
-		req->wake_val = cmd->data;
+		if (req->wake_val != cmd->data) {
+			req->wake_val = cmd->data;
+			ctrlr->dirty = true;
+		}
 		break;
 	case RPMH_SLEEP_STATE:
-		req->sleep_val = cmd->data;
-		break;
-	default:
+		if (req->sleep_val != cmd->data) {
+			req->sleep_val = cmd->data;
+			ctrlr->dirty = true;
+		}
 		break;
 	}
 
-	ctrlr->dirty = true;
 unlock:
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
@@ -287,6 +291,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
 
 	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_add_tail(&req->list, &ctrlr->batch_cache);
+	ctrlr->dirty = true;
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }
 
@@ -323,6 +328,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
 	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
 		kfree(req);
 	INIT_LIST_HEAD(&ctrlr->batch_cache);
+	ctrlr->dirty = true;
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }
 
@@ -456,13 +462,9 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
 int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 {
 	struct cache_req *p;
+	unsigned long flags;
 	int ret;
 
-	if (!ctrlr->dirty) {
-		pr_debug("Skipping flush, TCS has latest data.\n");
-		return 0;
-	}
-
 	/* First flush the cached batch requests */
 	ret = flush_batch(ctrlr);
 	if (ret)
@@ -488,7 +490,9 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 			return ret;
 	}
 
+	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	ctrlr->dirty = false;
+	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
 	return 0;
 }
@@ -507,7 +511,6 @@ int rpmh_invalidate(const struct device *dev)
 	int ret;
 
 	invalidate_batch(ctrlr);
-	ctrlr->dirty = true;
 
 	do {
 		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v8 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-02-27  8:56 [PATCH v8 0/3] Invoke rpmh_flush for non OSI targets Maulik Shah
  2020-02-27  8:56 ` [PATCH v8 1/3] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
  2020-02-27  8:56 ` [PATCH v8 2/3] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
@ 2020-02-27  8:56 ` Maulik Shah
  2 siblings, 0 replies; 6+ messages in thread
From: Maulik Shah @ 2020-02-27  8:56 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah

Invoke rpmh_flush() when the data in the cache is dirty.

This is done only if OSI is not supported in PSCI. If OSI is supported,
rpmh_flush() can instead be invoked when the last CPU going to power
collapse enters the deepest low power mode.

Also remove "COMPILE_TEST" from the dependencies of the Kconfig option
QCOM_RPMH so the driver is only compiled for arm64, which supports the
psci_has_osi_support() API.

Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
---
 drivers/soc/qcom/Kconfig | 2 +-
 drivers/soc/qcom/rpmh.c  | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index d0a73e7..2e581bc 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -105,7 +105,7 @@ config QCOM_RMTFS_MEM
 
 config QCOM_RPMH
 	bool "Qualcomm RPM-Hardened (RPMH) Communication"
-	depends on ARCH_QCOM && ARM64 || COMPILE_TEST
+	depends on ARCH_QCOM && ARM64
 	help
 	  Support for communication with the hardened-RPM blocks in
 	  Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 3f5d9eb..326c58a 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -12,6 +12,7 @@
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
+#include <linux/psci.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/types.h>
@@ -160,6 +161,9 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
 unlock:
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
+	if (ctrlr->dirty && !psci_has_osi_support())
+		return rpmh_flush(ctrlr) ? ERR_PTR(-EINVAL) : req;
+
 	return req;
 }
 
@@ -388,6 +392,8 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
 
 	if (state != RPMH_ACTIVE_ONLY_STATE) {
 		cache_batch(ctrlr, req);
+		if (!psci_has_osi_support())
+			return rpmh_flush(ctrlr);
 		return 0;
 	}
 
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v8 2/3] soc: qcom: rpmh: Update dirty flag only when data changes
  2020-02-27  8:56 ` [PATCH v8 2/3] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
@ 2020-02-27 18:18   ` Evan Green
  2020-02-28 11:12     ` Maulik Shah
  0 siblings, 1 reply; 6+ messages in thread
From: Evan Green @ 2020-02-27 18:18 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Bjorn Andersson, LKML,
	linux-arm-msm, Andy Gross, Doug Anderson, Rajendra Nayak,
	Lina Iyer, lsrao

On Thu, Feb 27, 2020 at 12:57 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> Currently the rpmh ctrlr dirty flag is set in all cases, regardless of
> whether the data has actually changed or not. Update the dirty flag only
> when the data changes to a new value.
>
> Also move the dirty flag updates to happen within the cache_lock, and
> remove the unnecessary INIT_LIST_HEAD() call and the default case from
> the switch statement.
>
> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
> ---
>  drivers/soc/qcom/rpmh.c | 29 ++++++++++++++++-------------
>  1 file changed, 16 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index eb0ded0..3f5d9eb 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -133,26 +133,30 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>
>         req->addr = cmd->addr;
>         req->sleep_val = req->wake_val = UINT_MAX;
> -       INIT_LIST_HEAD(&req->list);

Thanks!

>         list_add_tail(&req->list, &ctrlr->cache);
>
>  existing:
>         switch (state) {
>         case RPMH_ACTIVE_ONLY_STATE:
> -               if (req->sleep_val != UINT_MAX)
> +               if (req->sleep_val != UINT_MAX) {
>                         req->wake_val = cmd->data;
> +                       ctrlr->dirty = true;
> +               }
>                 break;
>         case RPMH_WAKE_ONLY_STATE:
> -               req->wake_val = cmd->data;
> +               if (req->wake_val != cmd->data) {
> +                       req->wake_val = cmd->data;
> +                       ctrlr->dirty = true;
> +               }
>                 break;
>         case RPMH_SLEEP_STATE:
> -               req->sleep_val = cmd->data;
> -               break;
> -       default:
> +               if (req->sleep_val != cmd->data) {
> +                       req->sleep_val = cmd->data;
> +                       ctrlr->dirty = true;
> +               }
>                 break;
>         }
>
> -       ctrlr->dirty = true;
>  unlock:
>         spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>
> @@ -287,6 +291,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
>
>         spin_lock_irqsave(&ctrlr->cache_lock, flags);
>         list_add_tail(&req->list, &ctrlr->batch_cache);
> +       ctrlr->dirty = true;
>         spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>  }
>
> @@ -323,6 +328,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>         list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>                 kfree(req);
>         INIT_LIST_HEAD(&ctrlr->batch_cache);
> +       ctrlr->dirty = true;
>         spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>  }
>
> @@ -456,13 +462,9 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>  {
>         struct cache_req *p;
> +       unsigned long flags;
>         int ret;
>
> -       if (!ctrlr->dirty) {
> -               pr_debug("Skipping flush, TCS has latest data.\n");
> -               return 0;
> -       }
> -
>         /* First flush the cached batch requests */
>         ret = flush_batch(ctrlr);
>         if (ret)
> @@ -488,7 +490,9 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>                         return ret;
>         }
>
> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>         ctrlr->dirty = false;
> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

You're acquiring a lock around an operation that's already inherently
atomic, which is not right. If the comment earlier in this function is
still correct that "Nobody else should be calling this function other
than system PM, hence we can run without locks", then you can simply
remove this hunk and the part moving ->dirty = true into
invalidate_batch.

However, if rpmh_flush() can now be called in a scenario where
pre-emption is enabled or multiple cores are alive, then ctrlr->cache
is no longer adequately protected. You'd need to add a lock
acquire/release around the list iteration above, and fix up the
comment.
-Evan
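
A rough, untested sketch of that second option, holding cache_lock across
the whole of rpmh_flush() so the ctrlr->cache walk and the dirty flag update
are serialized against writers. The send loop below paraphrases the existing
code and assumes none of the callees sleep, since cache_lock is a spinlock
taken with interrupts disabled:

int rpmh_flush(struct rpmh_ctrlr *ctrlr)
{
	struct cache_req *p;
	unsigned long flags;
	int ret = 0;

	/* Serialize against rpmh_write()/rpmh_invalidate() updating the cache */
	spin_lock_irqsave(&ctrlr->cache_lock, flags);

	/* First flush the cached batch requests */
	ret = flush_batch(ctrlr);
	if (ret)
		goto unlock;

	/* Then program the sleep and wake votes for each cached request */
	list_for_each_entry(p, &ctrlr->cache, list) {
		ret = send_single(ctrlr, RPMH_SLEEP_STATE, p->addr,
				  p->sleep_val);
		if (ret)
			goto unlock;
		ret = send_single(ctrlr, RPMH_WAKE_ONLY_STATE, p->addr,
				  p->wake_val);
		if (ret)
			goto unlock;
	}

	ctrlr->dirty = false;

unlock:
	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
	return ret;
}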


* Re: [PATCH v8 2/3] soc: qcom: rpmh: Update dirty flag only when data changes
  2020-02-27 18:18   ` Evan Green
@ 2020-02-28 11:12     ` Maulik Shah
  0 siblings, 0 replies; 6+ messages in thread
From: Maulik Shah @ 2020-02-28 11:12 UTC (permalink / raw)
  To: Evan Green
  Cc: Stephen Boyd, Matthias Kaehlcke, Bjorn Andersson, LKML,
	linux-arm-msm, Andy Gross, Doug Anderson, Rajendra Nayak,
	Lina Iyer, lsrao


On 2/27/2020 11:48 PM, Evan Green wrote:
> On Thu, Feb 27, 2020 at 12:57 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>> Currently the rpmh ctrlr dirty flag is set in all cases, regardless of
>> whether the data has actually changed or not. Update the dirty flag only
>> when the data changes to a new value.
>>
>> Also move the dirty flag updates to happen within the cache_lock, and
>> remove the unnecessary INIT_LIST_HEAD() call and the default case from
>> the switch statement.
>>
>> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
>> ---
>>   drivers/soc/qcom/rpmh.c | 29 ++++++++++++++++-------------
>>   1 file changed, 16 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index eb0ded0..3f5d9eb 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -133,26 +133,30 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>>
>>          req->addr = cmd->addr;
>>          req->sleep_val = req->wake_val = UINT_MAX;
>> -       INIT_LIST_HEAD(&req->list);
> Thanks!
>
>>          list_add_tail(&req->list, &ctrlr->cache);
>>
>>   existing:
>>          switch (state) {
>>          case RPMH_ACTIVE_ONLY_STATE:
>> -               if (req->sleep_val != UINT_MAX)
>> +               if (req->sleep_val != UINT_MAX) {
>>                          req->wake_val = cmd->data;
>> +                       ctrlr->dirty = true;
>> +               }
>>                  break;
>>          case RPMH_WAKE_ONLY_STATE:
>> -               req->wake_val = cmd->data;
>> +               if (req->wake_val != cmd->data) {
>> +                       req->wake_val = cmd->data;
>> +                       ctrlr->dirty = true;
>> +               }
>>                  break;
>>          case RPMH_SLEEP_STATE:
>> -               req->sleep_val = cmd->data;
>> -               break;
>> -       default:
>> +               if (req->sleep_val != cmd->data) {
>> +                       req->sleep_val = cmd->data;
>> +                       ctrlr->dirty = true;
>> +               }
>>                  break;
>>          }
>>
>> -       ctrlr->dirty = true;
>>   unlock:
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>> @@ -287,6 +291,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
>>
>>          spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>          list_add_tail(&req->list, &ctrlr->batch_cache);
>> +       ctrlr->dirty = true;
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>   }
>>
>> @@ -323,6 +328,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>>          list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>>                  kfree(req);
>>          INIT_LIST_HEAD(&ctrlr->batch_cache);
>> +       ctrlr->dirty = true;
>>          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>   }
>>
>> @@ -456,13 +462,9 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>>   int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>   {
>>          struct cache_req *p;
>> +       unsigned long flags;
>>          int ret;
>>
>> -       if (!ctrlr->dirty) {
>> -               pr_debug("Skipping flush, TCS has latest data.\n");
>> -               return 0;
>> -       }
>> -
>>          /* First flush the cached batch requests */
>>          ret = flush_batch(ctrlr);
>>          if (ret)
>> @@ -488,7 +490,9 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>                          return ret;
>>          }
>>
>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>          ctrlr->dirty = false;
>> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> You're acquiring a lock around an operation that's already inherently
> atomic, which is not right. If the comment earlier in this function is
> still correct that "Nobody else should be calling this function other
> than system PM, hence we can run without locks", then you can simply
> remove this hunk and the part moving ->dirty = true into
> invalidate_batch.
>
> However, if rpmh_flush() can now be called in a scenario where
> pre-emption is enabled or multiple cores are alive, then ctrlr->cache
> is no longer adequately protected. You'd need to add a lock
> acquire/release around the list iteration above, and fix up the
> comment.
> -Evan

Hi Evan,

Right, rpmh_flush() can now be called from any CPU. I will remove the
comment above.

The flush_batch() and ctrlr->dirty update parts of rpmh_flush() were
already covered by the cache_lock; however, the entire rpmh_flush() now
needs to be protected by the cache_lock.

I will make these updates in v9.

Thanks,

Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

