linux-kernel.vger.kernel.org archive mirror
* [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets
@ 2020-04-06  6:32 Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 1/6] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-06  6:32 UTC (permalink / raw)
  To: swboyd, evgreen, dianders, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

Changes in v16:
- Use base address in probe only, drop change to save it in drv->base
- Address Doug's comments on changes 5, 6 and 7.
- Add Doug's Reviewed-by.

Changes in v15:
- Address Doug's comments on change 3 of v14 and add Reviewed-by
- Split change 4 of v14 to save drv->base in a new change
- Address Doug's comments on changes 4, 5 and 6 of v14
- Add missing NOTIFY_OK for rpmh_flush() success case
- First 5 changes in this series can be merged without changes 6 and 7

Changes in v14:
- Address Doug's comments on change 3 from v13
- Drop new APIs for start and end transaction from change 4 in v13
- Update change 4 to use cpu pm notifications instead
- Add [5] as change 5 to enable use of WAKE TCS when ACTIVE TCS count is 0
- Add change 6 to allow multiple WAKE TCSes to be used as ACTIVE TCSes
- First 4 changes can be merged even without changes 5 and 6.

Changes in v13:
- Address Stephen's comment to maintain COMPILE_TEST
- Address Doug's comments and add new APIs for start and end transaction

Changes in v12:
- Kconfig change to remove COMPILE_TEST was dropped in v11, reinclude it.

Changes in v11:
- Address Doug's comments on changes 2 and 3
- Include change to invalidate TCSes before flush from [4]

Changes in v10:
- Address Evan's comments to update commit message on change 2
- Add Evan's Reviewed by on change 2
- Remove comment from rpmh_flush() related to last CPU invoking it
- Rebase all changes on top of next-20200302

Changes in v9:
- Keep rpmh_flush() to invoke from within cache_lock
- Remove comments related to only last cpu invoking rpmh_flush()

Changes in v8:
- Address Stephen's comments on changes 2 and 3
- Add Reviewed by from Stephen on change 1

Changes in v7:
- Address Srinivas's comments to update commit text
- Add Reviewed by from Srinivas

Changes in v6:
- Drop changes 1 & 2 from v5 as they already landed in the maintainer tree
- Drop changes 3 & 4 from v5 as there is no user at present for the power domain in RSC
- Rename the subject appropriately since the power domain changes are dropped
- Rebase other changes on top of next-20200221

Changes in v5:
- Add Rob's Acked by on dt-bindings change
- Drop firmware psci change
- Update cpuidle stats in dtsi to follow PC mode
- Include change to update dirty flag when data is updated from [4]
- Add change to invoke rpmh_flush when caches are dirty

Changes in v4:
- Add change to allow hierarchical topology in PC mode
- Drop hierarchical domain idle states converter from v3
- Merge sc7180 dtsi change to add low power modes

Changes in v3:
- Address Rob's comment on dt property value
- Address Stephen's comments on rpmh-rsc driver change
- Include sc7180 cpuidle low power mode changes from [1]
- Include hierarchical domain idle states converter change from [2]

Changes in v2:
- Add Stephen's Reviewed-By to the first three patches
- Addressed Stephen's comments on fourth patch
- Include changes to connect rpmh domain to cpuidle and genpds

The Resource State Coordinator (RSC) is responsible for powering off or
lowering the requirements from the CPU subsystem for associated hardware
like buses, clocks, and regulators when all CPUs and the cluster are
powered down.

The RSC power domain uses the last-man activities provided by the genpd
framework, based on Ulf Hansson's patch series [3], when the cluster of
CPUs enters its deepest idle state. As part of domain power-off, the RSC
can lower resource state requirements by flushing the cached sleep and
wake state votes for various resources.

[1] https://patchwork.kernel.org/patch/11218965
[2] https://patchwork.kernel.org/patch/10941671
[3] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=222355
[4] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=236503
[5] https://patchwork.kernel.org/patch/10818129
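
For reference, a (hypothetical) RPMH client caches its sleep and wake votes
roughly as below; rpmh_flush() is what later programs these cached votes
into the SLEEP and WAKE TCSes. The resource name and data values here are
made up for illustration only:

	#include <linux/device.h>
	#include <soc/qcom/cmd-db.h>
	#include <soc/qcom/rpmh.h>
	#include <soc/qcom/tcs.h>

	static int example_cache_sleep_wake_votes(struct device *dev)
	{
		struct tcs_cmd cmd = {
			/* "xo.lvl" is only an example resource id from cmd-db */
			.addr = cmd_db_read_addr("xo.lvl"),
			.data = 0x0,	/* example sleep value */
		};
		int ret;

		/* Cached in the rpmh ctrlr; written to TCS at rpmh_flush() time */
		ret = rpmh_write(dev, RPMH_SLEEP_STATE, &cmd, 1);
		if (ret)
			return ret;

		cmd.data = 0x3;	/* example wake value */
		return rpmh_write(dev, RPMH_WAKE_ONLY_STATE, &cmd, 1);
	}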

Maulik Shah (5):
  arm64: dts: qcom: sc7180: Add cpuidle low power states
  soc: qcom: rpmh: Update dirty flag only when data changes
  soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new
    data
  soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request

Raju P.L.S.S.S.N (1):
  soc: qcom: rpmh-rsc: Clear active mode configuration for wake TCS

 arch/arm64/boot/dts/qcom/sc7180.dtsi |  78 +++++++++++++
 drivers/soc/qcom/rpmh-internal.h     |  25 ++--
 drivers/soc/qcom/rpmh-rsc.c          | 221 +++++++++++++++++++++++++++--------
 drivers/soc/qcom/rpmh.c              |  76 ++++++------
 4 files changed, 303 insertions(+), 97 deletions(-)

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v16 1/6] arm64: dts: qcom: sc7180: Add cpuidle low power states
  2020-04-06  6:32 [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets Maulik Shah
@ 2020-04-06  6:32 ` Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 2/6] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-06  6:32 UTC (permalink / raw)
  To: swboyd, evgreen, dianders, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah, devicetree

Add device bindings for cpuidle states for cpu devices.

Cc: devicetree@vger.kernel.org
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
---
 arch/arm64/boot/dts/qcom/sc7180.dtsi | 78 ++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
index 998f101..26719cd 100644
--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
@@ -94,6 +94,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x0>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_0>;
@@ -113,6 +116,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x100>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_100>;
@@ -129,6 +135,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x200>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_200>;
@@ -145,6 +154,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x300>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_300>;
@@ -161,6 +173,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x400>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_400>;
@@ -177,6 +192,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x500>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_500>;
@@ -193,6 +211,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x600>;
 			enable-method = "psci";
+			cpu-idle-states = <&BIG_CPU_SLEEP_0
+					   &BIG_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1740>;
 			dynamic-power-coefficient = <405>;
 			next-level-cache = <&L2_600>;
@@ -209,6 +230,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x700>;
 			enable-method = "psci";
+			cpu-idle-states = <&BIG_CPU_SLEEP_0
+					   &BIG_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1740>;
 			dynamic-power-coefficient = <405>;
 			next-level-cache = <&L2_700>;
@@ -255,6 +279,60 @@
 				};
 			};
 		};
+
+		idle-states {
+			entry-method = "psci";
+
+			LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "little-power-down";
+				arm,psci-suspend-param = <0x40000003>;
+				entry-latency-us = <549>;
+				exit-latency-us = <901>;
+				min-residency-us = <1774>;
+				local-timer-stop;
+			};
+
+			LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
+				compatible = "arm,idle-state";
+				idle-state-name = "little-rail-power-down";
+				arm,psci-suspend-param = <0x40000004>;
+				entry-latency-us = <702>;
+				exit-latency-us = <915>;
+				min-residency-us = <4001>;
+				local-timer-stop;
+			};
+
+			BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "big-power-down";
+				arm,psci-suspend-param = <0x40000003>;
+				entry-latency-us = <523>;
+				exit-latency-us = <1244>;
+				min-residency-us = <2207>;
+				local-timer-stop;
+			};
+
+			BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
+				compatible = "arm,idle-state";
+				idle-state-name = "big-rail-power-down";
+				arm,psci-suspend-param = <0x40000004>;
+				entry-latency-us = <526>;
+				exit-latency-us = <1854>;
+				min-residency-us = <5555>;
+				local-timer-stop;
+			};
+
+			CLUSTER_SLEEP_0: cluster-sleep-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "cluster-power-down";
+				arm,psci-suspend-param = <0x40003444>;
+				entry-latency-us = <3263>;
+				exit-latency-us = <6562>;
+				min-residency-us = <9926>;
+				local-timer-stop;
+			};
+		};
 	};
 
 	memory@80000000 {
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v16 2/6] soc: qcom: rpmh: Update dirty flag only when data changes
  2020-04-06  6:32 [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 1/6] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
@ 2020-04-06  6:32 ` Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-06  6:32 UTC (permalink / raw)
  To: swboyd, evgreen, dianders, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

Currently the rpmh ctrlr dirty flag is set in all cases regardless of
whether the data has actually changed or not. Update the dirty flag only
when the data changes to a newer value. Also update the dirty flag every
time the batch cache is updated, since rpmh_flush() may get invoked from
any CPU instead of only the last CPU going to low power mode.

Move the dirty flag updates to happen within the cache_lock and remove
an unnecessary INIT_LIST_HEAD() call and a default case from the switch
statement.

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
---
 drivers/soc/qcom/rpmh.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index eb0ded0..03630ae 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -119,6 +119,7 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
 {
 	struct cache_req *req;
 	unsigned long flags;
+	u32 old_sleep_val, old_wake_val;
 
 	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	req = __find_req(ctrlr, cmd->addr);
@@ -133,26 +134,27 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
 
 	req->addr = cmd->addr;
 	req->sleep_val = req->wake_val = UINT_MAX;
-	INIT_LIST_HEAD(&req->list);
 	list_add_tail(&req->list, &ctrlr->cache);
 
 existing:
+	old_sleep_val = req->sleep_val;
+	old_wake_val = req->wake_val;
+
 	switch (state) {
 	case RPMH_ACTIVE_ONLY_STATE:
-		if (req->sleep_val != UINT_MAX)
-			req->wake_val = cmd->data;
-		break;
 	case RPMH_WAKE_ONLY_STATE:
 		req->wake_val = cmd->data;
 		break;
 	case RPMH_SLEEP_STATE:
 		req->sleep_val = cmd->data;
 		break;
-	default:
-		break;
 	}
 
-	ctrlr->dirty = true;
+	ctrlr->dirty = (req->sleep_val != old_sleep_val ||
+			req->wake_val != old_wake_val) &&
+			req->sleep_val != UINT_MAX &&
+			req->wake_val != UINT_MAX;
+
 unlock:
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
@@ -287,6 +289,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
 
 	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_add_tail(&req->list, &ctrlr->batch_cache);
+	ctrlr->dirty = true;
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }
 
@@ -323,6 +326,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
 	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
 		kfree(req);
 	INIT_LIST_HEAD(&ctrlr->batch_cache);
+	ctrlr->dirty = true;
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }
 
@@ -507,7 +511,6 @@ int rpmh_invalidate(const struct device *dev)
 	int ret;
 
 	invalidate_batch(ctrlr);
-	ctrlr->dirty = true;
 
 	do {
 		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v16 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
  2020-04-06  6:32 [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 1/6] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 2/6] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
@ 2020-04-06  6:32 ` Maulik Shah
  2020-04-09 20:23   ` Stephen Boyd
  2020-04-06  6:32 ` [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Maulik Shah @ 2020-04-06  6:32 UTC (permalink / raw)
  To: swboyd, evgreen, dianders, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

TCSes have previously programmed data when rpmh_flush is called.
This can cause old data to trigger along with newly flushed.

Fix this by cleaning SLEEP and WAKE TCSes before new data is flushed.

With this there is no need to invoke rpmh_rsc_invalidate() call from
rpmh_invalidate().

Simplify rpmh_invalidate() by moving invalidate_batch() inside.

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
---
 drivers/soc/qcom/rpmh.c | 41 ++++++++++++++++++-----------------------
 1 file changed, 18 insertions(+), 23 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 03630ae..a75f3df 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -317,19 +317,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
 	return ret;
 }
 
-static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
-{
-	struct batch_cache_req *req, *tmp;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
-	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
-		kfree(req);
-	INIT_LIST_HEAD(&ctrlr->batch_cache);
-	ctrlr->dirty = true;
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
-}
-
 /**
  * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
  * batch to finish.
@@ -467,6 +454,13 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 		return 0;
 	}
 
+	/* Invalidate the TCSes first to avoid stale data */
+	do {
+		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
+	} while (ret == -EAGAIN);
+	if (ret)
+		return ret;
+
 	/* First flush the cached batch requests */
 	ret = flush_batch(ctrlr);
 	if (ret)
@@ -498,24 +492,25 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 }
 
 /**
- * rpmh_invalidate: Invalidate all sleep and active sets
- * sets.
+ * rpmh_invalidate: Invalidate sleep and wake sets in batch_cache
  *
  * @dev: The device making the request
  *
- * Invalidate the sleep and active values in the TCS blocks.
+ * Invalidate the sleep and wake values in batch_cache.
  */
 int rpmh_invalidate(const struct device *dev)
 {
 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
-	int ret;
-
-	invalidate_batch(ctrlr);
+	struct batch_cache_req *req, *tmp;
+	unsigned long flags;
 
-	do {
-		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-	} while (ret == -EAGAIN);
+	spin_lock_irqsave(&ctrlr->cache_lock, flags);
+	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
+		kfree(req);
+	INIT_LIST_HEAD(&ctrlr->batch_cache);
+	ctrlr->dirty = true;
+	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(rpmh_invalidate);
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-06  6:32 [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets Maulik Shah
                   ` (2 preceding siblings ...)
  2020-04-06  6:32 ` [PATCH v16 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
@ 2020-04-06  6:32 ` Maulik Shah
  2020-04-08  2:50   ` Stephen Boyd
  2020-04-10  4:15   ` Stephen Boyd
  2020-04-06  6:32 ` [PATCH v16 5/6] soc: qcom: rpmh-rsc: Clear active mode configuration for wake TCS Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request Maulik Shah
  5 siblings, 2 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-06  6:32 UTC (permalink / raw)
  To: swboyd, evgreen, dianders, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

Add changes to invoke rpmh flush() from CPU PM notification.
This is done when the last the cpu is entering power collapse and
controller is not busy.

Controllers that do have 'HW solver' mode do not need to register
for CPU PM notification. They may be in autonomous mode executing
low power mode and do not require rpmh_flush() to happen from CPU
PM notification.

Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
---
 drivers/soc/qcom/rpmh-internal.h |  25 +++++---
 drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
 drivers/soc/qcom/rpmh.c          |  26 +++------
 3 files changed, 137 insertions(+), 37 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 6eec32b..e9a90cb 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -84,23 +84,32 @@ struct rpmh_ctrlr {
  * struct rsc_drv: the Direct Resource Voter (DRV) of the
  * Resource State Coordinator controller (RSC)
  *
- * @name:       controller identifier
- * @tcs_base:   start address of the TCS registers in this controller
- * @id:         instance id in the controller (Direct Resource Voter)
- * @num_tcs:    number of TCSes in this DRV
- * @tcs:        TCS groups
- * @tcs_in_use: s/w state of the TCS
- * @lock:       synchronize state of the controller
- * @client:     handle to the DRV's client.
+ * @name:               Controller identifier
+ * @tcs_base:           Start address of the TCS registers in this controller
+ * @id:                 Instance id in the controller (Direct Resource Voter)
+ * @num_tcs:            Number of TCSes in this DRV
+ * @rsc_pm:             CPU PM notifier for controller
+ *                      Used when solver mode is not present
+ * @cpus_entered_pm:    CPU mask for cpus in idle power collapse
+ *                      Used when solver mode is not present
+ * @tcs:                TCS groups
+ * @tcs_in_use:         S/W state of the TCS
+ * @lock:               Synchronize state of the controller
+ * @pm_lock:            Synchronize during PM notifications
+ *                      Used when solver mode is not present
+ * @client:             Handle to the DRV's client.
  */
 struct rsc_drv {
 	const char *name;
 	void __iomem *tcs_base;
 	int id;
 	int num_tcs;
+	struct notifier_block rsc_pm;
+	struct cpumask cpus_entered_pm;
 	struct tcs_group tcs[TCS_TYPE_NR];
 	DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
 	spinlock_t lock;
+	spinlock_t pm_lock;
 	struct rpmh_ctrlr client;
 };
 
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index b718221..fbe1f3e 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -6,6 +6,7 @@
 #define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
 
 #include <linux/atomic.h>
+#include <linux/cpu_pm.h>
 #include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
@@ -30,7 +31,12 @@
 #define RSC_DRV_TCS_OFFSET		672
 #define RSC_DRV_CMD_OFFSET		20
 
-/* DRV Configuration Information Register */
+/* DRV HW Solver Configuration Information Register */
+#define DRV_SOLVER_CONFIG		0x04
+#define DRV_HW_SOLVER_MASK		1
+#define DRV_HW_SOLVER_SHIFT		24
+
+/* DRV TCS Configuration Information Register */
 #define DRV_PRNT_CHLD_CONFIG		0x0C
 #define DRV_NUM_TCS_MASK		0x3F
 #define DRV_NUM_TCS_SHIFT		6
@@ -521,8 +527,86 @@ int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
 	return tcs_ctrl_write(drv, msg);
 }
 
+/**
+ * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
+ *
+ * @drv: The controller
+ *
+ * Checks if any of the AMCs are busy in handling ACTIVE sets.
+ * This is called from the last cpu powering down before flushing
+ * SLEEP and WAKE sets. If AMCs are busy, controller can not enter
+ * power collapse, so deny from the last cpu's pm notification.
+ *
+ * Return:
+ * * False		- AMCs are idle
+ * * True		- AMCs are busy
+ */
+static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
+{
+	int m;
+	struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
+
+	/*
+	 * If we made an active request on a RSC that does not have a
+	 * dedicated TCS for active state use, then re-purposed wake TCSes
+	 * should be checked for not busy.
+	 *
+	 * Since this is called from the last cpu, need not take drv or tcs
+	 * lock before checking tcs_is_free().
+	 */
+	if (!tcs->num_tcs)
+		tcs = get_tcs_of_type(drv, WAKE_TCS);
+
+	for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
+		if (!tcs_is_free(drv, m))
+			return true;
+	}
+
+	return false;
+}
+
+static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
+				    unsigned long action, void *v)
+{
+	struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
+	int ret = NOTIFY_OK;
+
+	spin_lock(&drv->pm_lock);
+
+	switch (action) {
+	case CPU_PM_ENTER:
+		cpumask_set_cpu(raw_smp_processor_id(),
+				&drv->cpus_entered_pm);
+
+		if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
+			goto exit;
+		break;
+	case CPU_PM_ENTER_FAILED:
+	case CPU_PM_EXIT:
+		cpumask_clear_cpu(raw_smp_processor_id(),
+				  &drv->cpus_entered_pm);
+		goto exit;
+	}
+
+	ret = rpmh_rsc_ctrlr_is_busy(drv);
+	if (ret) {
+		ret = NOTIFY_BAD;
+		goto exit;
+	}
+
+	ret = rpmh_flush(&drv->client);
+	if (ret)
+		ret = NOTIFY_BAD;
+	else
+		ret = NOTIFY_OK;
+
+exit:
+	spin_unlock(&drv->pm_lock);
+	return ret;
+}
+
 static int rpmh_probe_tcs_config(struct platform_device *pdev,
-				 struct rsc_drv *drv)
+				 struct rsc_drv *drv, void __iomem *base)
 {
 	struct tcs_type_config {
 		u32 type;
@@ -532,15 +616,6 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev,
 	u32 config, max_tcs, ncpt, offset;
 	int i, ret, n, st = 0;
 	struct tcs_group *tcs;
-	struct resource *res;
-	void __iomem *base;
-	char drv_id[10] = {0};
-
-	snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id);
-	base = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(base))
-		return PTR_ERR(base);
 
 	ret = of_property_read_u32(dn, "qcom,tcs-offset", &offset);
 	if (ret)
@@ -620,7 +695,11 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
 {
 	struct device_node *dn = pdev->dev.of_node;
 	struct rsc_drv *drv;
+	struct resource *res;
+	char drv_id[10] = {0};
 	int ret, irq;
+	u32 solver_config;
+	void __iomem *base;
 
 	/*
 	 * Even though RPMh doesn't directly use cmd-db, all of its children
@@ -646,7 +725,13 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
 	if (!drv->name)
 		drv->name = dev_name(&pdev->dev);
 
-	ret = rpmh_probe_tcs_config(pdev, drv);
+	snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id);
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	ret = rpmh_probe_tcs_config(pdev, drv, base);
 	if (ret)
 		return ret;
 
@@ -663,6 +748,20 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	/*
+	 * CPU PM notification are not required for controllers that support
+	 * 'HW solver' mode where they can be in autonomous mode executing low
+	 * power mode to power down.
+	 */
+	solver_config = readl_relaxed(base + DRV_SOLVER_CONFIG);
+	solver_config &= DRV_HW_SOLVER_MASK << DRV_HW_SOLVER_SHIFT;
+	solver_config = solver_config >> DRV_HW_SOLVER_SHIFT;
+	if (!solver_config) {
+		drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
+		spin_lock_init(&drv->pm_lock);
+		cpu_pm_register_notifier(&drv->rsc_pm);
+	}
+
 	/* Enable the active TCS to send requests immediately */
 	write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, drv->tcs[ACTIVE_TCS].mask);
 
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index a75f3df..88f8801 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -297,12 +297,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
 {
 	struct batch_cache_req *req;
 	const struct rpmh_request *rpm_msg;
-	unsigned long flags;
 	int ret = 0;
 	int i;
 
 	/* Send Sleep/Wake requests to the controller, expect no response */
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_for_each_entry(req, &ctrlr->batch_cache, list) {
 		for (i = 0; i < req->count; i++) {
 			rpm_msg = req->rpm_msgs + i;
@@ -312,7 +310,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
 				break;
 		}
 	}
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
 	return ret;
 }
@@ -433,16 +430,17 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
 }
 
 /**
- * rpmh_flush: Flushes the buffered active and sleep sets to TCS
+ * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
  *
- * @ctrlr: controller making request to flush cached data
+ * @ctrlr: Controller making request to flush cached data
  *
- * Return: -EBUSY if the controller is busy, probably waiting on a response
- * to a RPMH request sent earlier.
+ * This function is called from sleep code on the last CPU
+ * (thus no spinlock needed).
  *
- * This function is always called from the sleep code from the last CPU
- * that is powering down the entire system. Since no other RPMH API would be
- * executing at this time, it is safe to run lockless.
+ * Return:
+ * * 0          - Success
+ * * -EAGAIN    - Retry again
+ * * Error code - Otherwise
  */
 int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 {
@@ -455,9 +453,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 	}
 
 	/* Invalidate the TCSes first to avoid stale data */
-	do {
-		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-	} while (ret == -EAGAIN);
+	ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
 	if (ret)
 		return ret;
 
@@ -466,10 +462,6 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 	if (ret)
 		return ret;
 
-	/*
-	 * Nobody else should be calling this function other than system PM,
-	 * hence we can run without locks.
-	 */
 	list_for_each_entry(p, &ctrlr->cache, list) {
 		if (!is_req_valid(p)) {
 			pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v16 5/6] soc: qcom: rpmh-rsc: Clear active mode configuration for wake TCS
  2020-04-06  6:32 [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets Maulik Shah
                   ` (3 preceding siblings ...)
  2020-04-06  6:32 ` [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
@ 2020-04-06  6:32 ` Maulik Shah
  2020-04-06  6:32 ` [PATCH v16 6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request Maulik Shah
  5 siblings, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-06  6:32 UTC (permalink / raw)
  To: swboyd, evgreen, dianders, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Raju P.L.S.S.S.N, Maulik Shah

From: "Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>

For RSCs that have sleep & wake TCSes but no dedicated active TCS, a wake
TCS can be re-purposed to send active requests. Once the active requests
are sent and the response is received, the active mode configuration needs
to be cleared so that the controller can use the wake TCS for sending wake
requests.

Introduce enable_tcs_irq() to enable completion IRQ for repurposed TCSes.

Fixes: 2de4b8d33eab (drivers: qcom: rpmh-rsc: allow active requests from wake TCS)
Signed-off-by: Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
[mkshah: call enable_tcs_irq() within drv->lock, update commit message]
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
---
 drivers/soc/qcom/rpmh-rsc.c | 77 +++++++++++++++++++++++++++++++--------------
 1 file changed, 54 insertions(+), 23 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index fbe1f3e..5128e13 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -207,6 +207,42 @@ static const struct tcs_request *get_req_from_tcs(struct rsc_drv *drv,
 	return NULL;
 }
 
+static void __tcs_set_trigger(struct rsc_drv *drv, int tcs_id, bool trigger)
+{
+	u32 enable;
+
+	/*
+	 * HW req: Clear the DRV_CONTROL and enable TCS again
+	 * While clearing ensure that the AMC mode trigger is cleared
+	 * and then the mode enable is cleared.
+	 */
+	enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, 0);
+	enable &= ~TCS_AMC_MODE_TRIGGER;
+	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+	enable &= ~TCS_AMC_MODE_ENABLE;
+	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+
+	if (trigger) {
+		/* Enable the AMC mode on the TCS and then trigger the TCS */
+		enable = TCS_AMC_MODE_ENABLE;
+		write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+		enable |= TCS_AMC_MODE_TRIGGER;
+		write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+	}
+}
+
+static void enable_tcs_irq(struct rsc_drv *drv, int tcs_id, bool enable)
+{
+	u32 data;
+
+	data = read_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0);
+	if (enable)
+		data |= BIT(tcs_id);
+	else
+		data &= ~BIT(tcs_id);
+	write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, data);
+}
+
 /**
  * tcs_tx_done: TX Done interrupt handler
  */
@@ -243,6 +279,14 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
 		}
 
 		trace_rpmh_tx_done(drv, i, req, err);
+
+		/*
+		 * If wake tcs was re-purposed for sending active
+		 * votes, clear AMC trigger & enable modes and
+		 * disable interrupt for this TCS
+		 */
+		if (!drv->tcs[ACTIVE_TCS].num_tcs)
+			__tcs_set_trigger(drv, i, false);
 skip:
 		/* Reclaim the TCS */
 		write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0);
@@ -250,6 +294,13 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
 		write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i));
 		spin_lock(&drv->lock);
 		clear_bit(i, drv->tcs_in_use);
+		/*
+		 * Disable interrupt for WAKE TCS to avoid being
+		 * spammed with interrupts coming when the solver
+		 * sends its wake votes.
+		 */
+		if (!drv->tcs[ACTIVE_TCS].num_tcs)
+			enable_tcs_irq(drv, i, false);
 		spin_unlock(&drv->lock);
 		if (req)
 			rpmh_tx_done(req, err);
@@ -291,28 +342,6 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id,
 	write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, cmd_enable);
 }
 
-static void __tcs_trigger(struct rsc_drv *drv, int tcs_id)
-{
-	u32 enable;
-
-	/*
-	 * HW req: Clear the DRV_CONTROL and enable TCS again
-	 * While clearing ensure that the AMC mode trigger is cleared
-	 * and then the mode enable is cleared.
-	 */
-	enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, 0);
-	enable &= ~TCS_AMC_MODE_TRIGGER;
-	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
-	enable &= ~TCS_AMC_MODE_ENABLE;
-	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
-
-	/* Enable the AMC mode on the TCS and then trigger the TCS */
-	enable = TCS_AMC_MODE_ENABLE;
-	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
-	enable |= TCS_AMC_MODE_TRIGGER;
-	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
-}
-
 static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
 				  const struct tcs_request *msg)
 {
@@ -383,10 +412,12 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
 
 	tcs->req[tcs_id - tcs->offset] = msg;
 	set_bit(tcs_id, drv->tcs_in_use);
+	if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS)
+		enable_tcs_irq(drv, tcs_id, true);
 	spin_unlock(&drv->lock);
 
 	__tcs_buffer_write(drv, tcs_id, 0, msg);
-	__tcs_trigger(drv, tcs_id);
+	__tcs_set_trigger(drv, tcs_id, true);
 
 done_write:
 	spin_unlock_irqrestore(&tcs->lock, flags);
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v16 6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request
  2020-04-06  6:32 [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets Maulik Shah
                   ` (4 preceding siblings ...)
  2020-04-06  6:32 ` [PATCH v16 5/6] soc: qcom: rpmh-rsc: Clear active mode configuration for wake TCS Maulik Shah
@ 2020-04-06  6:32 ` Maulik Shah
  5 siblings, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-06  6:32 UTC (permalink / raw)
  To: swboyd, evgreen, dianders, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

When more than one WAKE TCS is available and there is no dedicated
ACTIVE TCS, invalidating all WAKE TCSes and waiting for the current
transfer to complete in the first WAKE TCS blocks another free WAKE TCS
from being used to complete the current request.

Remove the rpmh_rsc_invalidate() call from tcs_write() when a WAKE TCS
is re-purposed for active mode use. Clear only the currently used
WAKE TCS's register configuration.

Fixes: 2de4b8d33eab (drivers: qcom: rpmh-rsc: allow active requests from wake TCS)
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
---
 drivers/soc/qcom/rpmh-rsc.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 5128e13..34b0e14 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -154,7 +154,7 @@ int rpmh_rsc_invalidate(struct rsc_drv *drv)
 static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
 					 const struct tcs_request *msg)
 {
-	int type, ret;
+	int type;
 	struct tcs_group *tcs;
 
 	switch (msg->state) {
@@ -175,19 +175,10 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
 	 * If we are making an active request on a RSC that does not have a
 	 * dedicated TCS for active state use, then re-purpose a wake TCS to
 	 * send active votes.
-	 * NOTE: The driver must be aware that this RSC does not have a
-	 * dedicated AMC, and therefore would invalidate the sleep and wake
-	 * TCSes before making an active state request.
 	 */
 	tcs = get_tcs_of_type(drv, type);
-	if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) {
+	if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs)
 		tcs = get_tcs_of_type(drv, WAKE_TCS);
-		if (tcs->num_tcs) {
-			ret = rpmh_rsc_invalidate(drv);
-			if (ret)
-				return ERR_PTR(ret);
-		}
-	}
 
 	return tcs;
 }
@@ -412,8 +403,16 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
 
 	tcs->req[tcs_id - tcs->offset] = msg;
 	set_bit(tcs_id, drv->tcs_in_use);
-	if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS)
+	if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) {
+		/*
+		 * Clear previously programmed WAKE commands in selected
+		 * repurposed TCS to avoid triggering them. tcs->slots will be
+		 * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate()
+		 */
+		write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0);
+		write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
 		enable_tcs_irq(drv, tcs_id, true);
+	}
 	spin_unlock(&drv->lock);
 
 	__tcs_buffer_write(drv, tcs_id, 0, msg);
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-06  6:32 ` [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
@ 2020-04-08  2:50   ` Stephen Boyd
  2020-04-08  7:08     ` Maulik Shah
  2020-04-10  4:15   ` Stephen Boyd
  1 sibling, 1 reply; 17+ messages in thread
From: Stephen Boyd @ 2020-04-08  2:50 UTC (permalink / raw)
  To: Maulik Shah, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

Quoting Maulik Shah (2020-04-05 23:32:19)
> Add changes to invoke rpmh flush() from CPU PM notification.
> This is done when the last the cpu is entering power collapse and
> controller is not busy.
> 
> Controllers that do have 'HW solver' mode do not need to register

Controllers that have 'HW solver' mode don't need to register? The 'do
have' is throwing me off.

> for CPU PM notification. They may be in autonomous mode executing
> low power mode and do not require rpmh_flush() to happen from CPU
> PM notification.
> 
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Douglas Anderson <dianders@chromium.org>
> ---
>  drivers/soc/qcom/rpmh-internal.h |  25 +++++---
>  drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
>  drivers/soc/qcom/rpmh.c          |  26 +++------
>  3 files changed, 137 insertions(+), 37 deletions(-)
> 
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index b718221..fbe1f3e 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -6,6 +6,7 @@
[...]
> +
> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
> +                                   unsigned long action, void *v)
> +{
> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
> +       int ret = NOTIFY_OK;
> +
> +       spin_lock(&drv->pm_lock);
> +
> +       switch (action) {
> +       case CPU_PM_ENTER:

I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
least, the genpd work that has gone on for cpuidle could be used here in
place of CPU_PM notifiers? And so this isn't actually any different
than what was proposed originally to use genpd for this?

> +               cpumask_set_cpu(raw_smp_processor_id(),

Why do we need to use raw_smp_processor_id()? smp_processor_id() should
work just as well?

> +                               &drv->cpus_entered_pm);
> +
> +               if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
> +                       goto exit;
> +               break;
> +       case CPU_PM_ENTER_FAILED:
> +       case CPU_PM_EXIT:
> +               cpumask_clear_cpu(raw_smp_processor_id(),
> +                                 &drv->cpus_entered_pm);
> +               goto exit;
> +       }
> +
> +       ret = rpmh_rsc_ctrlr_is_busy(drv);
> +       if (ret) {
> +               ret = NOTIFY_BAD;
> +               goto exit;
> +       }
> +
> +       ret = rpmh_flush(&drv->client);
> +       if (ret)
> +               ret = NOTIFY_BAD;
> +       else
> +               ret = NOTIFY_OK;
> +
> +exit:
> +       spin_unlock(&drv->pm_lock);
> +       return ret;
> +}
> +


* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-08  2:50   ` Stephen Boyd
@ 2020-04-08  7:08     ` Maulik Shah
  2020-04-09  8:16       ` Stephen Boyd
  0 siblings, 1 reply; 17+ messages in thread
From: Maulik Shah @ 2020-04-08  7:08 UTC (permalink / raw)
  To: Stephen Boyd, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao

Hi,

On 4/8/2020 8:20 AM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-04-05 23:32:19)
>> Add changes to invoke rpmh flush() from CPU PM notification.
>> This is done when the last the cpu is entering power collapse and
>> controller is not busy.
>>
>> Controllers that do have 'HW solver' mode do not need to register
> Controllers that have 'HW solver' mode don't need to register? The 'do
> have' is throwing me off.
Okay, I will remove 'do' from this line.
>> for CPU PM notification. They may be in autonomous mode executing
>> low power mode and do not require rpmh_flush() to happen from CPU
>> PM notification.
>>
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Douglas Anderson <dianders@chromium.org>
>> ---
>>   drivers/soc/qcom/rpmh-internal.h |  25 +++++---
>>   drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
>>   drivers/soc/qcom/rpmh.c          |  26 +++------
>>   3 files changed, 137 insertions(+), 37 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index b718221..fbe1f3e 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -6,6 +6,7 @@
> [...]
>> +
>> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
>> +                                   unsigned long action, void *v)
>> +{
>> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
>> +       int ret = NOTIFY_OK;
>> +
>> +       spin_lock(&drv->pm_lock);
>> +
>> +       switch (action) {
>> +       case CPU_PM_ENTER:
> I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
> least, the genpd work that has gone on for cpuidle could be used here in
> place of CPU_PM notifiers?

genpd was used in v3 and v4 of this series, where from pd's .power_off  
function, rpmh_flush() was invoked.

genpd can be useful if target firmware supports PSCI's OSI mode, while 
sc7180 is non-OSI target.

The current approach (using cpu pm notification) can be used for both OSI 
and non-OSI targets to invoke rpmh_flush() when last cpu goes to power down.

> And so this isn't actually any different
> than what was proposed originally to use genpd for this?
>
>> +               cpumask_set_cpu(raw_smp_processor_id(),
> Why do we need to use raw_smp_processor_id()? smp_processor_id() should
> work just as well?
Yes, seems it will work as well. I will change to use smp_processor_id().
>
>> +                               &drv->cpus_entered_pm);
>> +
>> +               if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
>> +                       goto exit;
>> +               break;
>> +       case CPU_PM_ENTER_FAILED:
>> +       case CPU_PM_EXIT:
>> +               cpumask_clear_cpu(raw_smp_processor_id(),
>> +                                 &drv->cpus_entered_pm);
>> +               goto exit;
>> +       }
>> +
>> +       ret = rpmh_rsc_ctrlr_is_busy(drv);
>> +       if (ret) {
>> +               ret = NOTIFY_BAD;
>> +               goto exit;
>> +       }
>> +
>> +       ret = rpmh_flush(&drv->client);
>> +       if (ret)
>> +               ret = NOTIFY_BAD;
>> +       else
>> +               ret = NOTIFY_OK;
>> +
>> +exit:
>> +       spin_unlock(&drv->pm_lock);
>> +       return ret;
>> +}
>> +
Thanks,
Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-08  7:08     ` Maulik Shah
@ 2020-04-09  8:16       ` Stephen Boyd
  2020-04-12 13:51         ` Maulik Shah
  0 siblings, 1 reply; 17+ messages in thread
From: Stephen Boyd @ 2020-04-09  8:16 UTC (permalink / raw)
  To: Maulik Shah, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao

Quoting Maulik Shah (2020-04-08 00:08:48)
> Hi,
> 
> On 4/8/2020 8:20 AM, Stephen Boyd wrote:
> > Quoting Maulik Shah (2020-04-05 23:32:19)
> >> for CPU PM notification. They may be in autonomous mode executing
> >> low power mode and do not require rpmh_flush() to happen from CPU
> >> PM notification.
> >>
> >> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> >> Reviewed-by: Douglas Anderson <dianders@chromium.org>
> >> ---
> >>   drivers/soc/qcom/rpmh-internal.h |  25 +++++---
> >>   drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
> >>   drivers/soc/qcom/rpmh.c          |  26 +++------
> >>   3 files changed, 137 insertions(+), 37 deletions(-)
> >>
> >> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> >> index b718221..fbe1f3e 100644
> >> --- a/drivers/soc/qcom/rpmh-rsc.c
> >> +++ b/drivers/soc/qcom/rpmh-rsc.c
> >> @@ -6,6 +6,7 @@
> > [...]
> >> +
> >> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
> >> +                                   unsigned long action, void *v)
> >> +{
> >> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
> >> +       int ret = NOTIFY_OK;
> >> +
> >> +       spin_lock(&drv->pm_lock);
> >> +
> >> +       switch (action) {
> >> +       case CPU_PM_ENTER:
> > I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
> > least, the genpd work that has gone on for cpuidle could be used here in
> > place of CPU_PM notifiers?
> 
> genpd was used in v3 and v4 of this series, where from pd's .power_off  
> function, rpmh_flush() was invoked.
> 
> genpd can be useful if target firmware supports PSCI's OSI mode, while 
> sc7180 is non-OSI target.
> 
> The current approach (using cpu pm notification) can be used for both OSI 
> and non-OSI targets to invoke rpmh_flush() when last cpu goes to power down.

Ok. Doug and I talked today and I re-read the earlier series and I think
Sudeep was suggesting that if we're doing last man down activities here
then we're better off using OSI vs. PC mode. But I can only assume
that's because the concern is something here requires software's help
for last man down activities like lowering a CPU voltage setting or
turning off some power switch to a hardware block through some i2c
message. The way I understand it the last man down activities here are
just setting up the sleep and wake TCS FIFOs to "do the right thing"
when the last CPU actually goes down and the first CPU wakes up by
running through the pile of "instructions" that we program into the
FIFOs.

The execution of those instructions is all done in hardware so any
aggregation or coordination between CPUs is not really important here.
All that matters is that we set up the sleep and wake TCS FIFOs properly
so that _if_ the whole CPU subsystem goes to sleep we're going to let
the hardware turn off the right stuff and lower voltages, etc. and
vice-versa for wake. If we didn't have to share the TCS FIFOs with
active mode control then we could just tweak the sleep and wake TCS
buckets at runtime and let the hardware state of the CPUs decide to
trigger them at the right time. Unfortunately, we don't have that luxury
and we're stuck repurposing the sleep TCS FIFO to control things like
regulator voltages when the CPU is awake. Yuck!

> 
> > And so this isn't actually any different
> > than what was proposed originally to use genpd for this?
> >

I guess this answer to this is yes. Which is fine. CPU PM notifiers are
still used by various drivers to do things like save/restore state of
devices that lose state when the CPUs power down. The use of genpd is
helpful for OSI mode because it can describe how/when big and little
clusters are powered off by putting them in different genpds. For
counting the last CPU to turn off it seems simpler to just register for
CPU PM notifiers and not care about genpd logic and nesting clusters,
etc. I'm happy to see this not be a blocker.


* Re: [PATCH v16 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
  2020-04-06  6:32 ` [PATCH v16 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
@ 2020-04-09 20:23   ` Stephen Boyd
  2020-04-12 13:25     ` Maulik Shah
  0 siblings, 1 reply; 17+ messages in thread
From: Stephen Boyd @ 2020-04-09 20:23 UTC (permalink / raw)
  To: Maulik Shah, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

Quoting Maulik Shah (2020-04-05 23:32:18)
> TCSes have previously programmed data when rpmh_flush is called.

rpmh_flush()

> This can cause old data to trigger along with newly flushed.
> 
> Fix this by cleaning SLEEP and WAKE TCSes before new data is flushed.
> 
> With this there is no need to invoke rpmh_rsc_invalidate() call from
> rpmh_invalidate().
> 
> Simplify rpmh_invalidate() by moving invalidate_batch() inside.
> 
> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Douglas Anderson <dianders@chromium.org>
> ---
>  drivers/soc/qcom/rpmh.c | 41 ++++++++++++++++++-----------------------
>  1 file changed, 18 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index 03630ae..a75f3df 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -498,24 +492,25 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>  }
>  
>  /**
> - * rpmh_invalidate: Invalidate all sleep and active sets
> - * sets.
> + * rpmh_invalidate: Invalidate sleep and wake sets in batch_cache
>   *
>   * @dev: The device making the request
>   *
> - * Invalidate the sleep and active values in the TCS blocks.
> + * Invalidate the sleep and wake values in batch_cache.
>   */
>  int rpmh_invalidate(const struct device *dev)
>  {
>         struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> -       int ret;
> -
> -       invalidate_batch(ctrlr);
> +       struct batch_cache_req *req, *tmp;
> +       unsigned long flags;
>  
> -       do {
> -               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
> -       } while (ret == -EAGAIN);
> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> +       list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
> +               kfree(req);
> +       INIT_LIST_HEAD(&ctrlr->batch_cache);
> +       ctrlr->dirty = true;
> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>  
> -       return ret;
> +       return 0;

Now this always returns 0. Maybe it should become a void function, but
doing that requires a change in the interconnect code so maybe do it
later.


* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-06  6:32 ` [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
  2020-04-08  2:50   ` Stephen Boyd
@ 2020-04-10  4:15   ` Stephen Boyd
  2020-04-10 14:52     ` Doug Anderson
  2020-04-12 14:04     ` Maulik Shah
  1 sibling, 2 replies; 17+ messages in thread
From: Stephen Boyd @ 2020-04-10  4:15 UTC (permalink / raw)
  To: Maulik Shah, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao,
	Maulik Shah

Quoting Maulik Shah (2020-04-05 23:32:19)
> Add changes to invoke rpmh flush() from CPU PM notification.
> This is done when the last the cpu is entering power collapse and

'power collapse' is a qcom-ism. Maybe say something like "deep CPU idle
states"?

> controller is not busy.
> 
> Controllers that do have 'HW solver' mode do not need to register
> for CPU PM notification. They may be in autonomous mode executing
> low power mode and do not require rpmh_flush() to happen from CPU
> PM notification.

Can you provide an example of a HW solver mode controller? Presumably
the display RSC is one of these?

> 
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Douglas Anderson <dianders@chromium.org>

> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index b718221..fbe1f3e 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -521,8 +527,86 @@ int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
>         return tcs_ctrl_write(drv, msg);
>  }
>  
> +/**
> + * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
> + *
> + * @drv: The controller
> + *
> + * Checks if any of the AMCs are busy in handling ACTIVE sets.
> + * This is called from the last cpu powering down before flushing
> + * SLEEP and WAKE sets. If AMCs are busy, controller can not enter
> + * power collapse, so deny from the last cpu's pm notification.
> + *
> + * Return:
> + * * False             - AMCs are idle
> + * * True              - AMCs are busy
> + */
> +static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)

Can drv be const? Would be nice to make it const in some places in this
driver.

> +{
> +       int m;
> +       struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
> +
> +       /*
> +        * If we made an active request on a RSC that does not have a
> +        * dedicated TCS for active state use, then re-purposed wake TCSes
> +        * should be checked for not busy.

not busy, because we use wake TCSes for active requests in this case.

> +        *
> +        * Since this is called from the last cpu, need not take drv or tcs
> +        * lock before checking tcs_is_free().
> +        */
> +       if (!tcs->num_tcs)
> +               tcs = get_tcs_of_type(drv, WAKE_TCS);
> +
> +       for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
> +               if (!tcs_is_free(drv, m))
> +                       return true;
> +       }
> +
> +       return false;
> +}
[...]
> +
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index a75f3df..88f8801 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -433,16 +430,17 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>  }
>  
>  /**
> - * rpmh_flush: Flushes the buffered active and sleep sets to TCS
> + * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
>   *
> - * @ctrlr: controller making request to flush cached data
> + * @ctrlr: Controller making request to flush cached data
>   *
> - * Return: -EBUSY if the controller is busy, probably waiting on a response
> - * to a RPMH request sent earlier.
> + * This function is called from sleep code on the last CPU
> + * (thus no spinlock needed).

Might be good to stick a lockdep_assert_irqs_disabled() in this function
then. That would document that this function should only be called when
irqs are disabled.
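
Something like this at the top of the function, perhaps (untested, just a
sketch; the rest of the function would stay as it is):

	int rpmh_flush(struct rpmh_ctrlr *ctrlr)
	{
		struct cache_req *p;
		int ret;

		/* Document the "called from last CPU with irqs off" assumption */
		lockdep_assert_irqs_disabled();

		if (!ctrlr->dirty) {
			pr_debug("Skipping flush, TCS has latest data.\n");
			return 0;
		}
		...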

>   *
> - * This function is always called from the sleep code from the last CPU
> - * that is powering down the entire system. Since no other RPMH API would be
> - * executing at this time, it is safe to run lockless.
> + * Return:
> + * * 0          - Success
> + * * -EAGAIN    - Retry again
> + * * Error code - Otherwise
>   */
>  int rpmh_flush(struct rpmh_ctrlr *ctrlr)

This function name keeps throwing me off. Can we please call it
something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
to imply there's some sort of cache going on, but that's not really the
case. We're programming a couple TCS FIFOs so that they can be used
across deep CPU sleep states.

>  {
> @@ -455,9 +453,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>         }
>  
>         /* Invalidate the TCSes first to avoid stale data */
> -       do {
> -               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
> -       } while (ret == -EAGAIN);
> +       ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>         if (ret)
>                 return ret;
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-10  4:15   ` Stephen Boyd
@ 2020-04-10 14:52     ` Doug Anderson
  2020-04-12 14:10       ` Maulik Shah
  2020-04-12 14:04     ` Maulik Shah
  1 sibling, 1 reply; 17+ messages in thread
From: Doug Anderson @ 2020-04-10 14:52 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Maulik Shah, Bjorn Andersson, Evan Green, LKML, linux-arm-msm,
	Andy Gross, Matthias Kaehlcke, Rajendra Nayak, Lina Iyer, lsrao

Hi,

On Thu, Apr 9, 2020 at 9:15 PM Stephen Boyd <swboyd@chromium.org> wrote:
>
> >  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>
> This function name keeps throwing me off. Can we please call it
> something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
> to imply there's some sort of cache going on, but that's not really the
> case. We're programming a couple TCS FIFOs so that they can be used
> across deep CPU sleep states.

I'm hoping this rename can be deferred until Maulik's series and my
cleanup series land.  While I agree that rpmh_flush() is a bit of a
confusing name, it's not a new name and renaming it midway through
when there are a bunch of patches in-flight is going to be a hassle.

Assuming others agree, my thought is that Maulik will do one more v17
spin with small nits fixed up, then his series can land early next
week when (presumably) -rc1 comes out.  If my current cleanup doesn't
apply cleanly (or if Bjorn/Andy don't want to fix the two nits in
commit messages when applying) I can repost my series and that can
land in short order.  Once those land:

* Maulik can post this rpmh_flush() rename atop.

* I can post the patch to remove the "pm_lock" that was introduced in
this series.  A preview at <https://crrev.com/c/2142823>.

* Maulik can post some of the cleanups that Maulik wanted to do in
rpmh.c with regards to __fill_rpmh_msg()

...assuming those are not controversial perhaps they can be reviewed
quickly and land quickly?  I suppose I can always dream...


-Doug

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v16 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
  2020-04-09 20:23   ` Stephen Boyd
@ 2020-04-12 13:25     ` Maulik Shah
  0 siblings, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-12 13:25 UTC (permalink / raw)
  To: Stephen Boyd, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao

Hi,

On 4/10/2020 1:53 AM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-04-05 23:32:18)
>> TCSes have previously programmed data when rpmh_flush is called.
> rpmh_flush()
Ok, I will update this in v17.
>
>> This can cause old data to trigger along with newly flushed.
>>
>> Fix this by cleaning SLEEP and WAKE TCSes before new data is flushed.
>>
>> With this there is no need to invoke rpmh_rsc_invalidate() call from
>> rpmh_invalidate().
>>
>> Simplify rpmh_invalidate() by moving invalidate_batch() inside.
>>
>> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Douglas Anderson <dianders@chromium.org>
>> ---
>>   drivers/soc/qcom/rpmh.c | 41 ++++++++++++++++++-----------------------
>>   1 file changed, 18 insertions(+), 23 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index 03630ae..a75f3df 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -498,24 +492,25 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>   }
>>   
>>   /**
>> - * rpmh_invalidate: Invalidate all sleep and active sets
>> - * sets.
>> + * rpmh_invalidate: Invalidate sleep and wake sets in batch_cache
>>    *
>>    * @dev: The device making the request
>>    *
>> - * Invalidate the sleep and active values in the TCS blocks.
>> + * Invalidate the sleep and wake values in batch_cache.
>>    */
>>   int rpmh_invalidate(const struct device *dev)
>>   {
>>          struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
>> -       int ret;
>> -
>> -       invalidate_batch(ctrlr);
>> +       struct batch_cache_req *req, *tmp;
>> +       unsigned long flags;
>>   
>> -       do {
>> -               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>> -       } while (ret == -EAGAIN);
>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> +       list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>> +               kfree(req);
>> +       INIT_LIST_HEAD(&ctrlr->batch_cache);
>> +       ctrlr->dirty = true;
>> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>   
>> -       return ret;
>> +       return 0;
> Now this always returns 0. Maybe it should become a void function, but
> doing that requires a change in the interconnect code so maybe do it
> later.

Sure, I will add it to my To-Do list.
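
For reference, the void variant would look roughly like this (just a
sketch; the interconnect caller would need a matching update):

	void rpmh_invalidate(const struct device *dev)
	{
		struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
		struct batch_cache_req *req, *tmp;
		unsigned long flags;

		/* Drop all cached batch requests and mark the controller dirty */
		spin_lock_irqsave(&ctrlr->cache_lock, flags);
		list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
			kfree(req);
		INIT_LIST_HEAD(&ctrlr->batch_cache);
		ctrlr->dirty = true;
		spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
	}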

Thanks,
Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-09  8:16       ` Stephen Boyd
@ 2020-04-12 13:51         ` Maulik Shah
  0 siblings, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-12 13:51 UTC (permalink / raw)
  To: Stephen Boyd, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao

Hi,

On 4/9/2020 1:46 PM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-04-08 00:08:48)
>> Hi,
>>
>> On 4/8/2020 8:20 AM, Stephen Boyd wrote:
>>> Quoting Maulik Shah (2020-04-05 23:32:19)
>>>> for CPU PM notification. They may be in autonomous mode executing
>>>> low power mode and do not require rpmh_flush() to happen from CPU
>>>> PM notification.
>>>>
>>>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>>>> Reviewed-by: Douglas Anderson <dianders@chromium.org>
>>>> ---
>>>>    drivers/soc/qcom/rpmh-internal.h |  25 +++++---
>>>>    drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
>>>>    drivers/soc/qcom/rpmh.c          |  26 +++------
>>>>    3 files changed, 137 insertions(+), 37 deletions(-)
>>>>
>>>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>>>> index b718221..fbe1f3e 100644
>>>> --- a/drivers/soc/qcom/rpmh-rsc.c
>>>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>>>> @@ -6,6 +6,7 @@
>>> [...]
>>>> +
>>>> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
>>>> +                                   unsigned long action, void *v)
>>>> +{
>>>> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
>>>> +       int ret = NOTIFY_OK;
>>>> +
>>>> +       spin_lock(&drv->pm_lock);
>>>> +
>>>> +       switch (action) {
>>>> +       case CPU_PM_ENTER:
>>> I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
>>> least, the genpd work that has gone on for cpuidle could be used here in
>>> place of CPU_PM notifiers?
>> genpd was used in v3 and v4 of this series, where from pd's .power_off
>> function, rpmh_flush() was invoked.
>>
>> genpd can be useful if target firmware supports PSCI's OSI mode, while
>> sc7180 is non-OSI target.
>>
>> The current approach (using cpu pm notification) can be used for both OSI
>> and non-OSI targets to invoke rpmh_flush() when last cpu goes to power down.
> Ok. Doug and I talked today and I re-read the earlier series and I think
> Sudeep was suggesting that if we're doing last man down activities here
> then we're better off using OSI vs. PC mode. But I can only assume
> that's because the concern is something here requires software's help
> for last man down activities like lowering a CPU voltage setting or
> turning off some power switch to a hardware block through some i2c
> message. The way I understand it the last man down activities here are
> just setting up the sleep and wake TCS FIFOs to "do the right thing"
> when the last CPU actually goes down and the first CPU wakes up by
> running through the pile of "instructions" that we program into the
> FIFOs.
>
> The execution of those instructions is all done in hardware so any
> aggregation or coordination between CPUs is not really important here.
> All that matters is that we set up the sleep and wake TCS FIFOs properly
> so that _if_ the whole CPU subsystem goes to sleep we're going to let
> the hardware turn off the right stuff and lower voltages, etc. and
> vice-versa for wake. If we didn't have to share the TCS FIFOs with
> active mode control then we could just tweak the sleep and wake TCS
> buckets at runtime and let the hardware state of the CPUs decide to
> trigger them at the right time.
Correct.
>   Unfortunately, we don't have that luxury
> and we're stuck repurposing the sleep TCS FIFO to control things like
> regulator voltages when the CPU is awake. Yuck!
The RSC has TCS hardware which is currently divided into ACTIVE, SLEEP
and WAKE TCS configurations.

The ACTIVE TCS and the SLEEP + WAKE TCS use cases are mutually
exclusive: when the ACTIVE TCS is in use, the SLEEP + WAKE TCSes are
not (the CPU is already out of its low power mode doing an active-only
transfer, so firmware cannot trigger the deepest low power modes where
the SLEEP and WAKE TCSes are used).

Similarly, when the SLEEP + WAKE TCSes are in use, the ACTIVE TCS is
not (no CPU is in Linux to send an active message).

Given that, some RSCs already have no dedicated ACTIVE TCS; when we
want to send an active-only message we borrow one TCS from the WAKE TCS
pool, configure it for ACTIVE use (e.g. enable the completion IRQ for
that TCS), and once done re-configure it for WAKE-only use again.

So the same functionality is achieved with reduced hardware (no ACTIVE
TCSes). The rpmh driver is designed to support such scenarios; a rough
sketch of the borrow-and-return idea is below.
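
(Illustrative only -- claim_tcs_for_active_req() and enable_tcs_irq()
are placeholder names for this sketch, not the exact code in this
series:)

	static int claim_tcs_for_active_req(struct rsc_drv *drv)
	{
		struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
		int m;

		/* No dedicated ACTIVE TCS: re-purpose a WAKE TCS instead */
		if (!tcs->num_tcs)
			tcs = get_tcs_of_type(drv, WAKE_TCS);

		for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
			if (tcs_is_free(drv, m)) {
				/* Make it behave like an ACTIVE TCS (completion IRQ on) */
				enable_tcs_irq(drv, m, true);
				return m;
			}
		}

		return -EBUSY;
	}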

Thanks,
Maulik
>
>>> And so this isn't actually any different
>>> than what was proposed originally to use genpd for this?
>>>
> I guess this answer to this is yes. Which is fine. CPU PM notifiers are
> still used by various drivers to do things like save/restore state of
> devices that lose state when the CPUs power down. The use of genpd is
> helpful for OSI mode because it can describe how/when big and little
> clusters are powered off by putting them in different genpds. For
> counting the last CPU to turn off it seems simpler to just register for
> CPU PM notifiers and not care about genpd logic and nesting clusters,
> etc. I'm happy to see this not be a blocker.
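
For completeness, the hookup on the rpmh-rsc side is just a
notifier_block registration from probe -- roughly (sketch, not the
exact diff from this series):

	/* drv->rsc_pm is the notifier_block embedded in struct rsc_drv */
	drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
	cpu_pm_register_notifier(&drv->rsc_pm);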

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-10  4:15   ` Stephen Boyd
  2020-04-10 14:52     ` Doug Anderson
@ 2020-04-12 14:04     ` Maulik Shah
  1 sibling, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-12 14:04 UTC (permalink / raw)
  To: Stephen Boyd, bjorn.andersson, dianders, evgreen
  Cc: linux-kernel, linux-arm-msm, agross, mka, rnayak, ilina, lsrao

Hi,

On 4/10/2020 9:45 AM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-04-05 23:32:19)
>> Add changes to invoke rpmh flush() from CPU PM notification.
>> This is done when the last cpu is entering power collapse and
> 'power collapse' is a qcom-ism. Maybe say something like "deep CPU idle
> states"?
Okay, I updated this in v17.
>
>> controller is not busy.
>>
>> Controllers that do have 'HW solver' mode do not need to register
>> for CPU PM notification. They may be in autonomous mode executing
>> low power mode and do not require rpmh_flush() to happen from CPU
>> PM notification.
> Can you provide an example of a HW solver mode controller? Presumably
> the display RSC is one of these?
Sure, I added a mention of the display RSC in v17.
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Douglas Anderson <dianders@chromium.org>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index b718221..fbe1f3e 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -521,8 +527,86 @@ int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
>>          return tcs_ctrl_write(drv, msg);
>>   }
>>   
>> +/**
>> + * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
>> + *
>> + * @drv: The controller
>> + *
>> + * Checks if any of the AMCs are busy in handling ACTIVE sets.
>> + * This is called from the last cpu powering down before flushing
>> + * SLEEP and WAKE sets. If AMCs are busy, controller can not enter
>> + * power collapse, so deny from the last cpu's pm notification.
>> + *
>> + * Return:
>> + * * False             - AMCs are idle
>> + * * True              - AMCs are busy
>> + */
>> +static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
> Can drv be const? Would be nice to make it const in some places in this
> driver.

It can, but it will require changing multiple places to make it const.
Some of those functions are dropped/modified in Doug's cleanup series.
I can do this in a separate patch once this series is pulled in, e.g.
along the lines sketched below.
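
(Sketch only -- the constification ripples into the helpers, and the
prototypes below are approximations, not the current signatures:)

	static bool rpmh_rsc_ctrlr_is_busy(const struct rsc_drv *drv);
	static const struct tcs_group *get_tcs_of_type(const struct rsc_drv *drv,
						       int type);
	static bool tcs_is_free(const struct rsc_drv *drv, int tcs_id);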

>
>> +{
>> +       int m;
>> +       struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
>> +
>> +       /*
>> +        * If we made an active request on a RSC that does not have a
>> +        * dedicated TCS for active state use, then re-purposed wake TCSes
>> +        * should be checked for not busy.
> not busy, because we use wake TCSes for active requests in this case.
Ok, added in v17.
>
>> +        *
>> +        * Since this is called from the last cpu, need not take drv or tcs
>> +        * lock before checking tcs_is_free().
>> +        */
>> +       if (!tcs->num_tcs)
>> +               tcs = get_tcs_of_type(drv, WAKE_TCS);
>> +
>> +       for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
>> +               if (!tcs_is_free(drv, m))
>> +                       return true;
>> +       }
>> +
>> +       return false;
>> +}
> [...]
>> +
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index a75f3df..88f8801 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -433,16 +430,17 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>>   }
>>   
>>   /**
>> - * rpmh_flush: Flushes the buffered active and sleep sets to TCS
>> + * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
>>    *
>> - * @ctrlr: controller making request to flush cached data
>> + * @ctrlr: Controller making request to flush cached data
>>    *
>> - * Return: -EBUSY if the controller is busy, probably waiting on a response
>> - * to a RPMH request sent earlier.
>> + * This function is called from sleep code on the last CPU
>> + * (thus no spinlock needed).
> Might be good to stick a lockdep_assert_irqs_disabled() in this function
> then. That would document that this function should only be called when
> irqs are disabled.

Okay, added lockdep_assert_irqs_disabled(). But this may need to be
dropped when we add support for the display RSC.

>
>>    *
>> - * This function is always called from the sleep code from the last CPU
>> - * that is powering down the entire system. Since no other RPMH API would be
>> - * executing at this time, it is safe to run lockless.
>> + * Return:
>> + * * 0          - Success
>> + * * -EAGAIN    - Retry again
>> + * * Error code - Otherwise
>>    */
>>   int rpmh_flush(struct rpmh_ctrlr *ctrlr)
> This function name keeps throwing me off. Can we please call it
> something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
> to imply there's some sort of cache going on, but that's not really the
> case. We're programming a couple TCS FIFOs so that they can be used
> across deep CPU sleep states.
>
>>   {
>> @@ -455,9 +453,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>          }
>>   
>>          /* Invalidate the TCSes first to avoid stale data */
>> -       do {
>> -               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>> -       } while (ret == -EAGAIN);
>> +       ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>>          if (ret)
>>                  return ret;

Thanks,

Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-04-10 14:52     ` Doug Anderson
@ 2020-04-12 14:10       ` Maulik Shah
  0 siblings, 0 replies; 17+ messages in thread
From: Maulik Shah @ 2020-04-12 14:10 UTC (permalink / raw)
  To: Doug Anderson, Stephen Boyd
  Cc: Bjorn Andersson, Evan Green, LKML, linux-arm-msm, Andy Gross,
	Matthias Kaehlcke, Rajendra Nayak, Lina Iyer, lsrao

Hi,

On 4/10/2020 8:22 PM, Doug Anderson wrote:
> Hi,
>
> On Thu, Apr 9, 2020 at 9:15 PM Stephen Boyd <swboyd@chromium.org> wrote:
>>>   int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>> This function name keeps throwing me off. Can we please call it
>> something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
>> to imply there's some sort of cache going on, but that's not really the
>> case. We're programming a couple TCS FIFOs so that they can be used
>> across deep CPU sleep states.
> I'm hoping this rename can be deferred until Maulik's series and my
> cleanup series land.  While I agree that rpmh_flush() is a bit of a
> confusing name, it's not a new name and renaming it midway through
> when there are a bunch of patches in-flight is going to be a hassle.
>
> Assuming others agree, my thought is that Maulik will do one more v17
> spin with small nits fixed up, then his series can land early next
> week when (presumably) -rc1 comes out.  If my current cleanup doesn't
> apply cleanly (or if Bjorn/Andy don't want to fix the two nits in
> commit messages when applying) I can repost my series and that can
> land in short order.  Once those land:
>
> * Maulik can post this rpmh_flush() rename atop.
>
> * I can post the patch to remove the "pm_lock" that was introduced in
> this series.  A preview at <https://crrev.com/c/2142823>.
>
> * Maulik can post some of the cleanups that Maulik wanted to do in
> rpmh.c with regards to __fill_rpmh_msg()
>
> ...assuming those are not controversial perhaps they can be reviewed
> quickly and land quickly?  I suppose I can always dream...
>
>
> -Doug

Agreed, I can defer the rename until this series lands and then post
the above listed changes.

Thanks,
Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-04-12 14:11 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-06  6:32 [PATCH v16 0/6] Invoke rpmh_flush for non OSI targets Maulik Shah
2020-04-06  6:32 ` [PATCH v16 1/6] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
2020-04-06  6:32 ` [PATCH v16 2/6] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
2020-04-06  6:32 ` [PATCH v16 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
2020-04-09 20:23   ` Stephen Boyd
2020-04-12 13:25     ` Maulik Shah
2020-04-06  6:32 ` [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
2020-04-08  2:50   ` Stephen Boyd
2020-04-08  7:08     ` Maulik Shah
2020-04-09  8:16       ` Stephen Boyd
2020-04-12 13:51         ` Maulik Shah
2020-04-10  4:15   ` Stephen Boyd
2020-04-10 14:52     ` Doug Anderson
2020-04-12 14:10       ` Maulik Shah
2020-04-12 14:04     ` Maulik Shah
2020-04-06  6:32 ` [PATCH v16 5/6] soc: qcom: rpmh-rsc: Clear active mode configuration for wake TCS Maulik Shah
2020-04-06  6:32 ` [PATCH v16 6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request Maulik Shah

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).