linux-arm-msm.vger.kernel.org archive mirror
* [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets
@ 2020-03-09  9:30 Maulik Shah
  2020-03-09  9:30 ` [PATCH v13 1/5] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
                   ` (5 more replies)
  0 siblings, 6 replies; 24+ messages in thread
From: Maulik Shah @ 2020-03-09  9:30 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah

Changes in v13:
- Address Stephen's comment to maintain COMPILE_TEST
- Address Doug's comments and add new APIs for start and end transaction

Changes in v12:
- Kconfig change to remove COMPILE_TEST was dropped in v11, reinclude it.

Changes in v11:
- Address Doug's comments on change 2 and 3
- Include change to invalidate TCSes before flush from [4]

Changes in v10:
- Address Evan's comments to update commit message on change 2
- Add Evan's Reviewed by on change 2
- Remove comment from rpmh_flush() related to last CPU invoking it
- Rebase all changes on top of next-20200302

Changes in v9:
- Keep rpmh_flush() to invoke from within cache_lock
- Remove comments related to only last cpu invoking rpmh_flush()

Changes in v8:
- Address Stephen's comments on changes 2 and 3
- Add Reviewed by from Stephen on change 1

Changes in v7:
- Address Srinivas's comments to update commit text
- Add Reviewed by from Srinivas

Changes in v6:
- Drop 1 & 2 changes from v5 as they already landed in maintainer tree
- Drop changes 3 & 4 from v5 as there is no user at present for the power domain in rsc
- Rename the subject accordingly since the power domain changes are dropped
- Rebase other changes on top of next-20200221

Changes in v5:
- Add Rob's Acked by on dt-bindings change
- Drop firmware psci change
- Update cpuidle stats in dtsi to follow PC mode
- Include change to update dirty flag when data is updated from [4]
- Add change to invoke rpmh_flush when caches are dirty

Changes in v4:
- Add change to allow hierarchical topology in PC mode
- Drop hierarchical domain idle states converter from v3
- Merge sc7180 dtsi change to add low power modes

Changes in v3:
- Address Rob's comment on dt property value
- Address Stephen's comments on rpmh-rsc driver change
- Include sc7180 cpuidle low power mode changes from [1]
- Include hierarchical domain idle states converter change from [2]

Changes in v2:
- Add Stephen's Reviewed-By to the first three patches
- Addressed Stephen's comments on fourth patch
- Include changes to connect rpmh domain to cpuidle and genpds

Resource State Coordinator (RSC) is responsible for powering off or
lowering the requirements from the CPU subsystem for associated hardware
such as buses, clocks, and regulators when all CPUs and the cluster are
powered down.

The RSC power domain uses the last-man activities provided by the genpd
framework, based on Ulf Hansson's patch series[3], when the cluster of
CPUs enters its deepest idle state. As part of domain poweroff, RSC can
lower resource state requirements by flushing the cached sleep and wake
state votes for various resources.

[1] https://patchwork.kernel.org/patch/11218965
[2] https://patchwork.kernel.org/patch/10941671
[3] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=222355
[4] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=236503

Maulik Shah (3):
  arm64: dts: qcom: sc7180: Add cpuidle low power states
  soc: qcom: rpmh: Update dirty flag only when data changes
  soc: qcom: rpmh: Invoke rpmh_flush for dirty caches

 arch/arm64/boot/dts/qcom/sc7180.dtsi | 78 ++++++++++++++++++++++++++++++++++++
 drivers/soc/qcom/rpmh.c              | 27 ++++++++++---
 2 files changed, 100 insertions(+), 5 deletions(-)

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v13 1/5] arm64: dts: qcom: sc7180: Add cpuidle low power states
  2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
@ 2020-03-09  9:30 ` Maulik Shah
  2020-03-09  9:30 ` [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 24+ messages in thread
From: Maulik Shah @ 2020-03-09  9:30 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah, devicetree

Add device bindings for cpuidle states for cpu devices.

Cc: devicetree@vger.kernel.org
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
---
 arch/arm64/boot/dts/qcom/sc7180.dtsi | 78 ++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
index 253274d..f5c08ce 100644
--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
@@ -94,6 +94,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x0>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_0>;
@@ -113,6 +116,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x100>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_100>;
@@ -129,6 +135,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x200>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_200>;
@@ -145,6 +154,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x300>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_300>;
@@ -161,6 +173,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x400>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_400>;
@@ -177,6 +192,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x500>;
 			enable-method = "psci";
+			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+					   &LITTLE_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 			next-level-cache = <&L2_500>;
@@ -193,6 +211,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x600>;
 			enable-method = "psci";
+			cpu-idle-states = <&BIG_CPU_SLEEP_0
+					   &BIG_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1740>;
 			dynamic-power-coefficient = <405>;
 			next-level-cache = <&L2_600>;
@@ -209,6 +230,9 @@
 			compatible = "arm,armv8";
 			reg = <0x0 0x700>;
 			enable-method = "psci";
+			cpu-idle-states = <&BIG_CPU_SLEEP_0
+					   &BIG_CPU_SLEEP_1
+					   &CLUSTER_SLEEP_0>;
 			capacity-dmips-mhz = <1740>;
 			dynamic-power-coefficient = <405>;
 			next-level-cache = <&L2_700>;
@@ -255,6 +279,60 @@
 				};
 			};
 		};
+
+		idle-states {
+			entry-method = "psci";
+
+			LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "little-power-down";
+				arm,psci-suspend-param = <0x40000003>;
+				entry-latency-us = <549>;
+				exit-latency-us = <901>;
+				min-residency-us = <1774>;
+				local-timer-stop;
+			};
+
+			LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
+				compatible = "arm,idle-state";
+				idle-state-name = "little-rail-power-down";
+				arm,psci-suspend-param = <0x40000004>;
+				entry-latency-us = <702>;
+				exit-latency-us = <915>;
+				min-residency-us = <4001>;
+				local-timer-stop;
+			};
+
+			BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "big-power-down";
+				arm,psci-suspend-param = <0x40000003>;
+				entry-latency-us = <523>;
+				exit-latency-us = <1244>;
+				min-residency-us = <2207>;
+				local-timer-stop;
+			};
+
+			BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
+				compatible = "arm,idle-state";
+				idle-state-name = "big-rail-power-down";
+				arm,psci-suspend-param = <0x40000004>;
+				entry-latency-us = <526>;
+				exit-latency-us = <1854>;
+				min-residency-us = <5555>;
+				local-timer-stop;
+			};
+
+			CLUSTER_SLEEP_0: cluster-sleep-0 {
+				compatible = "arm,idle-state";
+				idle-state-name = "cluster-power-down";
+				arm,psci-suspend-param = <0x40003444>;
+				entry-latency-us = <3263>;
+				exit-latency-us = <6562>;
+				min-residency-us = <9926>;
+				local-timer-stop;
+			};
+		};
 	};
 
 	memory@80000000 {
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes
  2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
  2020-03-09  9:30 ` [PATCH v13 1/5] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
@ 2020-03-09  9:30 ` Maulik Shah
  2020-03-09 23:42   ` Doug Anderson
  2020-03-09  9:30 ` [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 24+ messages in thread
From: Maulik Shah @ 2020-03-09  9:30 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah

Currently the rpmh ctrlr dirty flag is set in all cases, regardless of
whether the data actually changed or not. Update the dirty flag only
when the data changes to a newer value. Update the dirty flag every time
the data in the batch cache is updated, since rpmh_flush() may be
invoked from any CPU instead of only the last CPU going to low power
mode.

Also move the dirty flag updates to happen from within the cache_lock,
and remove an unnecessary INIT_LIST_HEAD() call and a default case from
the switch.
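The dirty-flag rule can be modeled in plain userspace C (a sketch only,
not the kernel code; `model_req`, `model_cache_update()` and
`MODEL_UNSET` are invented stand-ins for struct cache_req,
cache_rpm_request() and UINT_MAX):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MODEL_UNSET UINT32_MAX	/* stands in for UINT_MAX: "no vote yet" */

/* Invented stand-in for one cached RPMH request. */
struct model_req {
	uint32_t sleep_val;
	uint32_t wake_val;
};

enum model_state { MODEL_ACTIVE, MODEL_WAKE, MODEL_SLEEP };

/*
 * Mirrors the patch: record the old values, apply the update, and
 * report "dirty" only when a value really changed and both the sleep
 * and wake votes have been set.
 */
static bool model_cache_update(struct model_req *req,
			       enum model_state state, uint32_t data)
{
	uint32_t old_sleep = req->sleep_val;
	uint32_t old_wake = req->wake_val;

	switch (state) {
	case MODEL_ACTIVE:
	case MODEL_WAKE:
		req->wake_val = data;
		break;
	case MODEL_SLEEP:
		req->sleep_val = data;
		break;
	}

	return (req->sleep_val != old_sleep || req->wake_val != old_wake) &&
	       req->sleep_val != MODEL_UNSET && req->wake_val != MODEL_UNSET;
}
```

As in the v13 patch, the flag is recomputed on each call rather than
OR-ed into its previous value, so a repeated write of the same value
reports the controller as clean.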

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
---
 drivers/soc/qcom/rpmh.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index eb0ded0..03630ae 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -119,6 +119,7 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
 {
 	struct cache_req *req;
 	unsigned long flags;
+	u32 old_sleep_val, old_wake_val;
 
 	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	req = __find_req(ctrlr, cmd->addr);
@@ -133,26 +134,27 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
 
 	req->addr = cmd->addr;
 	req->sleep_val = req->wake_val = UINT_MAX;
-	INIT_LIST_HEAD(&req->list);
 	list_add_tail(&req->list, &ctrlr->cache);
 
 existing:
+	old_sleep_val = req->sleep_val;
+	old_wake_val = req->wake_val;
+
 	switch (state) {
 	case RPMH_ACTIVE_ONLY_STATE:
-		if (req->sleep_val != UINT_MAX)
-			req->wake_val = cmd->data;
-		break;
 	case RPMH_WAKE_ONLY_STATE:
 		req->wake_val = cmd->data;
 		break;
 	case RPMH_SLEEP_STATE:
 		req->sleep_val = cmd->data;
 		break;
-	default:
-		break;
 	}
 
-	ctrlr->dirty = true;
+	ctrlr->dirty = (req->sleep_val != old_sleep_val ||
+			req->wake_val != old_wake_val) &&
+			req->sleep_val != UINT_MAX &&
+			req->wake_val != UINT_MAX;
+
 unlock:
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
@@ -287,6 +289,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
 
 	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_add_tail(&req->list, &ctrlr->batch_cache);
+	ctrlr->dirty = true;
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }
 
@@ -323,6 +326,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
 	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
 		kfree(req);
 	INIT_LIST_HEAD(&ctrlr->batch_cache);
+	ctrlr->dirty = true;
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }
 
@@ -507,7 +511,6 @@ int rpmh_invalidate(const struct device *dev)
 	int ret;
 
 	invalidate_batch(ctrlr);
-	ctrlr->dirty = true;
 
 	do {
 		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
  2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
  2020-03-09  9:30 ` [PATCH v13 1/5] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
  2020-03-09  9:30 ` [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
@ 2020-03-09  9:30 ` Maulik Shah
  2020-03-09 23:42   ` Doug Anderson
  2020-03-09  9:30 ` [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 24+ messages in thread
From: Maulik Shah @ 2020-03-09  9:30 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah

TCSes may still hold previously programmed data when rpmh_flush() is
called. This can cause stale requests to trigger along with the newly
flushed ones.

Fix this by invalidating the SLEEP and WAKE TCSes before new data is
flushed.

With this there is no need to invoke rpmh_rsc_invalidate() from
rpmh_invalidate().

Simplify rpmh_invalidate() by folding invalidate_batch() into it.
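The invalidate-before-flush step retries while the RSC reports -EAGAIN
(a TCS still in use). A minimal userspace sketch of that retry loop,
with an invented `fake_rsc_invalidate()` standing in for
rpmh_rsc_invalidate():

```c
#include <assert.h>
#include <errno.h>

/* Mock: pretend the first few attempts find a TCS still in use. */
static int busy_attempts_left;

static int fake_rsc_invalidate(void)
{
	if (busy_attempts_left > 0) {
		busy_attempts_left--;
		return -EAGAIN;		/* TCS busy; caller should retry */
	}
	return 0;			/* sleep/wake TCSes cleared */
}

/* Same shape as the loop the patch adds at the top of rpmh_flush(). */
static int model_invalidate_retry(void)
{
	int ret;

	do {
		ret = fake_rsc_invalidate();
	} while (ret == -EAGAIN);

	return ret;
}
```

The kernel loop relies on the controller going idle so that the
invalidate eventually succeeds.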

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
---
 drivers/soc/qcom/rpmh.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 03630ae..5bed8f4 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -317,19 +317,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
 	return ret;
 }
 
-static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
-{
-	struct batch_cache_req *req, *tmp;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
-	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
-		kfree(req);
-	INIT_LIST_HEAD(&ctrlr->batch_cache);
-	ctrlr->dirty = true;
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
-}
-
 /**
  * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
  * batch to finish.
@@ -467,6 +454,11 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 		return 0;
 	}
 
+	/* Invalidate the TCSes first to avoid stale data */
+	do {
+		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
+	} while (ret == -EAGAIN);
+
 	/* First flush the cached batch requests */
 	ret = flush_batch(ctrlr);
 	if (ret)
@@ -503,19 +495,21 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
  *
  * @dev: The device making the request
  *
- * Invalidate the sleep and active values in the TCS blocks.
+ * Invalidate the sleep and wake values in batch_cache.
  */
 int rpmh_invalidate(const struct device *dev)
 {
 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
-	int ret;
-
-	invalidate_batch(ctrlr);
+	struct batch_cache_req *req, *tmp;
+	unsigned long flags;
 
-	do {
-		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-	} while (ret == -EAGAIN);
+	spin_lock_irqsave(&ctrlr->cache_lock, flags);
+	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
+		kfree(req);
+	INIT_LIST_HEAD(&ctrlr->batch_cache);
+	ctrlr->dirty = true;
+	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(rpmh_invalidate);
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
                   ` (2 preceding siblings ...)
  2020-03-09  9:30 ` [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
@ 2020-03-09  9:30 ` Maulik Shah
  2020-03-09 23:43   ` Doug Anderson
  2020-03-09  9:30 ` [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions Maulik Shah
  2020-03-09 23:42 ` [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Doug Anderson
  5 siblings, 1 reply; 24+ messages in thread
From: Maulik Shah @ 2020-03-09  9:30 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah

Invoke rpmh_flush() from within the cache_lock when the data in the
cache is dirty.

Introduce two new APIs for this. Clients call rpmh_start_transaction()
before any rpmh transaction and rpmh_end_transaction() once done; the
latter internally invokes rpmh_flush() if the cache has become dirty.

Add support to control this with a flush_dirty flag.
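The client counting can be sketched as a userspace model (invented
`model_*` names; no locking here, whereas the kernel holds cache_lock
around these updates, and it is assumed a successful flush clears the
dirty flag):

```c
#include <assert.h>
#include <stdbool.h>

struct model_ctrlr {
	unsigned int active_clients;	/* transactions in progress */
	bool dirty;			/* cache updated since last flush */
	bool flush_dirty;		/* flush immediately when dirty */
	int flushes;			/* counts flushes, for the test */
};

static void model_start_transaction(struct model_ctrlr *c)
{
	if (!c->flush_dirty)
		return;
	c->active_clients++;
}

static int model_end_transaction(struct model_ctrlr *c)
{
	if (!c->flush_dirty)
		return 0;

	c->active_clients--;
	if (c->dirty && !c->active_clients) {
		c->flushes++;		/* stands in for rpmh_flush(ctrlr) */
		c->dirty = false;	/* assumed: flush cleans the cache */
	}
	return 0;
}
```

Only the last client out triggers the flush, which is what lets any CPU,
not just the last one entering low power mode, leave the sleep/wake
votes flushed.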

Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
---
 drivers/soc/qcom/rpmh-internal.h |  4 +++
 drivers/soc/qcom/rpmh-rsc.c      |  6 +++-
 drivers/soc/qcom/rpmh.c          | 64 ++++++++++++++++++++++++++++++++--------
 include/soc/qcom/rpmh.h          | 10 +++++++
 4 files changed, 71 insertions(+), 13 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 6eec32b..d36be3d 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -70,13 +70,17 @@ struct rpmh_request {
  *
  * @cache: the list of cached requests
  * @cache_lock: synchronize access to the cache data
+ * @active_clients: count of rpmh transactions in progress
  * @dirty: was the cache updated since flush
+ * @flush_dirty: if the dirty cache needs an immediate flush
  * @batch_cache: Cache sleep and wake requests sent as batch
  */
 struct rpmh_ctrlr {
 	struct list_head cache;
 	spinlock_t cache_lock;
+	u32 active_clients;
 	bool dirty;
+	bool flush_dirty;
 	struct list_head batch_cache;
 };
 
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index e278fc1..b6391e1 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -61,6 +61,8 @@
 #define CMD_STATUS_ISSUED		BIT(8)
 #define CMD_STATUS_COMPL		BIT(16)
 
+#define FLUSH_DIRTY			1
+
 static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id)
 {
 	return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
@@ -670,13 +672,15 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
 	INIT_LIST_HEAD(&drv->client.cache);
 	INIT_LIST_HEAD(&drv->client.batch_cache);
 
+	drv->client.flush_dirty = device_get_match_data(&pdev->dev);
+
 	dev_set_drvdata(&pdev->dev, drv);
 
 	return devm_of_platform_populate(&pdev->dev);
 }
 
 static const struct of_device_id rpmh_drv_match[] = {
-	{ .compatible = "qcom,rpmh-rsc", },
+	{ .compatible = "qcom,rpmh-rsc", .data = (void *)FLUSH_DIRTY },
 	{ }
 };
 
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 5bed8f4..9d40209 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -297,12 +297,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
 {
 	struct batch_cache_req *req;
 	const struct rpmh_request *rpm_msg;
-	unsigned long flags;
 	int ret = 0;
 	int i;
 
 	/* Send Sleep/Wake requests to the controller, expect no response */
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_for_each_entry(req, &ctrlr->batch_cache, list) {
 		for (i = 0; i < req->count; i++) {
 			rpm_msg = req->rpm_msgs + i;
@@ -312,7 +310,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
 				break;
 		}
 	}
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
 	return ret;
 }
@@ -433,16 +430,63 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
 }
 
 /**
+ * rpmh_start_transaction: Indicates the start of an rpmh transaction. It
+ * must be ended by invoking rpmh_end_transaction().
+ *
+ * @dev: the device making the request
+ */
+void rpmh_start_transaction(const struct device *dev)
+{
+	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
+	unsigned long flags;
+
+	if (!ctrlr->flush_dirty)
+		return;
+
+	spin_lock_irqsave(&ctrlr->cache_lock, flags);
+	ctrlr->active_clients++;
+	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+}
+EXPORT_SYMBOL(rpmh_start_transaction);
+
+/**
+ * rpmh_end_transaction: Indicates the end of an rpmh transaction. Any dirty
+ * data in the cache is flushed immediately when ctrlr->flush_dirty is set.
+ *
+ * @dev: the device making the request
+ *
+ * Return: 0 on success, error number otherwise.
+ */
+int rpmh_end_transaction(const struct device *dev)
+{
+	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
+	unsigned long flags;
+	int ret = 0;
+
+	if (!ctrlr->flush_dirty)
+		return ret;
+
+	spin_lock_irqsave(&ctrlr->cache_lock, flags);
+
+	ctrlr->active_clients--;
+	if (ctrlr->dirty && !ctrlr->active_clients)
+		ret = rpmh_flush(ctrlr);
+
+	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(rpmh_end_transaction);
+
+/**
  * rpmh_flush: Flushes the buffered active and sleep sets to TCS
  *
  * @ctrlr: controller making request to flush cached data
  *
- * Return: -EBUSY if the controller is busy, probably waiting on a response
- * to a RPMH request sent earlier.
+ * Return: 0 on success, error number otherwise.
  *
- * This function is always called from the sleep code from the last CPU
- * that is powering down the entire system. Since no other RPMH API would be
- * executing at this time, it is safe to run lockless.
+ * This function can either be called from sleep code on the last CPU
+ * (thus no spinlock needed) or with the ctrlr->cache_lock already held.
  */
 int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 {
@@ -464,10 +508,6 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 	if (ret)
 		return ret;
 
-	/*
-	 * Nobody else should be calling this function other than system PM,
-	 * hence we can run without locks.
-	 */
 	list_for_each_entry(p, &ctrlr->cache, list) {
 		if (!is_req_valid(p)) {
 			pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index f9ec353..85e1ab2 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -22,6 +22,10 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
 
 int rpmh_invalidate(const struct device *dev);
 
+void rpmh_start_transaction(const struct device *dev);
+
+int rpmh_end_transaction(const struct device *dev);
+
 #else
 
 static inline int rpmh_write(const struct device *dev, enum rpmh_state state,
@@ -41,6 +45,12 @@ static inline int rpmh_write_batch(const struct device *dev,
 static inline int rpmh_invalidate(const struct device *dev)
 { return -ENODEV; }
 
+static inline void rpmh_start_transaction(const struct device *dev)
+{ }
+
+static inline int rpmh_end_transaction(const struct device *dev)
+{ return -ENODEV; }
+
 #endif /* CONFIG_QCOM_RPMH */
 
 #endif /* __SOC_QCOM_RPMH_H__ */
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions
  2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
                   ` (3 preceding siblings ...)
  2020-03-09  9:30 ` [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
@ 2020-03-09  9:30 ` Maulik Shah
  2020-03-09 23:44   ` Doug Anderson
  2020-03-11 23:02   ` Doug Anderson
  2020-03-09 23:42 ` [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Doug Anderson
  5 siblings, 2 replies; 24+ messages in thread
From: Maulik Shah @ 2020-03-09  9:30 UTC (permalink / raw)
  To: swboyd, mka, evgreen, bjorn.andersson
  Cc: linux-kernel, linux-arm-msm, agross, dianders, rnayak, ilina,
	lsrao, Maulik Shah, Taniya Das, Odelu Kukatla, Kiran Gunda,
	Sibi Sankar

Update all rpmh clients to start using rpmh_start_transaction() and
rpmh_end_transaction().
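Every client change in this patch has the same shape: bracket the rpmh
write(s) with the new calls and fold the end-transaction result into the
return value. A hedged userspace sketch with invented `fake_*` stubs in
place of the real rpmh APIs:

```c
#include <assert.h>

/* Invented stubs standing in for the rpmh transaction APIs. */
static int in_transaction;	/* depth of start/end bracketing */
static int writes_issued;	/* writes seen inside a transaction */

static void fake_start_transaction(void) { in_transaction++; }
static int  fake_end_transaction(void)  { in_transaction--; return 0; }

static int fake_write_async(void)
{
	if (in_transaction <= 0)
		return -1;	/* write outside a transaction: refused */
	writes_issued++;
	return 0;
}

/* The shape shared by the clk, icc, regulator and rpmhpd changes. */
static int model_client_send(void)
{
	int ret;

	fake_start_transaction();
	ret = fake_write_async();
	ret |= fake_end_transaction();	/* fold in any flush error */

	return ret;
}
```

OR-ing the two error codes follows the posted patch; it loses which call
failed but guarantees a nonzero return if either did.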

Cc: Taniya Das <tdas@codeaurora.org>
Cc: Odelu Kukatla <okukatla@codeaurora.org>
Cc: Kiran Gunda <kgunda@codeaurora.org>
Cc: Sibi Sankar <sibis@codeaurora.org>
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
---
 drivers/clk/qcom/clk-rpmh.c             | 21 ++++++++++++++-------
 drivers/interconnect/qcom/bcm-voter.c   | 13 +++++++++----
 drivers/regulator/qcom-rpmh-regulator.c |  4 ++++
 drivers/soc/qcom/rpmhpd.c               | 11 +++++++++--
 4 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
index 12bd871..16f68d4 100644
--- a/drivers/clk/qcom/clk-rpmh.c
+++ b/drivers/clk/qcom/clk-rpmh.c
@@ -154,22 +154,27 @@ static int clk_rpmh_send_aggregate_command(struct clk_rpmh *c)
 	cmd_state = c->aggr_state;
 	on_val = c->res_on_val;
 
+	rpmh_start_transaction(c->dev);
+
 	for (; state <= RPMH_ACTIVE_ONLY_STATE; state++) {
 		if (has_state_changed(c, state)) {
 			if (cmd_state & BIT(state))
 				cmd.data = on_val;
 
 			ret = rpmh_write_async(c->dev, state, &cmd, 1);
-			if (ret) {
-				dev_err(c->dev, "set %s state of %s failed: (%d)\n",
-					!state ? "sleep" :
-					state == RPMH_WAKE_ONLY_STATE	?
-					"wake" : "active", c->res_name, ret);
-				return ret;
-			}
+			if (ret)
+				break;
 		}
 	}
 
+	ret |= rpmh_end_transaction(c->dev);
+	if (ret) {
+		dev_err(c->dev, "set %s state of %s failed: (%d)\n",
+			!state ? "sleep" : state == RPMH_WAKE_ONLY_STATE ?
+			"wake" : "active", c->res_name, ret);
+		return ret;
+	}
+
 	c->last_sent_aggr_state = c->aggr_state;
 	c->peer->last_sent_aggr_state =  c->last_sent_aggr_state;
 
@@ -267,7 +272,9 @@ static int clk_rpmh_bcm_send_cmd(struct clk_rpmh *c, bool enable)
 	cmd.addr = c->res_addr;
 	cmd.data = BCM_TCS_CMD(1, enable, 0, cmd_state);
 
+	rpmh_start_transaction(c->dev);
 	ret = rpmh_write_async(c->dev, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);
+	ret |= rpmh_end_transaction(c->dev);
 	if (ret) {
 		dev_err(c->dev, "set active state of %s failed: (%d)\n",
 			c->res_name, ret);
diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
index 2adfde8..fbe18b2 100644
--- a/drivers/interconnect/qcom/bcm-voter.c
+++ b/drivers/interconnect/qcom/bcm-voter.c
@@ -263,7 +263,9 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
 	tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_AMC, cmds, commit_idx);
 
 	if (!commit_idx[0])
-		goto out;
+		goto end;
+
+	rpmh_start_transaction(voter->dev);
 
 	ret = rpmh_invalidate(voter->dev);
 	if (ret) {
@@ -312,12 +314,15 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
 	tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_SLEEP, cmds, commit_idx);
 
 	ret = rpmh_write_batch(voter->dev, RPMH_SLEEP_STATE, cmds, commit_idx);
-	if (ret) {
+	if (ret)
 		pr_err("Error sending SLEEP RPMH requests (%d)\n", ret);
-		goto out;
-	}
 
 out:
+	ret = rpmh_end_transaction(voter->dev);
+	if (ret)
+		pr_err("Error ending rpmh transaction (%d)\n", ret);
+
+end:
 	list_for_each_entry_safe(bcm, bcm_tmp, &voter->commit_list, list)
 		list_del_init(&bcm->list);
 
diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
index c86ad40..f4b9176 100644
--- a/drivers/regulator/qcom-rpmh-regulator.c
+++ b/drivers/regulator/qcom-rpmh-regulator.c
@@ -163,12 +163,16 @@ static int rpmh_regulator_send_request(struct rpmh_vreg *vreg,
 {
 	int ret;
 
+	rpmh_start_transaction(vreg->dev);
+
 	if (wait_for_ack || vreg->always_wait_for_ack)
 		ret = rpmh_write(vreg->dev, RPMH_ACTIVE_ONLY_STATE, cmd, 1);
 	else
 		ret = rpmh_write_async(vreg->dev, RPMH_ACTIVE_ONLY_STATE, cmd,
 					1);
 
+	ret |= rpmh_end_transaction(vreg->dev);
+
 	return ret;
 }
 
diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
index 4d264d0..0e9d204 100644
--- a/drivers/soc/qcom/rpmhpd.c
+++ b/drivers/soc/qcom/rpmhpd.c
@@ -193,19 +193,26 @@ static const struct of_device_id rpmhpd_match_table[] = {
 static int rpmhpd_send_corner(struct rpmhpd *pd, int state,
 			      unsigned int corner, bool sync)
 {
+	int ret;
 	struct tcs_cmd cmd = {
 		.addr = pd->addr,
 		.data = corner,
 	};
 
+	rpmh_start_transaction(pd->dev);
+
 	/*
 	 * Wait for an ack only when we are increasing the
 	 * perf state of the power domain
 	 */
 	if (sync)
-		return rpmh_write(pd->dev, state, &cmd, 1);
+		ret = rpmh_write(pd->dev, state, &cmd, 1);
 	else
-		return rpmh_write_async(pd->dev, state, &cmd, 1);
+		ret = rpmh_write_async(pd->dev, state, &cmd, 1);
+
+	ret |= rpmh_end_transaction(pd->dev);
+
+	return ret;
 }
 
 static void to_active_sleep(struct rpmhpd *pd, unsigned int corner,
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets
  2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
                   ` (4 preceding siblings ...)
  2020-03-09  9:30 ` [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions Maulik Shah
@ 2020-03-09 23:42 ` Doug Anderson
  2020-03-10 11:49   ` Maulik Shah
  5 siblings, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-09 23:42 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
> [...]

Did you happen to get a chance to test your patch against my cleanup /
documentation patch?  AKA:

https://lore.kernel.org/r/20200306235951.214678-1-dianders@chromium.org

-Doug

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes
  2020-03-09  9:30 ` [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
@ 2020-03-09 23:42   ` Doug Anderson
  2020-03-13  8:53     ` Maulik Shah
  0 siblings, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-09 23:42 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> Currently the rpmh ctrlr dirty flag is set in all cases, regardless of
> whether the data has really changed or not. Add changes to update the
> dirty flag only when the data changes to newer values. Update the dirty
> flag every time the data in the batch cache is updated, since rpmh_flush()
> may get invoked from any CPU instead of only the last CPU going to low
> power mode.
>
> Also move dirty flag updates to happen from within cache_lock and remove
> unnecessary INIT_LIST_HEAD() call and a default case from switch.
>
> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
> Reviewed-by: Evan Green <evgreen@chromium.org>
> ---
>  drivers/soc/qcom/rpmh.c | 19 +++++++++++--------
>  1 file changed, 11 insertions(+), 8 deletions(-)

Reviewed-by: Douglas Anderson <dianders@chromium.org>


* Re: [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
  2020-03-09  9:30 ` [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
@ 2020-03-09 23:42   ` Doug Anderson
  2020-03-10 11:09     ` Maulik Shah
  0 siblings, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-09 23:42 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> TCSes may contain previously programmed data when rpmh_flush() is called.
> This can cause the old data to trigger along with the newly flushed data.
>
> Fix this by cleaning SLEEP and WAKE TCSes before new data is flushed.
>
> With this there is no need to invoke rpmh_rsc_invalidate() call from
> rpmh_invalidate().
>
> Simplify rpmh_invalidate() by moving invalidate_batch() inside.
>
> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> ---
>  drivers/soc/qcom/rpmh.c | 36 +++++++++++++++---------------------
>  1 file changed, 15 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index 03630ae..5bed8f4 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -317,19 +317,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>         return ret;
>  }
>
> -static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
> -{
> -       struct batch_cache_req *req, *tmp;
> -       unsigned long flags;
> -
> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> -       list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
> -               kfree(req);
> -       INIT_LIST_HEAD(&ctrlr->batch_cache);
> -       ctrlr->dirty = true;
> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> -}
> -
>  /**
>   * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
>   * batch to finish.
> @@ -467,6 +454,11 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>                 return 0;
>         }
>
> +       /* Invalidate the TCSes first to avoid stale data */
> +       do {
> +               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
> +       } while (ret == -EAGAIN);
> +

You forgot to actually check the return value.

if (ret)
  return ret;


>         /* First flush the cached batch requests */
>         ret = flush_batch(ctrlr);
>         if (ret)
> @@ -503,19 +495,21 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>   *
>   * @dev: The device making the request
>   *
> - * Invalidate the sleep and active values in the TCS blocks.
> + * Invalidate the sleep and wake values in batch_cache.

Thanks for updating this.  It was on my todo list.  Can you also
update the function description, which still says "Invalidate all
sleep and active sets sets."  While you're at it, remove the double
"sets".


* Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-09  9:30 ` [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
@ 2020-03-09 23:43   ` Doug Anderson
  2020-03-10 11:19     ` Maulik Shah
  0 siblings, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-09 23:43 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> Add changes to invoke rpmh_flush() from within the cache_lock when the
> data in the cache is dirty.
>
> Introduce two new APIs for this. Clients can use rpmh_start_transaction()
> before any rpmh transaction; once done, they invoke rpmh_end_transaction(),
> which internally invokes rpmh_flush() if the cache has become dirty.
>
> Add support to control this with flush_dirty flag.
>
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
> ---
>  drivers/soc/qcom/rpmh-internal.h |  4 +++
>  drivers/soc/qcom/rpmh-rsc.c      |  6 +++-
>  drivers/soc/qcom/rpmh.c          | 64 ++++++++++++++++++++++++++++++++--------
>  include/soc/qcom/rpmh.h          | 10 +++++++
>  4 files changed, 71 insertions(+), 13 deletions(-)

As mentioned previously but not addressed [3], I believe your series
breaks things if there are zero ACTIVE TCSs and you're using the
immediate-flush solution.  Specifically any attempt to set something's
"active" state will clobber the sleep/wake.  I believe this is hard to
fix, especially if you want rpmh_write_async() to work properly and
need to be robust to the last man going down while rpmh_write_async()
is running but hasn't finished.  My suggestion was to consider it to
be an error at probe time for now.

Actually, though, I'd be super surprised if the "active == 0" case
works anyway.  Aside from subtle problems of not handling -EAGAIN (see
another previous message that you didn't respond to [2]), I think
you'll also get failures because you never enable interrupts in
RSC_DRV_IRQ_ENABLE for anything other than the ACTIVE_TCS.  Thus
you'll never get interrupts saying when your transactions on the
borrowed "wake" TCS finish.

Speaking of previous emails that you didn't respond to, I think you
still have these action items:

* Document that rpmh_write(active) and rpmh_write_async(active) also
updates wake state. [1]

* Change is_req_valid() to still return true if (sleep == wake), or
keep track of "active" and return true if (sleep != wake || wake !=
active). [1]

* Document that for batch a write to active doesn't update wake. [1]


> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> index 6eec32b..d36be3d 100644
> --- a/drivers/soc/qcom/rpmh-internal.h
> +++ b/drivers/soc/qcom/rpmh-internal.h
> @@ -70,13 +70,17 @@ struct rpmh_request {
>   *
>   * @cache: the list of cached requests
>   * @cache_lock: synchronize access to the cache data
> + * @active_clients: count of rpmh transaction in progress
>   * @dirty: was the cache updated since flush
> + * @flush_dirty: if the dirty cache need immediate flush
>   * @batch_cache: Cache sleep and wake requests sent as batch
>   */
>  struct rpmh_ctrlr {
>         struct list_head cache;
>         spinlock_t cache_lock;
> +       u32 active_clients;
>         bool dirty;
> +       bool flush_dirty;
>         struct list_head batch_cache;
>  };
>
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index e278fc1..b6391e1 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -61,6 +61,8 @@
>  #define CMD_STATUS_ISSUED              BIT(8)
>  #define CMD_STATUS_COMPL               BIT(16)
>
> +#define FLUSH_DIRTY                    1
> +
>  static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id)
>  {
>         return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
> @@ -670,13 +672,15 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
>         INIT_LIST_HEAD(&drv->client.cache);
>         INIT_LIST_HEAD(&drv->client.batch_cache);
>
> +       drv->client.flush_dirty = device_get_match_data(&pdev->dev);
> +
>         dev_set_drvdata(&pdev->dev, drv);
>
>         return devm_of_platform_populate(&pdev->dev);
>  }
>
>  static const struct of_device_id rpmh_drv_match[] = {
> -       { .compatible = "qcom,rpmh-rsc", },
> +       { .compatible = "qcom,rpmh-rsc", .data = (void *)FLUSH_DIRTY },

Ick.  This is just confusing.  IMO better to set
'drv->client.flush_dirty = true' directly in probe with a comment
saying that it could be removed if we had OSI.

...and while you're at it, why not fire off a separate patch (not in
your series) adding the stub to 'include/linux/psci.h'.  Then when we
revisit this in a year it'll be there and it'll be super easy to set
the value properly.


>         { }
>  };
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index 5bed8f4..9d40209 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -297,12 +297,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>  {
>         struct batch_cache_req *req;
>         const struct rpmh_request *rpm_msg;
> -       unsigned long flags;
>         int ret = 0;
>         int i;
>
>         /* Send Sleep/Wake requests to the controller, expect no response */
> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>         list_for_each_entry(req, &ctrlr->batch_cache, list) {
>                 for (i = 0; i < req->count; i++) {
>                         rpm_msg = req->rpm_msgs + i;
> @@ -312,7 +310,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>                                 break;
>                 }
>         }
> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>
>         return ret;
>  }
> @@ -433,16 +430,63 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>  }
>
>  /**
> + * rpmh_start_transaction: Indicates start of rpmh transactions, this
> + * must be ended by invoking rpmh_end_transaction().
> + *
> + * @dev: the device making the request
> + */
> +void rpmh_start_transaction(const struct device *dev)
> +{
> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> +       unsigned long flags;
> +
> +       if (!ctrlr->flush_dirty)
> +               return;
> +
> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> +       ctrlr->active_clients++;

Wouldn't hurt to have something like:

/*
 * Detect likely leak; we shouldn't have 1000
 * people making in-flight changes at the same time.
 */
WARN_ON(ctrlr->active_clients > 1000);


> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> +}
> +EXPORT_SYMBOL(rpmh_start_transaction);
> +
> +/**
> + * rpmh_end_transaction: Indicates end of rpmh transactions. All dirty data
> + * in cache can be flushed immediately when ctrlr->flush_dirty is set
> + *
> + * @dev: the device making the request
> + *
> + * Return: 0 on success, error number otherwise.
> + */
> +int rpmh_end_transaction(const struct device *dev)
> +{
> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> +       unsigned long flags;
> +       int ret = 0;
> +
> +       if (!ctrlr->flush_dirty)
> +               return ret;
> +
> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);

WARN_ON(!active_clients);


> +
> +       ctrlr->active_clients--;
> +       if (ctrlr->dirty && !ctrlr->active_clients)
> +               ret = rpmh_flush(ctrlr);

As mentioned previously [2], I don't think it's valid to call
rpmh_flush() with interrupts disabled.  Specifically (as of your
previous patch) rpmh_flush now loops if rpmh_rsc_invalidate() returns
-EAGAIN.  I believe that the caller needs to enable interrupts for a
little bit before trying again.  If the caller doesn't need to enable
interrupts for a little bit before trying again then why was -EAGAIN
even returned?  tcs_invalidate() could have just looped itself and all
the code would be much simpler.


> +
> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> +
> +       return ret;
> +}
> +EXPORT_SYMBOL(rpmh_end_transaction);
> +
> +/**
>   * rpmh_flush: Flushes the buffered active and sleep sets to TCS
>   *
>   * @ctrlr: controller making request to flush cached data
>   *
> - * Return: -EBUSY if the controller is busy, probably waiting on a response
> - * to a RPMH request sent earlier.
> + * Return: 0 on success, error number otherwise.
>   *
> - * This function is always called from the sleep code from the last CPU
> - * that is powering down the entire system. Since no other RPMH API would be
> - * executing at this time, it is safe to run lockless.
> + * This function can either be called from sleep code on the last CPU
> + * (thus no spinlock needed) or with the ctrlr->cache_lock already held.
>   */
>  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>  {
> @@ -464,10 +508,6 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>         if (ret)
>                 return ret;
>
> -       /*
> -        * Nobody else should be calling this function other than system PM,
> -        * hence we can run without locks.
> -        */
>         list_for_each_entry(p, &ctrlr->cache, list) {
>                 if (!is_req_valid(p)) {
>                         pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
> index f9ec353..85e1ab2 100644
> --- a/include/soc/qcom/rpmh.h
> +++ b/include/soc/qcom/rpmh.h
> @@ -22,6 +22,10 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>
>  int rpmh_invalidate(const struct device *dev);
>
> +void rpmh_start_transaction(const struct device *dev);
> +
> +int rpmh_end_transaction(const struct device *dev);
> +
>  #else
>
>  static inline int rpmh_write(const struct device *dev, enum rpmh_state state,
> @@ -41,6 +45,12 @@ static inline int rpmh_write_batch(const struct device *dev,
>  static inline int rpmh_invalidate(const struct device *dev)
>  { return -ENODEV; }
>
> +void rpmh_start_transaction(const struct device *dev)
> +{ return -ENODEV; }

Unexpected return from void function.


> +
> +int rpmh_end_transaction(const struct device *dev)
> +{ return -ENODEV; }
> +
>  #endif /* CONFIG_QCOM_RPMH */
>
>  #endif /* __SOC_QCOM_RPMH_H__ */

[1] https://lore.kernel.org/r/CAD=FV=VzNnRdDN5uPYskJ6kQHq2bAi2ysEqt0=taagdd_qZb-g@mail.gmail.com
[2] https://lore.kernel.org/r/CAD=FV=UYpO2rSOoF-OdZd3jKfSZGKnpQJPoiE5fzH+u1uafS6g@mail.gmail.com
[3] https://lore.kernel.org/r/CAD=FV=VNaqwiti+UB8fLgjF5r2CD2xeF_p7qHS-_yXqf+ZDrBg@mail.gmail.com



-Doug


* Re: [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions
  2020-03-09  9:30 ` [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions Maulik Shah
@ 2020-03-09 23:44   ` Doug Anderson
  2020-03-10 11:46     ` Maulik Shah
  2020-03-11 23:02   ` Doug Anderson
  1 sibling, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-09 23:44 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao, Taniya Das, Odelu Kukatla, Kiran Gunda, Sibi Sankar

Hi,

On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> Update all rpmh clients to start using rpmh_start_transaction() and
> rpmh_end_transaction().
>
> Cc: Taniya Das <tdas@codeaurora.org>
> Cc: Odelu Kukatla <okukatla@codeaurora.org>
> Cc: Kiran Gunda <kgunda@codeaurora.org>
> Cc: Sibi Sankar <sibis@codeaurora.org>
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> ---
>  drivers/clk/qcom/clk-rpmh.c             | 21 ++++++++++++++-------
>  drivers/interconnect/qcom/bcm-voter.c   | 13 +++++++++----
>  drivers/regulator/qcom-rpmh-regulator.c |  4 ++++
>  drivers/soc/qcom/rpmhpd.c               | 11 +++++++++--

This needs to be 4 separate patches since the change to each subsystem
will go through a different maintainer.

Also: it'll be a lot easier to land this if you make the new
rpmh_start_transaction() and rpmh_end_transaction() calls _optional_
for now, especially since they are just a speed optimization and not
for correctness.  That is, if a driver makes a call to rpmh_write(),
rpmh_write_async(), rpmh_write_batch(), or rpmh_invalidate() without
doing rpmh_start_transaction() then it should still work--just flush
right away.  Since you have rpmh_start_transaction() refcounted that's
as simple as making a call to rpmh_start_transaction() at the
beginning of all public calls and rpmh_end_transaction() at the end.
If there was already a refcount then no harm done.  If there wasn't
you'll get a flush at the end.

Once you make the call optional, you can actually leave changing the
callers until after your series lands.  Then you don't end up
bothering all the other maintainers with the back-and-forth.

Once all callers are updated you can make the call required.  ...or
(as noted below) maybe we should just keep it optional...

One last note here: you have a regulator change here but aren't
sending it to the regulator maintainer.  That won't work.  You also
have an interconnect change without sending it to the interconnect
maintainer.


>  4 files changed, 36 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
> index 12bd871..16f68d4 100644
> --- a/drivers/clk/qcom/clk-rpmh.c
> +++ b/drivers/clk/qcom/clk-rpmh.c
> @@ -154,22 +154,27 @@ static int clk_rpmh_send_aggregate_command(struct clk_rpmh *c)
>         cmd_state = c->aggr_state;
>         on_val = c->res_on_val;
>
> +       rpmh_start_transaction(c->dev);
> +
>         for (; state <= RPMH_ACTIVE_ONLY_STATE; state++) {
>                 if (has_state_changed(c, state)) {
>                         if (cmd_state & BIT(state))
>                                 cmd.data = on_val;
>
>                         ret = rpmh_write_async(c->dev, state, &cmd, 1);
> -                       if (ret) {
> -                               dev_err(c->dev, "set %s state of %s failed: (%d)\n",
> -                                       !state ? "sleep" :
> -                                       state == RPMH_WAKE_ONLY_STATE   ?
> -                                       "wake" : "active", c->res_name, ret);
> -                               return ret;
> -                       }
> +                       if (ret)
> +                               break;
>                 }
>         }
>
> +       ret |= rpmh_end_transaction(c->dev);

You can't do this.  "ret" is an integer and you're munging two error
codes into one int.  I don't think there is any clever way to do this,
but probably this would be fine (the compiler should optimize):

if (ret)
  rpmh_end_transaction(c->dev);
else
  ret = rpmh_end_transaction(c->dev);

...or just leave the "dev_err" and "return ret" where they were and
call rpmh_end_transaction() above without looking at the return value.


> +       if (ret) {
> +               dev_err(c->dev, "set %s state of %s failed: (%d)\n",
> +                       !state ? "sleep" : state == RPMH_WAKE_ONLY_STATE ?
> +                       "wake" : "active", c->res_name, ret);
> +               return ret;
> +       }

Technically the error message above is now misleading if the
"end_transaction" failed.  Namely it will blame things on the active
only state whereas that wasn't the problem.


> +
>         c->last_sent_aggr_state = c->aggr_state;
>         c->peer->last_sent_aggr_state =  c->last_sent_aggr_state;
>
> @@ -267,7 +272,9 @@ static int clk_rpmh_bcm_send_cmd(struct clk_rpmh *c, bool enable)
>         cmd.addr = c->res_addr;
>         cmd.data = BCM_TCS_CMD(1, enable, 0, cmd_state);
>
> +       rpmh_start_transaction(c->dev);
>         ret = rpmh_write_async(c->dev, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);
> +       ret |= rpmh_end_transaction(c->dev);

Again, no |=

Also: one argument for keeping start_transaction and end_transaction
optional long term is that you could completely eliminate this change.


>         if (ret) {
>                 dev_err(c->dev, "set active state of %s failed: (%d)\n",
>                         c->res_name, ret);
> diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
> index 2adfde8..fbe18b2 100644
> --- a/drivers/interconnect/qcom/bcm-voter.c
> +++ b/drivers/interconnect/qcom/bcm-voter.c
> @@ -263,7 +263,9 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
>         tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_AMC, cmds, commit_idx);
>
>         if (!commit_idx[0])
> -               goto out;
> +               goto end;
> +
> +       rpmh_start_transaction(voter->dev);
>
>         ret = rpmh_invalidate(voter->dev);
>         if (ret) {
> @@ -312,12 +314,15 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
>         tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_SLEEP, cmds, commit_idx);
>
>         ret = rpmh_write_batch(voter->dev, RPMH_SLEEP_STATE, cmds, commit_idx);
> -       if (ret) {
> +       if (ret)
>                 pr_err("Error sending SLEEP RPMH requests (%d)\n", ret);
> -               goto out;
> -       }
>
>  out:
> +       ret = rpmh_end_transaction(voter->dev);
> +       if (ret)
> +               pr_err("Error ending rpmh transaction (%d)\n", ret);
> +
> +end:

Personally I don't think "out" and "end" are very descriptive.  My own
favorite is to name these types of labels based on what has been done
so far.  So:

exit_started_rpmh_transaction:
exit_constructed_list:


>         list_for_each_entry_safe(bcm, bcm_tmp, &voter->commit_list, list)
>                 list_del_init(&bcm->list);
>
> diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
> index c86ad40..f4b9176 100644
> --- a/drivers/regulator/qcom-rpmh-regulator.c
> +++ b/drivers/regulator/qcom-rpmh-regulator.c
> @@ -163,12 +163,16 @@ static int rpmh_regulator_send_request(struct rpmh_vreg *vreg,
>  {
>         int ret;
>
> +       rpmh_start_transaction(vreg->dev);
> +
>         if (wait_for_ack || vreg->always_wait_for_ack)
>                 ret = rpmh_write(vreg->dev, RPMH_ACTIVE_ONLY_STATE, cmd, 1);
>         else
>                 ret = rpmh_write_async(vreg->dev, RPMH_ACTIVE_ONLY_STATE, cmd,
>                                         1);
>
> +       ret |= rpmh_end_transaction(vreg->dev);

Again, no |=.

...and again, if starting/ending was optional you wouldn't need this change.


> +
>         return ret;
>  }
>
> diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
> index 4d264d0..0e9d204 100644
> --- a/drivers/soc/qcom/rpmhpd.c
> +++ b/drivers/soc/qcom/rpmhpd.c
> @@ -193,19 +193,26 @@ static const struct of_device_id rpmhpd_match_table[] = {
>  static int rpmhpd_send_corner(struct rpmhpd *pd, int state,
>                               unsigned int corner, bool sync)
>  {
> +       int ret;
>         struct tcs_cmd cmd = {
>                 .addr = pd->addr,
>                 .data = corner,
>         };
>
> +       rpmh_start_transaction(pd->dev);
> +
>         /*
>          * Wait for an ack only when we are increasing the
>          * perf state of the power domain
>          */
>         if (sync)
> -               return rpmh_write(pd->dev, state, &cmd, 1);
> +               ret = rpmh_write(pd->dev, state, &cmd, 1);
>         else
> -               return rpmh_write_async(pd->dev, state, &cmd, 1);
> +               ret = rpmh_write_async(pd->dev, state, &cmd, 1);
> +
> +       ret |= rpmh_end_transaction(pd->dev);

Again, no |=.

...and again, if starting/ending was optional you wouldn't need this change.



-Doug


* Re: [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
  2020-03-09 23:42   ` Doug Anderson
@ 2020-03-10 11:09     ` Maulik Shah
  0 siblings, 0 replies; 24+ messages in thread
From: Maulik Shah @ 2020-03-10 11:09 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao


On 3/10/2020 5:12 AM, Doug Anderson wrote:
> Hi,
>
> On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>> TCSes may contain previously programmed data when rpmh_flush() is called.
>> This can cause the old data to trigger along with the newly flushed data.
>>
>> Fix this by cleaning SLEEP and WAKE TCSes before new data is flushed.
>>
>> With this there is no need to invoke rpmh_rsc_invalidate() call from
>> rpmh_invalidate().
>>
>> Simplify rpmh_invalidate() by moving invalidate_batch() inside.
>>
>> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> ---
>>  drivers/soc/qcom/rpmh.c | 36 +++++++++++++++---------------------
>>  1 file changed, 15 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index 03630ae..5bed8f4 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -317,19 +317,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>>         return ret;
>>  }
>>
>> -static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>> -{
>> -       struct batch_cache_req *req, *tmp;
>> -       unsigned long flags;
>> -
>> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> -       list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>> -               kfree(req);
>> -       INIT_LIST_HEAD(&ctrlr->batch_cache);
>> -       ctrlr->dirty = true;
>> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> -}
>> -
>>  /**
>>   * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
>>   * batch to finish.
>> @@ -467,6 +454,11 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>                 return 0;
>>         }
>>
>> +       /* Invalidate the TCSes first to avoid stale data */
>> +       do {
>> +               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>> +       } while (ret == -EAGAIN);
>> +
> You forgot to actually check the return value.
>
> if (ret)
>   return ret;
Done.
>
>>         /* First flush the cached batch requests */
>>         ret = flush_batch(ctrlr);
>>         if (ret)
>> @@ -503,19 +495,21 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>   *
>>   * @dev: The device making the request
>>   *
>> - * Invalidate the sleep and active values in the TCS blocks.
>> + * Invalidate the sleep and wake values in batch_cache.
> Thanks for updating this.  It was on my todo list.  Can you also
> update the function description, which still says "Invalidate all
> sleep and active sets sets."  While you're at it, remove the double
> "sets".

Done.

Thanks,
Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-09 23:43   ` Doug Anderson
@ 2020-03-10 11:19     ` Maulik Shah
  2020-03-10 15:46       ` Doug Anderson
  0 siblings, 1 reply; 24+ messages in thread
From: Maulik Shah @ 2020-03-10 11:19 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao


On 3/10/2020 5:13 AM, Doug Anderson wrote:
> Hi,
>
> On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>> Add changes to invoke rpmh_flush() from within the cache_lock when the
>> data in the cache is dirty.
>>
>> Introduce two new APIs for this. Clients can use rpmh_start_transaction()
>> before any rpmh transaction; once done, they invoke rpmh_end_transaction(),
>> which internally invokes rpmh_flush() if the cache has become dirty.
>>
>> Add support to control this with flush_dirty flag.
>>
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
>> ---
>>  drivers/soc/qcom/rpmh-internal.h |  4 +++
>>  drivers/soc/qcom/rpmh-rsc.c      |  6 +++-
>>  drivers/soc/qcom/rpmh.c          | 64 ++++++++++++++++++++++++++++++++--------
>>  include/soc/qcom/rpmh.h          | 10 +++++++
>>  4 files changed, 71 insertions(+), 13 deletions(-)
> As mentioned previously but not addressed [3], I believe your series
> breaks things if there are zero ACTIVE TCSs and you're using the
> immediate-flush solution.  Specifically any attempt to set something's
> "active" state will clobber the sleep/wake.  I believe this is hard to
> fix, especially if you want rpmh_write_async() to work properly and
> need to be robust to the last man going down while rpmh_write_async()
> is running but hasn't finished.  My suggestion was to consider it to
> be an error at probe time for now.
>
> Actually, though, I'd be super surprised if the "active == 0" case
> works anyway.  Aside from subtle problems of not handling -EAGAIN (see
> another previous message that you didn't respond to [2]), I think
> you'll also get failures because you never enable interrupts in
> RSC_DRV_IRQ_ENABLE for anything other than the ACTIVE_TCS.  Thus
> you'll never get interrupts saying when your transactions on the
> borrowed "wake" TCS finish.

No, it shouldn't affect anything, even with "non-OSI-mode + 0 ACTIVE TCS".

I just replied on v9; pasting the same on v13 as well.

After taking your suggestion to add rpmh_start/end_transaction() in v13, rpmh_end_transaction()
invokes rpmh_flush() only for the last client, and by that time all rpmh_write() and
rpmh_write_batch() calls are expected to have already finished, since the client first waits for
them to complete and only then invokes end.

So the driver handles rpmh_write() and rpmh_write_batch() calls correctly.

Regarding the rpmh_write_async() call, which is a fire-and-forget request from SW: the client
driver may immediately invoke rpmh_end_transaction() after it.

This case is also handled properly. Let's take an example to understand it:

1.    The client invokes rpmh_write_async() to send ACTIVE cmds on a target which has zero ACTIVE TCSs.

    The rpmh driver re-purposes one of the SLEEP/WAKE TCSes to use as ACTIVE; internally this also
    sets drv->tcs_in_use to true for the respective SLEEP/WAKE TCS.

2.    The client, without waiting for the above to finish, goes ahead and invokes
    rpmh_end_transaction(), which calls rpmh_flush() (in case the cache became dirty).

    Now if the re-purposed TCS is still in use in HW (transaction in progress), we still have
    drv->tcs_in_use set. So rpmh_rsc_invalidate() (invoked from rpmh_flush()) will keep returning
    -EAGAIN until that TCS becomes free to use, and then it goes ahead and finishes its job.
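The retry behavior described above can be reduced to a small userspace model (illustrative only; the names mirror the driver but the bodies are simplified stand-ins, not the actual kernel code):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

static bool tcs_in_use;		/* set while a borrowed SLEEP/WAKE TCS is busy */
static int invalidate_calls;

/* Stand-in for rpmh_rsc_invalidate(): fails with -EAGAIN while the
 * borrowed TCS is still in use by hardware. */
static int rpmh_rsc_invalidate_model(void)
{
	invalidate_calls++;
	if (tcs_in_use) {
		/* Pretend the hardware finishes after a few polls. */
		if (invalidate_calls >= 3)
			tcs_in_use = false;
		return -EAGAIN;
	}
	return 0;
}

/* Stand-in for the rpmh_flush() path: keep retrying the invalidate
 * until the TCS frees up, as described in the example above. */
static int rpmh_flush_model(void)
{
	int ret;

	do {
		ret = rpmh_rsc_invalidate_model();
	} while (ret == -EAGAIN);

	return ret;
}
```

In this model, a flush started while the async transaction is still in flight simply spins on -EAGAIN until the hardware releases the TCS, then completes.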


I will add your Suggested-by in the next revision.


> Speaking of previous emails that you didn't respond to, I think you
> still have these action items:
>
> * Document that rpmh_write(active) and rpmh_write_async(active) also
> updates wake state. [1]
I will update in v14.
>
> * Change is_req_valid() to still return true if (sleep == wake), or
> keep track of "active" and return true if (sleep != wake || wake !=
> active). [1]
Not required, as just replied on v10.
> * Document that for batch a write to active doesn't update wake. [1]
I will update in v14.
>
>> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
>> index 6eec32b..d36be3d 100644
>> --- a/drivers/soc/qcom/rpmh-internal.h
>> +++ b/drivers/soc/qcom/rpmh-internal.h
>> @@ -70,13 +70,17 @@ struct rpmh_request {
>>   *
>>   * @cache: the list of cached requests
>>   * @cache_lock: synchronize access to the cache data
>> + * @active_clients: count of rpmh transaction in progress
>>   * @dirty: was the cache updated since flush
>> + * @flush_dirty: if the dirty cache need immediate flush
>>   * @batch_cache: Cache sleep and wake requests sent as batch
>>   */
>>  struct rpmh_ctrlr {
>>         struct list_head cache;
>>         spinlock_t cache_lock;
>> +       u32 active_clients;
>>         bool dirty;
>> +       bool flush_dirty;
>>         struct list_head batch_cache;
>>  };
>>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index e278fc1..b6391e1 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -61,6 +61,8 @@
>>  #define CMD_STATUS_ISSUED              BIT(8)
>>  #define CMD_STATUS_COMPL               BIT(16)
>>
>> +#define FLUSH_DIRTY                    1
>> +
>>  static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id)
>>  {
>>         return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
>> @@ -670,13 +672,15 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
>>         INIT_LIST_HEAD(&drv->client.cache);
>>         INIT_LIST_HEAD(&drv->client.batch_cache);
>>
>> +       drv->client.flush_dirty = device_get_match_data(&pdev->dev);
>> +
>>         dev_set_drvdata(&pdev->dev, drv);
>>
>>         return devm_of_platform_populate(&pdev->dev);
>>  }
>>
>>  static const struct of_device_id rpmh_drv_match[] = {
>> -       { .compatible = "qcom,rpmh-rsc", },
>> +       { .compatible = "qcom,rpmh-rsc", .data = (void *)FLUSH_DIRTY },
> Ick.  This is just confusing.  IMO better to set
> 'drv->client.flush_dirty = true' directly in probe with a comment
> saying that it could be removed if we had OSI.
Done.
I will set this bit earlier in probe with a comment, so later if we detect the RSC to be in a
hierarchy from [1], we can override it to 0 within rpmh_probe_power_domain().

[1] https://patchwork.kernel.org/patch/11391229/

>
> ...and while you're at it, why not fire off a separate patch (not in
> your series) adding the stub to 'include/linux/psci.h'.  Then when we
> revisit this in a year it'll be there and it'll be super easy to set
> the value properly.

With the above approach of setting it in probe, the PSCI change won't be required.

It is simpler, cleaner, and avoids any resistance from the PSCI side.

>
>>         { }
>>  };
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index 5bed8f4..9d40209 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -297,12 +297,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>>  {
>>         struct batch_cache_req *req;
>>         const struct rpmh_request *rpm_msg;
>> -       unsigned long flags;
>>         int ret = 0;
>>         int i;
>>
>>         /* Send Sleep/Wake requests to the controller, expect no response */
>> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>         list_for_each_entry(req, &ctrlr->batch_cache, list) {
>>                 for (i = 0; i < req->count; i++) {
>>                         rpm_msg = req->rpm_msgs + i;
>> @@ -312,7 +310,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>>                                 break;
>>                 }
>>         }
>> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>>         return ret;
>>  }
>> @@ -433,16 +430,63 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>>  }
>>
>>  /**
>> + * rpmh_start_transaction: Indicates start of rpmh transactions, this
>> + * must be ended by invoking rpmh_end_transaction().
>> + *
>> + * @dev: the device making the request
>> + */
>> +void rpmh_start_transaction(const struct device *dev)
>> +{
>> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
>> +       unsigned long flags;
>> +
>> +       if (!ctrlr->flush_dirty)
>> +               return;
>> +
>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> +       ctrlr->active_clients++;
> Wouldn't hurt to have something like:
>
> /*
>  * Detect likely leak; we shouldn't have 1000
>  * people making in-flight changes at the same time.
>  */
> WARN_ON(ctrlr->active_clients > 1000)
Not a necessary change.
>
>
>> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> +}
>> +EXPORT_SYMBOL(rpmh_start_transaction);
>> +
>> +/**
>> + * rpmh_end_transaction: Indicates end of rpmh transactions. All dirty data
>> + * in cache can be flushed immediately when ctrlr->flush_dirty is set
>> + *
>> + * @dev: the device making the request
>> + *
>> + * Return: 0 on success, error number otherwise.
>> + */
>> +int rpmh_end_transaction(const struct device *dev)
>> +{
>> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
>> +       unsigned long flags;
>> +       int ret = 0;
>> +
>> +       if (!ctrlr->flush_dirty)
>> +               return ret;
>> +
>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> WARN_ON(!active_clients);
Why? When active_clients becomes zero, we want to finally call rpmh_flush(); I don't see a reason to warn and then flush.

Or do you want a check that the client really called rpmh_start_transaction() before calling
rpmh_end_transaction(), so that ctrlr->active_clients-- doesn't go negative at the end? In that
case let me know and I will make those changes.
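The refcounting being discussed can be sketched in a small userspace model (hypothetical stand-in names, not the driver code): active_clients goes up on start and down on end, and the flush fires only when the last client ends with a dirty cache. A WARN_ON() on end-without-start would correspond to the assert below.

```c
#include <assert.h>
#include <stdbool.h>

struct ctrlr_model {
	unsigned int active_clients;
	bool dirty;
	int flush_count;
};

static void start_transaction(struct ctrlr_model *c)
{
	c->active_clients++;
}

static int end_transaction(struct ctrlr_model *c)
{
	/* An end without a matching start would decrement below zero;
	 * this is what a WARN_ON(!active_clients) would catch. */
	assert(c->active_clients > 0);

	c->active_clients--;
	if (c->dirty && !c->active_clients) {
		c->flush_count++;	/* stand-in for rpmh_flush() */
		c->dirty = false;
	}
	return 0;
}
```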

>
>
>> +
>> +       ctrlr->active_clients--;
>> +       if (ctrlr->dirty && !ctrlr->active_clients)
>> +               ret = rpmh_flush(ctrlr);
> As mentioned previously [2], I don't think it's valid to call
> rpmh_flush() with interrupts disabled.  Specifically (as of your
> previous patch) rpmh_flush now loops if rpmh_rsc_invalidate() returns
> -EAGAIN.  I believe that the caller needs to enable interrupts for a
> little bit before trying again.  If the caller doesn't need to enable
> interrupts for a little bit before trying again then why was -EAGAIN
> even returned?  tcs_invalidate() could have just looped itself and all
> the code would be much simpler.
I will check and address this.
>
>
>> +
>> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> +
>> +       return ret;
>> +}
>> +EXPORT_SYMBOL(rpmh_end_transaction);
>> +
>> +/**
>>   * rpmh_flush: Flushes the buffered active and sleep sets to TCS
>>   *
>>   * @ctrlr: controller making request to flush cached data
>>   *
>> - * Return: -EBUSY if the controller is busy, probably waiting on a response
>> - * to a RPMH request sent earlier.
>> + * Return: 0 on success, error number otherwise.
>>   *
>> - * This function is always called from the sleep code from the last CPU
>> - * that is powering down the entire system. Since no other RPMH API would be
>> - * executing at this time, it is safe to run lockless.
>> + * This function can either be called from sleep code on the last CPU
>> + * (thus no spinlock needed) or with the ctrlr->cache_lock already held.
>>   */
>>  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>  {
>> @@ -464,10 +508,6 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>         if (ret)
>>                 return ret;
>>
>> -       /*
>> -        * Nobody else should be calling this function other than system PM,
>> -        * hence we can run without locks.
>> -        */
>>         list_for_each_entry(p, &ctrlr->cache, list) {
>>                 if (!is_req_valid(p)) {
>>                         pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
>> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
>> index f9ec353..85e1ab2 100644
>> --- a/include/soc/qcom/rpmh.h
>> +++ b/include/soc/qcom/rpmh.h
>> @@ -22,6 +22,10 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>>
>>  int rpmh_invalidate(const struct device *dev);
>>
>> +void rpmh_start_transaction(const struct device *dev);
>> +
>> +int rpmh_end_transaction(const struct device *dev);
>> +
>>  #else
>>
>>  static inline int rpmh_write(const struct device *dev, enum rpmh_state state,
>> @@ -41,6 +45,12 @@ static inline int rpmh_write_batch(const struct device *dev,
>>  static inline int rpmh_invalidate(const struct device *dev)
>>  { return -ENODEV; }
>>
>> +void rpmh_start_transaction(const struct device *dev)
>> +{ return -ENODEV; }
> Unexpected return from void function.
>
Thanks, done.
>> +
>> +int rpmh_end_transaction(const struct device *dev)
>> +{ return -ENODEV; }
>> +
>>  #endif /* CONFIG_QCOM_RPMH */
>>
>>  #endif /* __SOC_QCOM_RPMH_H__ */
> [1] https://lore.kernel.org/r/CAD=FV=VzNnRdDN5uPYskJ6kQHq2bAi2ysEqt0=taagdd_qZb-g@mail.gmail.com
> [2] https://lore.kernel.org/r/CAD=FV=UYpO2rSOoF-OdZd3jKfSZGKnpQJPoiE5fzH+u1uafS6g@mail.gmail.com
> [3] https://lore.kernel.org/r/CAD=FV=VNaqwiti+UB8fLgjF5r2CD2xeF_p7qHS-_yXqf+ZDrBg@mail.gmail.com
>
>
>
> -Doug

Thanks,

Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions
  2020-03-09 23:44   ` Doug Anderson
@ 2020-03-10 11:46     ` Maulik Shah
  2020-03-10 15:46       ` Doug Anderson
  0 siblings, 1 reply; 24+ messages in thread
From: Maulik Shah @ 2020-03-10 11:46 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao, Taniya Das, Odelu Kukatla, Kiran Gunda, Sibi Sankar


On 3/10/2020 5:14 AM, Doug Anderson wrote:
> Hi,
>
> On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>> Update all rpmh clients to start using rpmh_start_transaction() and
>> rpmh_end_transaction().
>>
>> Cc: Taniya Das <tdas@codeaurora.org>
>> Cc: Odelu Kukatla <okukatla@codeaurora.org>
>> Cc: Kiran Gunda <kgunda@codeaurora.org>
>> Cc: Sibi Sankar <sibis@codeaurora.org>
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> ---
>>  drivers/clk/qcom/clk-rpmh.c             | 21 ++++++++++++++-------
>>  drivers/interconnect/qcom/bcm-voter.c   | 13 +++++++++----
>>  drivers/regulator/qcom-rpmh-regulator.c |  4 ++++
>>  drivers/soc/qcom/rpmhpd.c               | 11 +++++++++--
> This needs to be 4 separate patches since the change to each subsystem
> will go through a different maintainer.
I will split this into 4 changes and send each one to its respective mailing list and maintainer/reviewer.
>
> Also: it'll be a lot easier to land this if you make the new
> rpmh_start_transaction() and rpmh_end_transaction() calls _optional_
> for now, especially since they are just a speed optimization and not
> for correctness.  That is, if a driver makes a call to rpmh_write(),
> rpmh_write_async(), rpmh_write_batch(), or rpmh_invalidate() without
> doing rpmh_start_transaction() then it should still work

Yes, this is already taken care of.

All calls from a driver will go through as-is and won't fail even without calling the new APIs,
so they are already optional.

The comment on rpmh_start_transaction() already says that if a client chooses to invoke it,
then it must be ended by calling rpmh_end_transaction(). If the client doesn't invoke
rpmh_start_transaction() in the first place, everything is expected to work as if nothing changed.


> --just flush
> right away.  

No, currently no driver is calling rpmh_flush() anyway.

So nothing breaks with this series, and there is no point in adding changes to flush right away
only to remove them in the same series.

When the clients start invoking the new APIs, rpmh_flush() will start getting invoked for the
first time in the driver.

> Since you have rpmh_start_transaction() refcounted that's
> as simple as making a call to rpmh_start_transaction() at the
> beginning of all public calls and rpmh_end_transaction() at the end.
> If there was already a refcount then no harm done.  If there wasn't
> you'll get a flush at the end.
>
> Once you make the call optional, you can actually leave changing the
> callers until after your series lands.  Then you don't end up
> bothering all the other maintainers with the back-and-forth.

We don't need to sync up with all the other maintainers; the calls are already optional.

These new changes (once split into 4) can go into the various maintainers' trees in any order,
once this series lands in the rpmh driver.
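The alternative Doug suggests above (each public rpmh call internally brackets itself with start/end) can be sketched in a hedged userspace model (illustrative names only): a caller that never uses the new APIs gets an immediate flush, while an explicit outer start/end defers the flush to the outermost end.

```c
#include <assert.h>
#include <stdbool.h>

struct ctrlr {
	unsigned int active_clients;
	bool dirty;
	int flushes;
};

static void start(struct ctrlr *c)
{
	c->active_clients++;
}

static void end(struct ctrlr *c)
{
	if (--c->active_clients == 0 && c->dirty) {
		c->flushes++;		/* stand-in for rpmh_flush() */
		c->dirty = false;
	}
}

/* A public call brackets itself; if the caller already holds an outer
 * start, the inner end sees a nonzero refcount and defers the flush. */
static void rpmh_write_model(struct ctrlr *c)
{
	start(c);
	c->dirty = true;	/* the write dirtied the cache */
	end(c);
}
```

Under this sketch the new APIs stay optional long term, and callers that never adopt them still end up flushing right away.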

>
> Once all callers are updated you can make the call required.  ...or
> (as noted below) maybe we should just keep it optional...
>
> One last note here: you have a regulator change here but aren't
> sending it to the regulator maintainer.  That won't work.  You also
> have an interconnect change without sending it to the interconnect
> maintainer.
Done.
>
>
>>  4 files changed, 36 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
>> index 12bd871..16f68d4 100644
>> --- a/drivers/clk/qcom/clk-rpmh.c
>> +++ b/drivers/clk/qcom/clk-rpmh.c
>> @@ -154,22 +154,27 @@ static int clk_rpmh_send_aggregate_command(struct clk_rpmh *c)
>>         cmd_state = c->aggr_state;
>>         on_val = c->res_on_val;
>>
>> +       rpmh_start_transaction(c->dev);
>> +
>>         for (; state <= RPMH_ACTIVE_ONLY_STATE; state++) {
>>                 if (has_state_changed(c, state)) {
>>                         if (cmd_state & BIT(state))
>>                                 cmd.data = on_val;
>>
>>                         ret = rpmh_write_async(c->dev, state, &cmd, 1);
>> -                       if (ret) {
>> -                               dev_err(c->dev, "set %s state of %s failed: (%d)\n",
>> -                                       !state ? "sleep" :
>> -                                       state == RPMH_WAKE_ONLY_STATE   ?
>> -                                       "wake" : "active", c->res_name, ret);
>> -                               return ret;
>> -                       }
>> +                       if (ret)
>> +                               break;
>>                 }
>>         }
>>
>> +       ret |= rpmh_end_transaction(c->dev);
> You can't do this.  "ret" is an integer and you're munging two error
> codes into one int.  I don't think there is any clever way to do this,
> but probably this would be fine (the compiler should optimize):
>
> if (ret)
>   rpmh_end_transaction(c->dev);
> else
>   ret = rpmh_end_transaction(c->dev);
>
> ...or just leave the "dev_err" and "return ret" where they were and
> call rpmh_end_transaction() above without looking at the return value.
Done.
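The problem with `ret |= rpmh_end_transaction(...)` can be shown with a minimal userspace demo (illustrative only): Linux errors are small negative integers, so a bitwise OR of two of them yields a value that maps back to neither error (on Linux, -EINVAL | -EAGAIN happens to come out as -1, i.e. -EPERM).

```c
#include <assert.h>
#include <errno.h>

/* The pattern being rejected: bitwise OR munges two error codes. */
static int combine_wrong(int a, int b)
{
	return a | b;
}

/* The pattern Doug suggests instead: keep the first failure,
 * otherwise take the second call's result. */
static int combine_right(int a, int b)
{
	return a ? a : b;
}
```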
>
>
>> +       if (ret) {
>> +               dev_err(c->dev, "set %s state of %s failed: (%d)\n",
>> +                       !state ? "sleep" : state == RPMH_WAKE_ONLY_STATE ?
>> +                       "wake" : "active", c->res_name, ret);
>> +               return ret;
>> +       }
> Technically the error message above is now misleading if the
> "end_transaction" failed.  Namely it will blame things on the active
> only state whereas that wasn't the problem.
>
Done.
>> +
>>         c->last_sent_aggr_state = c->aggr_state;
>>         c->peer->last_sent_aggr_state =  c->last_sent_aggr_state;
>>
>> @@ -267,7 +272,9 @@ static int clk_rpmh_bcm_send_cmd(struct clk_rpmh *c, bool enable)
>>         cmd.addr = c->res_addr;
>>         cmd.data = BCM_TCS_CMD(1, enable, 0, cmd_state);
>>
>> +       rpmh_start_transaction(c->dev);
>>         ret = rpmh_write_async(c->dev, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);
>> +       ret |= rpmh_end_transaction(c->dev);
> Again, no |=
Done.
>
> Also: one argument for keeping start_transaction and end_transaction
> optional long term is that you could completely eliminate this change.
It's already optional, as described above.
>
>>         if (ret) {
>>                 dev_err(c->dev, "set active state of %s failed: (%d)\n",
>>                         c->res_name, ret);
>> diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
>> index 2adfde8..fbe18b2 100644
>> --- a/drivers/interconnect/qcom/bcm-voter.c
>> +++ b/drivers/interconnect/qcom/bcm-voter.c
>> @@ -263,7 +263,9 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
>>         tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_AMC, cmds, commit_idx);
>>
>>         if (!commit_idx[0])
>> -               goto out;
>> +               goto end;
>> +
>> +       rpmh_start_transaction(voter-dev);
>>
>>         ret = rpmh_invalidate(voter->dev);
>>         if (ret) {
>> @@ -312,12 +314,15 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
>>         tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_SLEEP, cmds, commit_idx);
>>
>>         ret = rpmh_write_batch(voter->dev, RPMH_SLEEP_STATE, cmds, commit_idx);
>> -       if (ret) {
>> +       if (ret)
>>                 pr_err("Error sending SLEEP RPMH requests (%d)\n", ret);
>> -               goto out;
>> -       }
>>
>>  out:
>> +       ret = rpmh_end_transaction(voter-dev);
>> +       if (ret)
>> +               pr_err("Error ending rpmh transaction (%d)\n", ret);
>> +
>> +end:
> Personally I don't think "out" and "end" are very descriptive.  My own
> favorite is to name these types of labels based on what has been done
> so far.  So:
>
> exit_started_rpmh_transaction:
> exit_constructed_list:
>
Done.
>>         list_for_each_entry_safe(bcm, bcm_tmp, &voter->commit_list, list)
>>                 list_del_init(&bcm->list);
>>
>> diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
>> index c86ad40..f4b9176 100644
>> --- a/drivers/regulator/qcom-rpmh-regulator.c
>> +++ b/drivers/regulator/qcom-rpmh-regulator.c
>> @@ -163,12 +163,16 @@ static int rpmh_regulator_send_request(struct rpmh_vreg *vreg,
>>  {
>>         int ret;
>>
>> +       rpmh_start_transaction(vreg->dev);
>> +
>>         if (wait_for_ack || vreg->always_wait_for_ack)
>>                 ret = rpmh_write(vreg->dev, RPMH_ACTIVE_ONLY_STATE, cmd, 1);
>>         else
>>                 ret = rpmh_write_async(vreg->dev, RPMH_ACTIVE_ONLY_STATE, cmd,
>>                                         1);
>>
>> +       ret |= rpmh_end_transaction(vreg->dev);
> Again, no |=.
Done.
> ...and again, if starting/ending was optional you wouldn't need this change.
>
It's already optional, as described above.
>> +
>>         return ret;
>>  }
>>
>> diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
>> index 4d264d0..0e9d204 100644
>> --- a/drivers/soc/qcom/rpmhpd.c
>> +++ b/drivers/soc/qcom/rpmhpd.c
>> @@ -193,19 +193,26 @@ static const struct of_device_id rpmhpd_match_table[] = {
>>  static int rpmhpd_send_corner(struct rpmhpd *pd, int state,
>>                               unsigned int corner, bool sync)
>>  {
>> +       int ret;
>>         struct tcs_cmd cmd = {
>>                 .addr = pd->addr,
>>                 .data = corner,
>>         };
>>
>> +       rpmh_start_transaction(pd->dev);
>> +
>>         /*
>>          * Wait for an ack only when we are increasing the
>>          * perf state of the power domain
>>          */
>>         if (sync)
>> -               return rpmh_write(pd->dev, state, &cmd, 1);
>> +               ret = rpmh_write(pd->dev, state, &cmd, 1);
>>         else
>> -               return rpmh_write_async(pd->dev, state, &cmd, 1);
>> +               ret = rpmh_write_async(pd->dev, state, &cmd, 1);
>> +
>> +       ret |= rpmh_end_transaction(pd->dev);
> Again, no |=.
Done.
>
> ...and again, if starting/ending was optional you wouldn't need this change.

It's already optional, as described above.

Thanks,
Maulik

>
>
> -Doug

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets
  2020-03-09 23:42 ` [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Doug Anderson
@ 2020-03-10 11:49   ` Maulik Shah
  0 siblings, 0 replies; 24+ messages in thread
From: Maulik Shah @ 2020-03-10 11:49 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao


On 3/10/2020 5:12 AM, Doug Anderson wrote:
> Hi,
>
> On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>> Changes in v13:
>> - Address Stephen's comment to maintain COMPILE_TEST
>> - Address Doug's comments and add new APIs for start and end transaction
>>
>> Changes in v12:
>> - Kconfig change to remove COMPILE_TEST was dropped in v11, reinclude it.
>>
>> Changes in v11:
>> - Address Doug's comments on change 2 and 3
>> - Include change to invalidate TCSes before flush from [4]
>>
>> Changes in v10:
>> - Address Evan's comments to update commit message on change 2
>> - Add Evan's Reviewed by on change 2
>> - Remove comment from rpmh_flush() related to last CPU invoking it
>> - Rebase all changes on top of next-20200302
>>
>> Changes in v9:
>> - Keep rpmh_flush() to invoke from within cache_lock
>> - Remove comments related to only last cpu invoking rpmh_flush()
>>
>> Changes in v8:
>> - Address Stephen's comments on changes 2 and 3
>> - Add Reviewed by from Stephen on change 1
>>
>> Changes in v7:
>> - Address Srinivas's comments to update commit text
>> - Add Reviewed by from Srinivas
>>
>> Changes in v6:
>> - Drop 1 & 2 changes from v5 as they already landed in maintainer tree
>> - Drop 3 & 4 changes from v5 as no user at present for power domain in rsc
>> - Rename subject to appropriate since power domain changes are dropped
>> - Rebase other changes on top of next-20200221
>>
>> Changes in v5:
>> - Add Rob's Acked by on dt-bindings change
>> - Drop firmware psci change
>> - Update cpuidle stats in dtsi to follow PC mode
>> - Include change to update dirty flag when data is updated from [4]
>> - Add change to invoke rpmh_flush when caches are dirty
>>
>> Changes in v4:
>> - Add change to allow hierarchical topology in PC mode
>> - Drop hierarchical domain idle states converter from v3
>> - Address Merge sc7180 dtsi change to add low power modes
>>
>> Changes in v3:
>> - Address Rob's comment on dt property value
>> - Address Stephen's comments on rpmh-rsc driver change
>> - Include sc7180 cpuidle low power mode changes from [1]
>> - Include hierarchical domain idle states converter change from [2]
>>
>> Changes in v2:
>> - Add Stephen's Reviewed-By to the first three patches
>> - Addressed Stephen's comments on fourth patch
>> - Include changes to connect rpmh domain to cpuidle and genpds
>>
>> Resource State Coordinator (RSC) is responsible for powering off/lowering
>> the requirements from CPU subsystem for the associated hardware like buses,
>> clocks, and regulators when all CPUs and cluster is powered down.
>>
>> RSC power domain uses last-man activities provided by genpd framework based
>> on Ulf Hansoon's patch series[3], when the cluster of CPUs enter deepest
>> idle states. As a part of domain poweroff, RSC can lower resource state
>> requirements by flushing the cached sleep and wake state votes for various
>> resources.
>>
>> [1] https://patchwork.kernel.org/patch/11218965
>> [2] https://patchwork.kernel.org/patch/10941671
>> [3] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=222355
>> [4] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=236503
>>
>> Maulik Shah (3):
>>   arm64: dts: qcom: sc7180: Add cpuidle low power states
>>   soc: qcom: rpmh: Update dirty flag only when data changes
>>   soc: qcom: rpmh: Invoke rpmh_flush for dirty caches
>>
>>  arch/arm64/boot/dts/qcom/sc7180.dtsi | 78 ++++++++++++++++++++++++++++++++++++
>>  drivers/soc/qcom/rpmh.c              | 27 ++++++++++---
>>  2 files changed, 100 insertions(+), 5 deletions(-)
> Did you happen to get a chance to test your patch against my cleanup /
> documentation patch?  AKA:
>
> https://lore.kernel.org/r/20200306235951.214678-1-dianders@chromium.org
>
> -Doug
Hi Doug,

Not yet. I will update you on your patches.

Thanks,
Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-10 11:19     ` Maulik Shah
@ 2020-03-10 15:46       ` Doug Anderson
  2020-03-11  6:36         ` Maulik Shah
  0 siblings, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-10 15:46 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On Tue, Mar 10, 2020 at 4:19 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
>
> On 3/10/2020 5:13 AM, Doug Anderson wrote:
> > Hi,
> >
> > On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
> >> Add changes to invoke rpmh flush() from within cache_lock when the data in
> >> cache is dirty.
> >>
> >> Introduce two new APIs for this. Clients can use rpmh_start_transaction()
> >> before any rpmh transaction once done invoke rpmh_end_transaction() which
> >> internally invokes rpmh_flush() if the caches has become dirty.
> >>
> >> Add support to control this with flush_dirty flag.
> >>
> >> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> >> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
> >> ---
> >>  drivers/soc/qcom/rpmh-internal.h |  4 +++
> >>  drivers/soc/qcom/rpmh-rsc.c      |  6 +++-
> >>  drivers/soc/qcom/rpmh.c          | 64 ++++++++++++++++++++++++++++++++--------
> >>  include/soc/qcom/rpmh.h          | 10 +++++++
> >>  4 files changed, 71 insertions(+), 13 deletions(-)
> > As mentioned previously but not addressed [3], I believe your series
> > breaks things if there are zero ACTIVE TCSs and you're using the
> > immediate-flush solution.  Specifically any attempt to set something's
> > "active" state will clobber the sleep/wake.  I believe this is hard to
> > fix, especially if you want rpmh_write_async() to work properly and
> > need to be robust to the last man going down while rpmh_write_async()
> > is running but hasn't finished.  My suggestion was to consider it to
> > be an error at probe time for now.
> >
> > Actually, though, I'd be super surprised if the "active == 0" case
> > works anyway.  Aside from subtle problems of not handling -EAGAIN (see
> > another previous message that you didn't respond to [2]), I think
> > you'll also get failures because you never enable interrupts in
> > RSC_DRV_IRQ_ENABLE for anything other than the ACTIVE_TCS.  Thus
> > you'll never get interrupts saying when your transactions on the
> > borrowed "wake" TCS finish.
>
> No, it shouldn’t effect even with "non-OSI-mode + 0 ACTIVE TCS"
>
> i just replied on v9, pasting same on v13 as well.
>
> After taking your suggestion to do rpmh start/end transaction() in v13, rpmh_end_transaction()
> invokes rpmh_flush() only for the last client and by this time expecting all of rpmh_write()
> and rpmh_write_batch() will be already “finished” as client first waits for them to finish
> and then only invokes end.
>
> So driver is good to handle rpmh_write() and rpmh_write_batch() calls.

Ah, right.  In the previous version of the patch it was a problem
because you flushed in cache_rpm_request() and then clobbered it right
away in __rpmh_write() when you did rpmh_rsc_send_data().  With v13
that is not the case anymore.  So this case should be OK.


> Regarding rpmh_write_async() call, which is a fire-n-forget request from SW and client driver
> may immediately invoke rpmh_end_transaction() after this.
>
> this case is also handled properly…
> Lets again take an example for understanding this..
>
> 1.    Client invokes rpmh_write_async() to send ACTIVE cmds for targets which has zero ACTIVE TCS
>
>     Rpmh driver Re-purposes one of SLEEP/WAKE TCS to use as ACTIVE, internally this also sets
>     drv->tcs_in_use to true for respective SLEEP/WAKE TCS.
>
> 2.    Client now without waiting for above to finish, goes ahead and invokes rpmh_end_transaction()
>     which calls rpmh_flush() (in case cache become dirty)

I guess we'd have to confirm that there's no way for the cache to
_not_ become dirty but you do an "active" transaction.  Let's imagine
this case:

start transaction
rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x11);
rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);
end transaction

start transaction
rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x11);
rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);
end transaction

In other words the client does the same sequence twice in a row with
no change in data.  After the first set I think you'd be fine.  ...but
after the second set you'll be in trouble.  That's because none of the
calls in the second set would cause the "dirty" to be set.  ...but for
"active only" calls we don't have any sort of cache--we just fire it
off.  When we fire off the active only we'll clobber the sleep/wake
TCS.  ...and then nothing will write them again because the cache
isn't dirty.
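This clobber scenario can be reduced to a small userspace model (hypothetical, assuming zero ACTIVE TCSs and a dirty flag that is only set when cached data changes; the names are stand-ins, not driver code). The second, identical transaction never dirties the cache, so nothing re-programs the borrowed sleep TCS after the active-only send clobbers it.

```c
#include <assert.h>
#include <stdbool.h>

static bool dirty;
static int cached_sleep = -1;	/* what the rpmh cache thinks sleep holds */
static int sleep_tcs = -1;	/* what is actually programmed in the TCS */

/* With zero ACTIVE TCSs, an active-only send borrows (and clobbers)
 * the sleep TCS. */
static void write_active(int data)
{
	sleep_tcs = data;
}

/* Sleep writes only mark the cache dirty when the value changes. */
static void write_sleep(int data)
{
	if (cached_sleep != data) {
		cached_sleep = data;
		dirty = true;
	}
}

/* End of transaction flushes (re-programs the TCS) only when dirty. */
static void end_transaction(void)
{
	if (dirty) {
		sleep_tcs = cached_sleep;
		dirty = false;
	}
}
```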


>     Now if re-purposed TCS is still in use in HW (transaction in progress), we still have
>     drv->tcs_in_use set. So the rpmh_rsc_invalidate() (invoked from rpmh_flush()) will keep on
>     returning -EAGAIN until that TCS becomes free to use and then goes ahead to finish its job.

Ah, interesting.  I still think you have problems I pointed out in
another response because you never enable interrupts for the "WAKE
TCS", but I can see how this could be made to work.  In this case
"async" becomes a little silly here because the flush will essentially
be forced to wait until the transaction is totally done (so the TCS is
free again), but it should be able to work.  This might actually point
out something that needs to be changed in my "clean up" series since I
guess "tcs_in_use" could sometimes be present in a wake TCS now.


> >> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> >> index e278fc1..b6391e1 100644
> >> --- a/drivers/soc/qcom/rpmh-rsc.c
> >> +++ b/drivers/soc/qcom/rpmh-rsc.c
> >> @@ -61,6 +61,8 @@
> >>  #define CMD_STATUS_ISSUED              BIT(8)
> >>  #define CMD_STATUS_COMPL               BIT(16)
> >>
> >> +#define FLUSH_DIRTY                    1
> >> +
> >>  static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id)
> >>  {
> >>         return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
> >> @@ -670,13 +672,15 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
> >>         INIT_LIST_HEAD(&drv->client.cache);
> >>         INIT_LIST_HEAD(&drv->client.batch_cache);
> >>
> >> +       drv->client.flush_dirty = device_get_match_data(&pdev->dev);
> >> +
> >>         dev_set_drvdata(&pdev->dev, drv);
> >>
> >>         return devm_of_platform_populate(&pdev->dev);
> >>  }
> >>
> >>  static const struct of_device_id rpmh_drv_match[] = {
> >> -       { .compatible = "qcom,rpmh-rsc", },
> >> +       { .compatible = "qcom,rpmh-rsc", .data = (void *)FLUSH_DIRTY },
> > Ick.  This is just confusing.  IMO better to set
> > 'drv->client.flush_dirty = true' directly in probe with a comment
> > saying that it could be removed if we had OSI.
> Done.
> I will keep this bit earlier in probe with a comment, so later, if we detect the RSC to be in hierarchy
> from [1], we can override this to 0 within rpmh_probe_power_domain().
>
> [1] https://patchwork.kernel.org/patch/11391229/

I don't really understand, but maybe it'll be obvious when I see the code.



> > ...and while you're at it, why not fire off a separate patch (not in
> > your series) adding the stub to 'include/linux/psci.h'.  Then when we
> > revisit this in a year it'll be there and it'll be super easy to set
> > the value properly.
>
> With the above approach of setting it in probe, the PSCI change won't be required.
>
> It will be simpler, cleaner and without any resistance from the PSCI perspective.
>
> >
> >>         { }
> >>  };
> >>
> >> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> >> index 5bed8f4..9d40209 100644
> >> --- a/drivers/soc/qcom/rpmh.c
> >> +++ b/drivers/soc/qcom/rpmh.c
> >> @@ -297,12 +297,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
> >>  {
> >>         struct batch_cache_req *req;
> >>         const struct rpmh_request *rpm_msg;
> >> -       unsigned long flags;
> >>         int ret = 0;
> >>         int i;
> >>
> >>         /* Send Sleep/Wake requests to the controller, expect no response */
> >> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> >>         list_for_each_entry(req, &ctrlr->batch_cache, list) {
> >>                 for (i = 0; i < req->count; i++) {
> >>                         rpm_msg = req->rpm_msgs + i;
> >> @@ -312,7 +310,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
> >>                                 break;
> >>                 }
> >>         }
> >> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> >>
> >>         return ret;
> >>  }
> >> @@ -433,16 +430,63 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
> >>  }
> >>
> >>  /**
> >> + * rpmh_start_transaction: Indicates start of rpmh transactions, this
> >> + * must be ended by invoking rpmh_end_transaction().
> >> + *
> >> + * @dev: the device making the request
> >> + */
> >> +void rpmh_start_transaction(const struct device *dev)
> >> +{
> >> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> >> +       unsigned long flags;
> >> +
> >> +       if (!ctrlr->flush_dirty)
> >> +               return;
> >> +
> >> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> >> +       ctrlr->active_clients++;
> > Wouldn't hurt to have something like:
> >
> > /*
> >  * Detect likely leak; we shouldn't have 1000
> >  * people making in-flight changes at the same time.
> >  */
> > WARN_ON(ctrlr->active_clients > 1000)
> Not a necessary change.

Yes, but it will catch buggy clients much more quickly.  What are the downsides?


> >> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> >> +}
> >> +EXPORT_SYMBOL(rpmh_start_transaction);
> >> +
> >> +/**
> >> + * rpmh_end_transaction: Indicates end of rpmh transactions. All dirty data
> >> + * in cache can be flushed immediately when ctrlr->flush_dirty is set
> >> + *
> >> + * @dev: the device making the request
> >> + *
> >> + * Return: 0 on success, error number otherwise.
> >> + */
> >> +int rpmh_end_transaction(const struct device *dev)
> >> +{
> >> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> >> +       unsigned long flags;
> >> +       int ret = 0;
> >> +
> >> +       if (!ctrlr->flush_dirty)
> >> +               return ret;
> >> +
> >> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> > WARN_ON(!active_clients);
> Why? When active_clients becomes zero, we want to finally call rpmh_flush(); I don't see a reason to warn and then flush.
>
> Or do you want a check that the client really called rpmh_start_transaction() before calling rpmh_end_transaction(), so that when we do
> ctrlr->active_clients--;
>
> it shouldn't go negative at the end? In that case let me know and I will make those changes.

Right, I want to warn on underflow.  AKA:

WARN_ON(!ctrlr->active_clients);
ctrlr->active_clients--;

Generally it's handy to detect mismatches of start/end.  It could be
that someone accidentally starts twice and ends once.  Starts zero
times and ends once.  Starts once and ends twice.  All of these are
interesting cases to warn about.
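The counting scheme being suggested can be sketched in plain C. This is a simplified userspace model under stated assumptions: no locking, a plain int flag standing in for WARN_ON(), and an arbitrary leak bound of 1000, none of which are the actual rpmh.c code.

```c
#include <assert.h>

struct ctrlr {
    int active_clients;
    int warned;             /* stand-in for a WARN_ON() having fired */
};

static void rpmh_start_transaction_model(struct ctrlr *c)
{
    c->active_clients++;
    /* Detect a likely leak of start calls: nobody should have ~1000
     * in-flight transactions at once (arbitrary sanity bound). */
    if (c->active_clients > 1000)
        c->warned = 1;
}

static int rpmh_end_transaction_model(struct ctrlr *c)
{
    /* Warn on underflow *before* decrementing, so a stray end
     * (zero starts and one end, or one start and two ends) is
     * caught at the call that is actually wrong. */
    if (!c->active_clients) {
        c->warned = 1;
        return -1;
    }
    if (--c->active_clients == 0)
        return 1;               /* last client: flush would run here */
    return 0;                   /* still nested: defer the flush */
}
```

A balanced start/end pair flushes once and never warns; an extra end trips the underflow check immediately instead of silently wrapping the counter.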



-Doug

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions
  2020-03-10 11:46     ` Maulik Shah
@ 2020-03-10 15:46       ` Doug Anderson
  0 siblings, 0 replies; 24+ messages in thread
From: Doug Anderson @ 2020-03-10 15:46 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao, Taniya Das, Odelu Kukatla, Kiran Gunda, Sibi Sankar

Hi,

On Tue, Mar 10, 2020 at 4:47 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
>
> On 3/10/2020 5:14 AM, Doug Anderson wrote:
> > Hi,
> >
> > On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
> >> Update all rpmh clients to start using rpmh_start_transaction() and
> >> rpmh_end_transaction().
> >>
> >> Cc: Taniya Das <tdas@codeaurora.org>
> >> Cc: Odelu Kukatla <okukatla@codeaurora.org>
> >> Cc: Kiran Gunda <kgunda@codeaurora.org>
> >> Cc: Sibi Sankar <sibis@codeaurora.org>
> >> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> >> ---
> >>  drivers/clk/qcom/clk-rpmh.c             | 21 ++++++++++++++-------
> >>  drivers/interconnect/qcom/bcm-voter.c   | 13 +++++++++----
> >>  drivers/regulator/qcom-rpmh-regulator.c |  4 ++++
> >>  drivers/soc/qcom/rpmhpd.c               | 11 +++++++++--
> > This needs to be 4 separate patches since the change to each subsystem
> > will go through a different maintainer.
> I will split it into 4 changes and send each one to its respective mailing list and maintainer/reviewer.
> >
> > Also: it'll be a lot easier to land this if you make the new
> > rpmh_start_transaction() and rpmh_end_transaction() calls _optional_
> > for now, especially since they are just a speed optimization and not
> > for correctness.  That is, if a driver makes a call to rpmh_write(),
> > rpmh_write_async(), rpmh_write_batch(), or rpmh_invalidate() without
> > doing rpmh_start_transaction() then it should still work
>
> Yes, this is already taken care of.
>
> All the calls from the driver will go through as-is and won't fail even without calling the new APIs.
> So they are already optional.
>
> The comment in rpmh_start_transaction() already says that if a client "chooses" to invoke it,
> then it must be ended by calling rpmh_end_transaction(). If the client doesn't invoke
> rpmh_start_transaction() in the first place, then everything is expected to work as if nothing changed.

I think you may have misunderstood.  With your v13 it's not optional
because the flush won't happen unless the clients call
rpmh_start_transaction() and rpmh_end_transaction().  ...but the flush
is necessary for correctness, right?


> > --just flush
> > right away.
>
> No, currently no one in the driver is calling rpmh_flush() either.
>
> So nothing breaks with this series, and there is no point in adding changes to flush right away and then removing them in the same series.
>
> When the clients start invoking the new APIs, rpmh_flush() will start getting invoked for the first time in the driver.

I'm not saying to expose flush to clients.  I'm saying that you should
modify the implementation of the calls in rpmh.c.  AKA in rpmh.c you
have:

rpmh_write():
  start_transaction()
  ... the contents of rpmh_write() from your v13 ...
  end_transaction()

rpmh_write_async():
  start_transaction()
  ... the contents of rpmh_write_async() from your v13 ...
  end_transaction()

rpmh_write_batch()
  start_transaction()
  ... the contents of rpmh_write_batch() from your v13 ...
  end_transaction()

rpmh_invalidate()
  start_transaction()
  ... the contents of rpmh_invalidate() from your v13 ...
  end_transaction()


If a client calls rpmh_write() without anything else, they will get an
implicit flush.

If a client does this:

start_transaction()
rpmh_invalidate()
rpmh_write_batch()
rpmh_write_batch()
end_transaction()

That translates to:

start_transaction()
  start_transaction()
  ... the contents of rpmh_invalidate() from your v13 ...
  end_transaction()
  start_transaction()
  ... the contents of rpmh_write_batch() from your v13 ...
  end_transaction()
  start_transaction()
  ... the contents of rpmh_write_batch() from your v13 ...
  end_transaction()
end_transaction()

...then you'll get one flush at the end.  That's because the
start_transaction() embedded in rpmh_invalidate() and
rpmh_write_batch() will increase the usage count to 2 and then
decrease back to 1.  Since it won't be 0 you won't get flushes from
the embedded start/end.  You'll just get one flush at the end.


Now start_transaction() and end_transaction() are truly optional.  If
a client doesn't call them they'll get an implicit flush at the end of
every call.  If they do call start/end themselves they can postpone
the flush till the true end.

...and because it's truly optional, you can drop several of the
changes in your series.
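The "truly optional" behavior described above falls out of the refcount naturally, and can be sketched as a small C model. This is a simplified illustration, not the proposed patch: locking is omitted and `flush_count` merely counts where rpmh_flush() would run.

```c
#include <assert.h>

struct ctrlr {
    int active_clients;
    int flush_count;        /* how many times rpmh_flush() would run */
};

static void start_transaction(struct ctrlr *c)
{
    c->active_clients++;
}

static void end_transaction(struct ctrlr *c)
{
    if (--c->active_clients == 0)
        c->flush_count++;   /* rpmh_flush() would run here */
}

/* Each public API call wraps its own body in start/end.  A bare call
 * therefore gets an implicit flush; a call made inside an explicit
 * client transaction only bumps the count to 2 and back to 1, so the
 * single flush happens at the true end. */
static void rpmh_write_model(struct ctrlr *c)
{
    start_transaction(c);
    /* ... body of rpmh_write() ... */
    end_transaction(c);
}
```

One bare call flushes once; an explicit transaction around several calls still flushes exactly once, at the outer end.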



-Doug


* Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-10 15:46       ` Doug Anderson
@ 2020-03-11  6:36         ` Maulik Shah
  2020-03-11 23:06           ` Doug Anderson
  0 siblings, 1 reply; 24+ messages in thread
From: Maulik Shah @ 2020-03-11  6:36 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On 3/10/2020 9:16 PM, Doug Anderson wrote:
> Hi,
>
> On Tue, Mar 10, 2020 at 4:19 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>>
>> On 3/10/2020 5:13 AM, Doug Anderson wrote:
>>> Hi,
>>>
>>> On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>>>> Add changes to invoke rpmh_flush() from within the cache_lock when the data in
>>>> the cache is dirty.
>>>>
>>>> Introduce two new APIs for this. Clients can use rpmh_start_transaction()
>>>> before any rpmh transaction; once done, invoke rpmh_end_transaction(), which
>>>> internally invokes rpmh_flush() if the cache has become dirty.
>>>>
>>>> Add support to control this with flush_dirty flag.
>>>>
>>>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>>>> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
>>>> ---
>>>>  drivers/soc/qcom/rpmh-internal.h |  4 +++
>>>>  drivers/soc/qcom/rpmh-rsc.c      |  6 +++-
>>>>  drivers/soc/qcom/rpmh.c          | 64 ++++++++++++++++++++++++++++++++--------
>>>>  include/soc/qcom/rpmh.h          | 10 +++++++
>>>>  4 files changed, 71 insertions(+), 13 deletions(-)
>>> As mentioned previously but not addressed [3], I believe your series
>>> breaks things if there are zero ACTIVE TCSs and you're using the
>>> immediate-flush solution.  Specifically any attempt to set something's
>>> "active" state will clobber the sleep/wake.  I believe this is hard to
>>> fix, especially if you want rpmh_write_async() to work properly and
>>> need to be robust to the last man going down while rpmh_write_async()
>>> is running but hasn't finished.  My suggestion was to consider it to
>>> be an error at probe time for now.
>>>
>>> Actually, though, I'd be super surprised if the "active == 0" case
>>> works anyway.  Aside from subtle problems of not handling -EAGAIN (see
>>> another previous message that you didn't respond to [2]), I think
>>> you'll also get failures because you never enable interrupts in
>>> RSC_DRV_IRQ_ENABLE for anything other than the ACTIVE_TCS.  Thus
>>> you'll never get interrupts saying when your transactions on the
>>> borrowed "wake" TCS finish.
>> No, it shouldn't have an effect even with "non-OSI-mode + 0 ACTIVE TCS".
>>
>> I just replied on v9; pasting the same on v13 as well.
>>
>> After taking your suggestion to do rpmh start/end transaction() in v13, rpmh_end_transaction()
>> invokes rpmh_flush() only for the last client, and by this time all of the rpmh_write()
>> and rpmh_write_batch() calls are expected to be already "finished", as the client first waits for them
>> to finish and only then invokes end.
>>
>> So the driver is good to handle rpmh_write() and rpmh_write_batch() calls.
> Ah, right.  In the previous version of the patch it was a problem
> because you flushed in cache_rpm_request() and then clobbered it right
> away in __rpmh_write() when you did rpmh_rsc_send_data().  With v13
> that is not the case anymore.  So this case should be OK.
>
>
>> Regarding the rpmh_write_async() call, which is a fire-and-forget request from SW: the client driver
>> may immediately invoke rpmh_end_transaction() after this.
>>
>> This case is also handled properly…
>> Let's again take an example for understanding this...
>>
>> 1.    The client invokes rpmh_write_async() to send ACTIVE cmds for targets which have zero ACTIVE TCS.
>>
>>     The rpmh driver re-purposes one of the SLEEP/WAKE TCSes to use as ACTIVE; internally this also sets
>>     drv->tcs_in_use to true for the respective SLEEP/WAKE TCS.
>>
>> 2.    The client now, without waiting for the above to finish, goes ahead and invokes rpmh_end_transaction(),
>>     which calls rpmh_flush() (in case the cache became dirty).
> I guess we'd have to confirm that there's no way for the cache to
> _not_ become dirty but you do an "active" transaction.  Let's imagine
> this case:
>
> start transaction
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x11);
> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);
> end transaction
>
> start transaction
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x11);
> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);
> end transaction
>
> In other words the client does the same sequence twice in a row with
> no change in data.  After the first set I think you'd be fine.  ...but
> after the second set you'll be in trouble.  That's because none of the
> calls in the second set would cause the "dirty" to be set.  ...but for
> "active only" calls we don't have any sort of cache--we just fire it
> off.  When we fire off the active only we'll clobber the sleep/wake
> TCS.  ...and then nothing will write them again because the cache
> isn't dirty.
I agree with the above scenario, but I don't see a reason why an rpmh client would send the same request twice.

Let me explain...

While the first request is a proper one (already handled in the rpmh driver), the second is just a duplicate
of the first without any change.

When this duplicate request is triggered, the resource may be in its own low power mode. When the
extra/duplicate command is sent, the resource needs to come out of low power mode and apply the newly
requested level, but it is already at that level, so it will immediately ack back without doing any real
level transition, and it will again go back to sleep. So there can be a small power hit.

Also, for "ACTIVE" requests we need to handle this unnecessary ack interrupt at the CPU, so the CPU
(if it is in some low power mode that this interrupt is affined to) needs to wake up and
handle the interrupt, again taking a possible power hit from the CPU's point of view.

What's more?

While this whole unnecessary ping-pong happens in HW and SW, the client may be waiting if it is a blocking call.

An rpmh client is expected to "aggregate" and finally do an rpmh transaction "only if" the
aggregated final level differs from what the resource already has.

If clients are not doing this, then IMO this is something to be addressed on the client side.

>
>>     Now if the re-purposed TCS is still in use in HW (transaction in progress), we still have
>>     drv->tcs_in_use set. So rpmh_rsc_invalidate() (invoked from rpmh_flush()) will keep
>>     returning -EAGAIN until that TCS becomes free to use, and then it goes ahead to finish its job.
> Ah, interesting.  I still think you have problems I pointed out in
> another response because you never enable interrupts for the "WAKE
> TCS", but I can see how this could be made to work.  In this case
> "async" becomes a little silly here because the flush will essentially
> be forced to wait until the transaction is totally done (so the TCS is
> free again), but it should be able to work.  
I agree, but I guess this is an expected hit with non-OSI, where rpmh_flush() is done immediately.
> This might actually point
> out something that needs to be changed in my "clean up" series since I
> guess "tcs_in_use" could sometimes be present in a wake TCS now.
Yes, this still needs to be kept.
>
>
>>>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>>>> index e278fc1..b6391e1 100644
>>>> --- a/drivers/soc/qcom/rpmh-rsc.c
>>>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>>>> @@ -61,6 +61,8 @@
>>>>  #define CMD_STATUS_ISSUED              BIT(8)
>>>>  #define CMD_STATUS_COMPL               BIT(16)
>>>>
>>>> +#define FLUSH_DIRTY                    1
>>>> +
>>>>  static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id)
>>>>  {
>>>>         return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
>>>> @@ -670,13 +672,15 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
>>>>         INIT_LIST_HEAD(&drv->client.cache);
>>>>         INIT_LIST_HEAD(&drv->client.batch_cache);
>>>>
>>>> +       drv->client.flush_dirty = device_get_match_data(&pdev->dev);
>>>> +
>>>>         dev_set_drvdata(&pdev->dev, drv);
>>>>
>>>>         return devm_of_platform_populate(&pdev->dev);
>>>>  }
>>>>
>>>>  static const struct of_device_id rpmh_drv_match[] = {
>>>> -       { .compatible = "qcom,rpmh-rsc", },
>>>> +       { .compatible = "qcom,rpmh-rsc", .data = (void *)FLUSH_DIRTY },
>>> Ick.  This is just confusing.  IMO better to set
>>> 'drv->client.flush_dirty = true' directly in probe with a comment
>>> saying that it could be removed if we had OSI.
>> Done.
>> I will keep this bit earlier in probe with a comment, so later, if we detect the RSC to be in hierarchy
>> from [1], we can override this to 0 within rpmh_probe_power_domain().
>>
>> [1] https://patchwork.kernel.org/patch/11391229/
> I don't really understand, but maybe it'll be obvious when I see the code.
okay.
>
>
>
>>> ...and while you're at it, why not fire off a separate patch (not in
>>> your series) adding the stub to 'include/linux/psci.h'.  Then when we
>>> revisit this in a year it'll be there and it'll be super easy to set
>>> the value properly.
>> With the above approach of setting it in probe, the PSCI change won't be required.
>>
>> It will be simpler, cleaner and without any resistance from the PSCI perspective.
>>
>>>>         { }
>>>>  };
>>>>
>>>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>>>> index 5bed8f4..9d40209 100644
>>>> --- a/drivers/soc/qcom/rpmh.c
>>>> +++ b/drivers/soc/qcom/rpmh.c
>>>> @@ -297,12 +297,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>>>>  {
>>>>         struct batch_cache_req *req;
>>>>         const struct rpmh_request *rpm_msg;
>>>> -       unsigned long flags;
>>>>         int ret = 0;
>>>>         int i;
>>>>
>>>>         /* Send Sleep/Wake requests to the controller, expect no response */
>>>> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>>>         list_for_each_entry(req, &ctrlr->batch_cache, list) {
>>>>                 for (i = 0; i < req->count; i++) {
>>>>                         rpm_msg = req->rpm_msgs + i;
>>>> @@ -312,7 +310,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>>>>                                 break;
>>>>                 }
>>>>         }
>>>> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>>>
>>>>         return ret;
>>>>  }
>>>> @@ -433,16 +430,63 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>>>>  }
>>>>
>>>>  /**
>>>> + * rpmh_start_transaction: Indicates start of rpmh transactions, this
>>>> + * must be ended by invoking rpmh_end_transaction().
>>>> + *
>>>> + * @dev: the device making the request
>>>> + */
>>>> +void rpmh_start_transaction(const struct device *dev)
>>>> +{
>>>> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
>>>> +       unsigned long flags;
>>>> +
>>>> +       if (!ctrlr->flush_dirty)
>>>> +               return;
>>>> +
>>>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>>> +       ctrlr->active_clients++;
>>> Wouldn't hurt to have something like:
>>>
>>> /*
>>>  * Detect likely leak; we shouldn't have 1000
>>>  * people making in-flight changes at the same time.
>>>  */
>>> WARN_ON(ctrlr->active_clients > 1000)
>> Not a necessary change.
> Yes, but it will catch buggy clients much more quickly.  What are the downsides?
IMO, it's an unnecessary warning, and with an arbitrary number (1000) at that.
rpmh clients are not expected to bombard it with thousands of requests; as I explained
above, they need to aggregate and make an rpmh request only when a real level transition
is needed.

There is also already a message (in rpmh_rsc_send_data()) to tell if all TCSes are occupied,
and rpmh will retry sending the pending request: "TCS Busy, retrying RPMH message send"
>
>
>>>> +       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>>> +}
>>>> +EXPORT_SYMBOL(rpmh_start_transaction);
>>>> +
>>>> +/**
>>>> + * rpmh_end_transaction: Indicates end of rpmh transactions. All dirty data
>>>> + * in cache can be flushed immediately when ctrlr->flush_dirty is set
>>>> + *
>>>> + * @dev: the device making the request
>>>> + *
>>>> + * Return: 0 on success, error number otherwise.
>>>> + */
>>>> +int rpmh_end_transaction(const struct device *dev)
>>>> +{
>>>> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
>>>> +       unsigned long flags;
>>>> +       int ret = 0;
>>>> +
>>>> +       if (!ctrlr->flush_dirty)
>>>> +               return ret;
>>>> +
>>>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>> WARN_ON(!active_clients);
>> Why? When active_clients becomes zero, we want to finally call rpmh_flush(); I don't see a reason to warn and then flush.
>>
>> Or do you want a check that the client really called rpmh_start_transaction() before calling rpmh_end_transaction(), so that when we do
>> ctrlr->active_clients--;
>>
>> it shouldn't go negative at the end? In that case let me know and I will make those changes.
> Right, I want to warn on underflow.  AKA:
>
> WARN_ON(!ctrlr->active_clients);
> ctrlr->active_clients--;
>
> Generally it's handy to detect mismatches of start/end.  It could be
> that someone accidentally starts twice and ends once.  Starts zero
> times and ends once.  Starts once and ends twice.  All of these are
> interesting cases to warn about.
>
>
>
> -Doug

Agreed, I will address this.

Thanks,
Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions
  2020-03-09  9:30 ` [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions Maulik Shah
  2020-03-09 23:44   ` Doug Anderson
@ 2020-03-11 23:02   ` Doug Anderson
  1 sibling, 0 replies; 24+ messages in thread
From: Doug Anderson @ 2020-03-11 23:02 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao, Taniya Das, Odelu Kukatla, Kiran Gunda, Sibi Sankar

Hi,

On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> --- a/drivers/interconnect/qcom/bcm-voter.c
> +++ b/drivers/interconnect/qcom/bcm-voter.c
> @@ -263,7 +263,9 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
>         tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_AMC, cmds, commit_idx);
>
>         if (!commit_idx[0])
> -               goto out;
> +               goto end;
> +
> +       rpmh_start_transaction(voter-dev);
>
>         ret = rpmh_invalidate(voter->dev);
>         if (ret) {
> @@ -312,12 +314,15 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
>         tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_SLEEP, cmds, commit_idx);
>
>         ret = rpmh_write_batch(voter->dev, RPMH_SLEEP_STATE, cmds, commit_idx);
> -       if (ret) {
> +       if (ret)
>                 pr_err("Error sending SLEEP RPMH requests (%d)\n", ret);
> -               goto out;
> -       }
>
>  out:
> +       ret = rpmh_end_transaction(voter-dev);

One last note is that your code doesn't actually compile for me since
you need "voter->dev" here (and in the start case), not "voter-dev".

-Doug


* Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-11  6:36         ` Maulik Shah
@ 2020-03-11 23:06           ` Doug Anderson
       [not found]             ` <5a5274ac-41f4-b06d-ff49-c00cef67aa7f@codeaurora.org>
  0 siblings, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-11 23:06 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On Tue, Mar 10, 2020 at 11:36 PM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> Hi,
>
> On 3/10/2020 9:16 PM, Doug Anderson wrote:
> > Hi,
> >
> > On Tue, Mar 10, 2020 at 4:19 AM Maulik Shah <mkshah@codeaurora.org> wrote:
> >>
> >> On 3/10/2020 5:13 AM, Doug Anderson wrote:
> >>> Hi,
> >>>
> >>> On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
> >>>> Add changes to invoke rpmh_flush() from within the cache_lock when the data in
> >>>> the cache is dirty.
> >>>>
> >>>> Introduce two new APIs for this. Clients can use rpmh_start_transaction()
> >>>> before any rpmh transaction; once done, invoke rpmh_end_transaction(), which
> >>>> internally invokes rpmh_flush() if the cache has become dirty.
> >>>>
> >>>> Add support to control this with flush_dirty flag.
> >>>>
> >>>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> >>>> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
> >>>> ---
> >>>>  drivers/soc/qcom/rpmh-internal.h |  4 +++
> >>>>  drivers/soc/qcom/rpmh-rsc.c      |  6 +++-
> >>>>  drivers/soc/qcom/rpmh.c          | 64 ++++++++++++++++++++++++++++++++--------
> >>>>  include/soc/qcom/rpmh.h          | 10 +++++++
> >>>>  4 files changed, 71 insertions(+), 13 deletions(-)
> >>> As mentioned previously but not addressed [3], I believe your series
> >>> breaks things if there are zero ACTIVE TCSs and you're using the
> >>> immediate-flush solution.  Specifically any attempt to set something's
> >>> "active" state will clobber the sleep/wake.  I believe this is hard to
> >>> fix, especially if you want rpmh_write_async() to work properly and
> >>> need to be robust to the last man going down while rpmh_write_async()
> >>> is running but hasn't finished.  My suggestion was to consider it to
> >>> be an error at probe time for now.
> >>>
> >>> Actually, though, I'd be super surprised if the "active == 0" case
> >>> works anyway.  Aside from subtle problems of not handling -EAGAIN (see
> >>> another previous message that you didn't respond to [2]), I think
> >>> you'll also get failures because you never enable interrupts in
> >>> RSC_DRV_IRQ_ENABLE for anything other than the ACTIVE_TCS.  Thus
> >>> you'll never get interrupts saying when your transactions on the
> >>> borrowed "wake" TCS finish.
> >> No, it shouldn't have an effect even with "non-OSI-mode + 0 ACTIVE TCS".
> >>
> >> I just replied on v9; pasting the same on v13 as well.
> >>
> >> After taking your suggestion to do rpmh start/end transaction() in v13, rpmh_end_transaction()
> >> invokes rpmh_flush() only for the last client, and by this time all of the rpmh_write()
> >> and rpmh_write_batch() calls are expected to be already "finished", as the client first waits for them
> >> to finish and only then invokes end.
> >>
> >> So the driver is good to handle rpmh_write() and rpmh_write_batch() calls.
> > Ah, right.  In the previous version of the patch it was a problem
> > because you flushed in cache_rpm_request() and then clobbered it right
> > away in __rpmh_write() when you did rpmh_rsc_send_data().  With v13
> > that is not the case anymore.  So this case should be OK.
> >
> >
> >> Regarding the rpmh_write_async() call, which is a fire-and-forget request from SW: the client driver
> >> may immediately invoke rpmh_end_transaction() after this.
> >>
> >> This case is also handled properly…
> >> Let's again take an example for understanding this...
> >>
> >> 1.    The client invokes rpmh_write_async() to send ACTIVE cmds for targets which have zero ACTIVE TCS.
> >>
> >>     The rpmh driver re-purposes one of the SLEEP/WAKE TCSes to use as ACTIVE; internally this also sets
> >>     drv->tcs_in_use to true for the respective SLEEP/WAKE TCS.
> >>
> >> 2.    The client now, without waiting for the above to finish, goes ahead and invokes rpmh_end_transaction(),
> >>     which calls rpmh_flush() (in case the cache became dirty).
> > I guess we'd have to confirm that there's no way for the cache to
> > _not_ become dirty but you do an "active" transaction.  Let's imagine
> > this case:
> >
> > start transaction
> > rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> > rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x11);
> > rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);
> > end transaction
> >
> > start transaction
> > rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> > rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x11);
> > rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);
> > end transaction
> >
> > In other words the client does the same sequence twice in a row with
> > no change in data.  After the first set I think you'd be fine.  ...but
> > after the second set you'll be in trouble.  That's because none of the
> > calls in the second set would cause the "dirty" to be set.  ...but for
> > "active only" calls we don't have any sort of cache--we just fire it
> > off.  When we fire off the active only we'll clobber the sleep/wake
> > TCS.  ...and then nothing will write them again because the cache
> > isn't dirty.
> I agree with the above scenario, but I don't see a reason why an rpmh client would send the same request twice.
>
> Let me explain...
>
> While the first request is a proper one (already handled in the rpmh driver), the second is just a duplicate
> of the first without any change.
>
> When this duplicate request is triggered, the resource may be in its own low power mode. When the
> extra/duplicate command is sent, the resource needs to come out of low power mode and apply the newly
> requested level, but it is already at that level, so it will immediately ack back without doing any real
> level transition, and it will again go back to sleep. So there can be a small power hit.
>
> Also, for "ACTIVE" requests we need to handle this unnecessary ack interrupt at the CPU, so the CPU
> (if it is in some low power mode that this interrupt is affined to) needs to wake up and
> handle the interrupt, again taking a possible power hit from the CPU's point of view.
>
> What's more?
>
> While this whole unnecessary ping-pong happens in HW and SW, the client may be waiting if it is a blocking call.
>
> An rpmh client is expected to "aggregate" and finally do an rpmh transaction "only if" the
> aggregated final level differs from what the resource already has.
>
> If clients are not doing this, then IMO this is something to be addressed on the client side.

It feels like "rpmh.c" needs to add "active_val" to its 'struct
cache_req' or truly enforce that "active_val == wake_val" all the time
(in other words if someone sets "wake_val" in rpmh_write() /
rpmh_write_async() you should also set "active_val").  Now you can
skip the call to rpmh-rsc if the active doesn't change and the problem
is solved.

Specifically I wouldn't trust all callers of rpmh_write() /
rpmh_write_async() to never send the same data.  If it were just a
speed/power optimization then sure, you could trust them, but this is
for correctness.
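That bookkeeping can be sketched as follows. This is a hypothetical, simplified model of rpmh.c's 'struct cache_req' with the proposed active_val field added; names are illustrative and locking is elided, so it is a sketch of the idea rather than the real driver code:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for rpmh.c's struct cache_req, extended with the
 * proposed active_val field (hypothetical; not in the current driver). */
struct cache_req {
	unsigned int addr;
	unsigned int sleep_val;
	unsigned int wake_val;
	unsigned int active_val;	/* proposed: last active value sent */
};

/*
 * Returns true only if the active request must really be sent to
 * rpmh-rsc.  A duplicate active value is swallowed here, so a repeated
 * client sequence can no longer clobber a borrowed sleep/wake TCS.
 */
static bool active_send_needed(struct cache_req *req, unsigned int data)
{
	if (req->active_val == data)
		return false;	/* no change: skip the rpmh-rsc call */
	req->active_val = data;
	return true;
}
```

With this in place, the second of two identical transactions would be dropped before ever touching a TCS, regardless of how well-behaved the client is.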


> >>     Now if re-purposed TCS is still in use in HW (transaction in progress), we still have
> >>     drv->tcs_in_use set. So the rpmh_rsc_invalidate() (invoked from rpmh_flush()) will keep on
> >>     returning -EAGAIN until that TCS becomes free to use and then goes ahead to finish its job.
> > Ah, interesting.  I still think you have problems I pointed out in
> > another response because you never enable interrupts for the "WAKE
> > TCS", but I can see how this could be made to work.  In this case
> > "async" becomes a little silly here because the flush will essentially
> > be forced to wait until the transaction is totally done (so the TCS is
> > free again), but it should be able to work.
> Agree, but I guess this is an expected hit with non-OSI, since we do rpmh_flush() immediately.

Right, though the hit is much much more if there is no active TCS.
Said another way: if there is an active TCS then non-OSI mode just
causes a bunch of extra register writes.  ...but if there is no active
TCS then non-OSI mode essentially makes rpmh_write_async() useless.

Hrmm, thinking about this again, though, I'm still not convinced I
understand what prevents the firmware from triggering "sleep mode"
while the sleep/wake TCS is being borrowed for an active-only
transaction.  If we're sitting in rpmh_write() and sleeping in
wait_for_completion_timeout() couldn't the system go idle and trigger
sleep mode?  In OSI-mode (with last man down) you'll always have a
rpmh_flush() called by the last man down so you know you can make sure
you're in a consistent state (one final flush and no more active-only
transactions will happen).  Without last man down you have to assume
it can happen at any time don't you?

...so I guess I'll go back to asserting that zero-active-TCS is
incompatible with non-OSI unless you have some way to prevent the
sleep mode from being triggered while you've borrowed the wake TCS.


> > This might actually point
> > out something that needs to be changed in my "clean up" series since I
> > guess "tcs_in_use" could sometimes be present in a wake TCS now.
> Yes, this check still needs to be kept.

As I've looked at this more, I now believe that "tcs_in_use" is not
sufficient for that case.  Specifically nothing prevents another
thread writing the sleep/wake TCS right after it was invalidated but
before the active-only command is programmed into it.  Specifically:

- write sleep/wake
- write active only
-> sees zero active TCSes and invalidates sleep/wake
-> another thread comes in and writes sleep/wake
   ...NOTE: "tcs_in_use" isn't updated for sleep/wake
-> the thread writing active-only will then program active-only
   atop the sleep/wake requests.

Maybe it's not a huge deal in the "OSI" case because you only ever
write sleep/wake in the last man down and there can be no new active
transactions when you're doing this.  ...but it seems like it'd be a
problem for non-OSI.
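The interleaving above can be replayed single-threaded as a model. The tcs[] array and the helper names below are illustrative stand-ins for the driver's real structures, collapsed to just enough state to show the clobbering:

```c
#include <assert.h>
#include <string.h>

#define NUM_TCS 2

/* Stand-in for the sleep/wake TCS contents: 0 = empty slot, otherwise
 * the address programmed into the TCS. */
static unsigned int tcs[NUM_TCS];

static void write_sleep_wake(unsigned int addr)
{
	tcs[0] = addr;			/* sleep/wake request lands in slot 0 */
}

static void invalidate_sleep_wake(void)
{
	memset(tcs, 0, sizeof(tcs));	/* borrow the TCS for active use */
}

static void write_active_borrowed(unsigned int addr)
{
	tcs[0] = addr;			/* active cmd reuses the same slot */
}
```

Replaying the sequence (invalidate, then a second thread's sleep/wake write slipping in, then the active-only write) leaves only the active command in the TCS; the second thread's request is silently lost, because nothing marks the slot in use between the invalidate and the active write.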

This whole "no active TCS" is really quite a mess.  Given how broken
it seems to me it almost feels like we should remove "no active TCS"
from the driver for now and then re-add it in a later patch when we
can really validate everything.  I tried addressing this in my
rpmh-rsc cleanup series and every time I thought I had a good solution
I could find another way to break it.

Do you actually have the "no active TCS" case working on the current
code, or does it only work on some downstream variant of the driver?


> >>>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> >>>> index e278fc1..b6391e1 100644
> >>>> --- a/drivers/soc/qcom/rpmh-rsc.c
> >>>> +++ b/drivers/soc/qcom/rpmh-rsc.c
> >>>> @@ -61,6 +61,8 @@
> >>>>  #define CMD_STATUS_ISSUED              BIT(8)
> >>>>  #define CMD_STATUS_COMPL               BIT(16)
> >>>>
> >>>> +#define FLUSH_DIRTY                    1
> >>>> +
> >>>>  static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id)
> >>>>  {
> >>>>         return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
> >>>> @@ -670,13 +672,15 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
> >>>>         INIT_LIST_HEAD(&drv->client.cache);
> >>>>         INIT_LIST_HEAD(&drv->client.batch_cache);
> >>>>
> >>>> +       drv->client.flush_dirty = device_get_match_data(&pdev->dev);
> >>>> +
> >>>>         dev_set_drvdata(&pdev->dev, drv);
> >>>>
> >>>>         return devm_of_platform_populate(&pdev->dev);
> >>>>  }
> >>>>
> >>>>  static const struct of_device_id rpmh_drv_match[] = {
> >>>> -       { .compatible = "qcom,rpmh-rsc", },
> >>>> +       { .compatible = "qcom,rpmh-rsc", .data = (void *)FLUSH_DIRTY },
> >>> Ick.  This is just confusing.  IMO better to set
> >>> 'drv->client.flush_dirty = true' directly in probe with a comment
> >>> saying that it could be removed if we had OSI.
> >> Done.
> >> I will keep this bit earlier in probe with a comment, so later if we detect the rsc to be in hierarchy
> >> from [1], we can override this to be 0 within rpmh_probe_power_domain().
> >>
> >> [1] https://patchwork.kernel.org/patch/11391229/
> > I don't really understand, but maybe it'll be obvious when I see the code.
> okay.
> >
> >
> >
> >>> ...and while you're at it, why not fire off a separate patch (not in
> >>> your series) adding the stub to 'include/linux/psci.h'.  Then when we
> >>> revisit this in a year it'll be there and it'll be super easy to set
> >>> the value properly.
> >> With the above approach of setting it in probe accordingly, the PSCI change won't be required.
> >>
> >> It will be simpler, cleaner and without any resistance from the PSCI perspective.
> >>
> >>>>         { }
> >>>>  };
> >>>>
> >>>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> >>>> index 5bed8f4..9d40209 100644
> >>>> --- a/drivers/soc/qcom/rpmh.c
> >>>> +++ b/drivers/soc/qcom/rpmh.c
> >>>> @@ -297,12 +297,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
> >>>>  {
> >>>>         struct batch_cache_req *req;
> >>>>         const struct rpmh_request *rpm_msg;
> >>>> -       unsigned long flags;
> >>>>         int ret = 0;
> >>>>         int i;
> >>>>
> >>>>         /* Send Sleep/Wake requests to the controller, expect no response */
> >>>> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> >>>>         list_for_each_entry(req, &ctrlr->batch_cache, list) {
> >>>>                 for (i = 0; i < req->count; i++) {
> >>>>                         rpm_msg = req->rpm_msgs + i;
> >>>> @@ -312,7 +310,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
> >>>>                                 break;
> >>>>                 }
> >>>>         }
> >>>> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> >>>>
> >>>>         return ret;
> >>>>  }
> >>>> @@ -433,16 +430,63 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
> >>>>  }
> >>>>
> >>>>  /**
> >>>> + * rpmh_start_transaction: Indicates start of rpmh transactions, this
> >>>> + * must be ended by invoking rpmh_end_transaction().
> >>>> + *
> >>>> + * @dev: the device making the request
> >>>> + */
> >>>> +void rpmh_start_transaction(const struct device *dev)
> >>>> +{
> >>>> +       struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> >>>> +       unsigned long flags;
> >>>> +
> >>>> +       if (!ctrlr->flush_dirty)
> >>>> +               return;
> >>>> +
> >>>> +       spin_lock_irqsave(&ctrlr->cache_lock, flags);
> >>>> +       ctrlr->active_clients++;
> >>> Wouldn't hurt to have something like:
> >>>
> >>> /*
> >>>  * Detect likely leak; we shouldn't have 1000
> >>>  * people making in-flight changes at the same time.
> >>>  */
> >>> WARN_ON(ctrlr->active_clients > 1000)
> >> Not a necessary change.
> > Yes, but it will catch buggy clients much more quickly.  What are the downsides?
> IMO, it's an unnecessary warning, and with an arbitrary number (1000) at that.
> rpmh clients are not expected to bombard it with thousands of requests; as I explained
> above, they need to aggregate and make an rpmh request only when a real level transition
> is needed.

I'm not saying we should limit the total number of requests to 1000.
I'm saying that if there are 1000 active clients then that's a
problem.  Right now there are something like 4 clients.  It doesn't
matter how fast those clients are sending, active_clients will only be
at most 4 and that would only be if they were all running their code
at the exact same time.

I want to be able to quickly detect this type of bug:

start_transaction()
ret = rpmh_write()
if (ret)
  return ret;
ret = rpmh_write()
end_transaction()
return ret;

...in other words: someone has a code path where start_transaction()
is called but never end_transaction().  I'm proposing that if we ever
see the ridiculous value of 1000 active clients the only way it could
happen is if one of the clients started more times than they ended.


I guess maybe by the time there were 1000 it would be too late,
though, because we'd have skipped A LOT of flushes by then?  Maybe
instead we should add something where if RPMH is "dirty" for more than
a certain amount of time we print a warning?
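A user-space model of the proposed detection (in the kernel this would be a WARN_ON() inside rpmh_start_transaction(); the 1000 threshold is arbitrary, chosen only to be far above the ~4 real clients):

```c
#include <assert.h>
#include <stdbool.h>

static int active_clients;
static bool leak_warned;

/*
 * Model of rpmh_start_transaction()'s bookkeeping.  With only ~4 real
 * clients, active_clients can only ever reach 1000 if some code path
 * calls start without a matching end -- exactly the leak described.
 */
static void start_transaction_model(void)
{
	if (++active_clients > 1000)
		leak_warned = true;	/* would be WARN_ON() in the kernel */
}

static void end_transaction_model(void)
{
	--active_clients;
}
```

A balanced start/end pattern never trips the flag no matter how many total transactions run; only an unmatched start does.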


-Doug

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
       [not found]             ` <5a5274ac-41f4-b06d-ff49-c00cef67aa7f@codeaurora.org>
@ 2020-03-12 15:11               ` Doug Anderson
  2020-03-25 17:15                 ` Maulik Shah
  0 siblings, 1 reply; 24+ messages in thread
From: Doug Anderson @ 2020-03-12 15:11 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

Quoting below may look funny since you replied with HTML mail and all
old quoting was lost.  :(  I tried my best.

On Thu, Mar 12, 2020 at 3:32 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> > Specifically I wouldn't trust all callers of rpmh_write() /
> > rpmh_write_async() to never send the same data.  If it was just an
> > speed/power optimization then sure you could trust them, but this is
> > for correctness.
>
> Yes, we should trust callers not to send duplicate data.

I guess we'll see how this works out.  Since this only affects the
"zero active-only" case and I'm pretty sure that case has more
important issues, I'm OK w/ ignoring for now.


> > Hrmm, thinking about this again, though, I'm still not convinced I
> > understand what prevents the firmware from triggering "sleep mode"
> > while the sleep/wake TCS is being borrowed for an active-only
> > transaction.  If we're sitting in rpmh_write() and sleeping in
> > wait_for_completion_timeout() couldn't the system go idle and trigger
> > sleep mode?  In OSI-mode (with last man down) you'll always have a
> > rpmh_flush() called by the last man down so you know you can make sure
> > you're in a consistent state (one final flush and no more active-only
> > transactions will happen).  Without last man down you have to assume
> > it can happen at any time, don't you?
> >
> > ...so I guess I'll go back to asserting that zero-active-TCS is
> > incompatible with non-OSI unless you have some way to prevent the
> > sleep mode from being triggered while you've borrowed the wake TCS.
>
> I had a change in v4 to handle the same.
>
> Link: https://patchwork.kernel.org/patch/11366205/
>
> + /*
> + * RPMh domain can not be powered off when there is pending ACK for
> + * ACTIVE_TCS request. Exit when controller is busy.
> + */
>
> Before calling rpmh_flush() we check the ctrlr is idle (ensuring the
> tcs_is_free() check passes); this will ensure that the
> previous active transaction is completed before going ahead.
>
> I will add this check in v14.
>
> Since this currently checks for the ACTIVE TCS only, I will update it to check that the repurposed "WAKE TCS" is also free.

The difficulty isn't in adding a simple check, it's in adding a check
that is race free and handles locking properly.  Specifically, looking
at the v4 you pointed to, I see things like this:

  if (!rpmh_rsc_ctrlr_is_idle(drv))
    return -EBUSY;
  return rpmh_flush(&drv->client);

The rpmh_rsc_ctrlr_is_idle() grabs a spin lock implying that there
could be other people acting on RPMh at the same time (otherwise, why
do you even need locks?).  ...but when it returns the lock is
released.  Once the lock is dropped then other clients are free to
start using RPMH because nothing prevents them.  ...then you go ahead
and start flushing.

Said another way: due to the way you've structured locking in that
patch rpmh_rsc_ctrlr_is_idle() is inherently dangerous because it
returns an instantaneous state that may well have changed between the
spin_unlock_irqrestore() at the end of the function and when the
function returns.

You could, of course, fix this by requiring that the caller hold the
'drv->lock' for both the calls to rpmh_rsc_ctrlr_is_idle() and
rpmh_flush() (ignoring the fact the drv->lock is nominally part of
rpmh-rsc.c and not rpmh.c).  Now it would be safe from the problem I
described.  ...but now you get into a new problem.  If you ever hold
two locks you always need to make sure you grab them in the same order
any time you grab them both.  ...but in tcs_write() we first grab a
tcs_lock and _then_ drv->lock.  That means the "fix" of holding
drv->lock for both the calls to rpmh_rsc_ctrlr_is_idle() and
rpmh_flush() won't work because rpmh_flush() will need to grab a
tcs_lock.  Possibly we could make this work by eliminating the "tcs
lock" and just having the one big "drv->lock" protect everything (or
introducing a new "super lock" making the other two meaningless).  It
would certainly be easier to reason about...
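The "one big lock" alternative can be sketched like this, modeled with pthreads rather than the kernel's drv->lock and spin_lock_irqsave() (names and the -1/-EBUSY stand-in are illustrative):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t drv_lock = PTHREAD_MUTEX_INITIALIZER;
static int tcs_in_use;
static int flush_count;

/*
 * Idle check and flush performed under one lock: there is no window
 * between "is the controller idle?" and "start flushing" in which
 * another client can grab a TCS.  Returns 0 on success or -1 (an
 * -EBUSY stand-in) when the controller is busy.
 */
static int flush_if_idle(void)
{
	int ret = -1;

	pthread_mutex_lock(&drv_lock);
	if (!tcs_in_use) {
		flush_count++;		/* rpmh_flush() would run here */
		ret = 0;
	}
	pthread_mutex_unlock(&drv_lock);
	return ret;
}
```

Contrast this with the v4 shape, where the idle check drops the lock before the flush begins: the check there describes a state that may already be stale by the time rpmh_flush() runs.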

NOTE: the only way I'm able to reason about all the above things is
because I spent all the time to document what rpmh-rsc is doing and
what assumptions the various functions had [1].  It'd be great if that
could get a review.


> > This whole "no active TCS" is really quite a mess.  Given how broken
> > it seems to me it almost feels like we should remove "no active TCS"
> > from the driver for now and then re-add it in a later patch when we
> > can really validate everything.  I tried addressing this in my
> > rpmh-rsc cleanup series and every time I thought I had a good solution
> > I could find another way to break it.
> >
> > Do you actually have the "no active TCS" case working on the current
> > code, or does it only work on some downstream variant of the driver?
> >
> > It works fine on the downstream variant. Some checks are still needed, like the above from v4,
> >
> > since we do rpmh_flush() immediately for dirty caches now.

OK.  So I take it you haven't tested the "zero active" case with the
upstream code?  In theory it should be easy to test, right?  Just hack
the driver to pretend there are zero active TCSs?

Which SoCs in specific need the zero active TCSs?  We are spending a
lot of time talking about this and reviewing the code with this in
mind.  It adds a lot of complexity to the driver.  If nothing under
active development needs it I think we should do ourselves a favor and
remove it for now, then add it back later.  Otherwise this whole
process is just going to take a lot more time.


> > I'm not saying we should limit the total number of requests to 1000.
> > I'm saying that if there are 1000 active clients then that's a
> > problem.  Right now there are something like 4 clients.  It doesn't
> > matter how fast those clients are sending, active_clients will only be
> > at most 4 and that would only be if they were all running their code
> > at the exact same time.
> >
> > I want to be able to quickly detect this type of bug:
> >
> > start_transaction()
> > ret = rpmh_write()
> > if (ret)
> >   return ret;
> > ret = rpmh_write()
> > end_transaction()
> > return ret;
> >
> > ...in other words: someone has a code path where start_transaction()
> > is called but never end_transaction().  I'm proposing that if we ever
> > see the ridiculous value of 1000 active clients the only way it could
> > happen is if one of the clients started more times than they ended.
> >
> >
> > I guess maybe by the time there were 1000 it would be too late,
> > though, because we'd have skipped A LOT of flushes by then?  Maybe
> > instead we should add something where if RPMH is "ditry" for more than
> > a certain amount of time we put a warning?
>
> IMO, we should not add any such warning with any number.
> There are only 4 clients and it is unlikely there will be any new ones. For those 4 we should be able to ensure
> that they invoke end_transaction() if they have already done start_transaction(). Besides,
> the function description also says that "this must be ended by invoking rpmh_end_transaction()".
> I am ok to also add wording that "rpmh does not check this, so it is the caller's responsibility to
> end it".

I don't agree but I won't argue further.  If you want to leave out the
WARN() then so be it.

-Doug

[1] https://lore.kernel.org/r/20200311161104.RFT.v2.5.I52653eb85d7dc8981ee0dafcd0b6cc0f273e9425@changeid


* Re: [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes
  2020-03-09 23:42   ` Doug Anderson
@ 2020-03-13  8:53     ` Maulik Shah
  0 siblings, 0 replies; 24+ messages in thread
From: Maulik Shah @ 2020-03-13  8:53 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On 3/10/2020 5:12 AM, Doug Anderson wrote:
> Hi,
>
> On Mon, Mar 9, 2020 at 2:31 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>> Currently rpmh ctrlr dirty flag is set for all cases regardless of data
>> is really changed or not. Add changes to update dirty flag when data is
>> changed to newer values. Update dirty flag everytime when data in batch
>> cache is updated since rpmh_flush() may get invoked from any CPU instead
>> of only last CPU going to low power mode.
>>
>> Also move dirty flag updates to happen from within cache_lock and remove
>> unnecessary INIT_LIST_HEAD() call and a default case from switch.
>>
>> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
>> Reviewed-by: Evan Green <evgreen@chromium.org>
>> ---
>>  drivers/soc/qcom/rpmh.c | 19 +++++++++++--------
>>  1 file changed, 11 insertions(+), 8 deletions(-)
> Reviewed-by: Douglas Anderson <dianders@chromium.org>

Thanks for the review, Doug.

Hi Bjorn,

Can you please pull in the first 2 changes in the series?

While the others are still under discussion, pulling in these first 2 will help avoid sending them each time.

Thanks,

Maulik

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-12 15:11               ` Doug Anderson
@ 2020-03-25 17:15                 ` Maulik Shah
  2020-03-26 18:08                   ` Doug Anderson
  0 siblings, 1 reply; 24+ messages in thread
From: Maulik Shah @ 2020-03-25 17:15 UTC (permalink / raw)
  To: Doug Anderson
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On 3/12/2020 8:41 PM, Doug Anderson wrote:
> Hi,
>
> Quoting below may look funny since you replied with HTML mail and all
> old quoting was lost.  :(  I tried my best.
>
> On Thu, Mar 12, 2020 at 3:32 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>>> Specifically I wouldn't trust all callers of rpmh_write() /
>>> rpmh_write_async() to never send the same data.  If it were just a
>>> speed/power optimization then sure, you could trust them, but this is
>>> for correctness.
>> yes we should trust callers not to send duplicate data.
> I guess we'll see how this works out.  Since this only affects the
> "zero active-only" case and I'm pretty sure that case has more
> important issues, I'm OK w/ ignoring for now.
>
>
>>> Hrmm, thinking about this again, though, I'm still not convinced I
>>> understand what prevents the firmware from triggering "sleep mode"
>>> while the sleep/wake TCS is being borrowed for an active-only
>>> transaction.  If we're sitting in rpmh_write() and sleeping in
>>> wait_for_completion_timeout() couldn't the system go idle and trigger
>>> sleep mode?  In OSI-mode (with last man down) you'll always have a
>>> rpmh_flush() called by the last man down so you know you can make sure
>>> you're in a consistent state (one final flush and no more active-only
>>> transactions will happen).  Without last man down you have to assume
>>> it can happen at any time, don't you?
>>>
>>> ...so I guess I'll go back to asserting that zero-active-TCS is
>>> incompatible with non-OSI unless you have some way to prevent the
>>> sleep mode from being triggered while you've borrowed the wake TCS.
>> I had a change in v4 to handle the same.
>>
>> Link: https://patchwork.kernel.org/patch/11366205/
>>
>> + /*
>> + * RPMh domain can not be powered off when there is pending ACK for
>> + * ACTIVE_TCS request. Exit when controller is busy.
>> + */
>>
>> Before calling rpmh_flush() we check the ctrlr is idle (ensuring the
>> tcs_is_free() check passes); this will ensure that the
>> previous active transaction is completed before going ahead.
>>
>> I will add this check in v14.
>>
>> Since this currently checks for the ACTIVE TCS only, I will update it to check that the repurposed "WAKE TCS" is also free.
> The difficulty isn't in adding a simple check, it's in adding a check
> that is race free and handles locking properly.  Specifically looking
> at your the v4 you pointed to, I see things like this:
>
>   if (!rpmh_rsc_ctrlr_is_idle(drv))
>     return -EBUSY;
>   return rpmh_flush(&drv->client);
>
> The rpmh_rsc_ctrlr_is_idle() grabs a spin lock implying that there
> could be other people acting on RPMh at the same time (otherwise, why
> do you even need locks?).  ...but when it returns the lock is
> released.  Once the lock is dropped then other clients are free to
> start using RPMH because nothing prevents them.  ...then you go ahead
> and start flushing.
>
> Said another way: due to the way you've structured locking in that
> patch rpmh_rsc_ctrlr_is_idle() is inherently dangerous because it
> returns an instantaneous state that may well have changed between the
> spin_unlock_irqrestore() at the end of the function and when the
> function returns.
>
> You could, of course, fix this by requiring that the caller hold the
> 'drv->lock' for both the calls to rpmh_rsc_ctrlr_is_idle() and
> rpmh_flush() (ignoring the fact the drv->lock is nominally part of
> rpmh-rsc.c and not rpmh.c).  Now it would be safe from the problem I
> described.  ...but now you get into a new problem.  If you ever hold
> two locks you always need to make sure you grab them in the same order
> > any time you grab them both.  ...but in tcs_write() we first grab a
> tcs_lock and _then_ drv->lock.  That means the "fix" of holding
> drv->lock for both the calls to rpmh_rsc_ctrlr_is_idle() and
> rpmh_flush() won't work because rpmh_flush() will need to grab a
> tcs_lock.  Possibly we could make this work by eliminating the "tcs
> lock" and just having the one big "drv->lock" protect everything (or
> introducing a new "super lock" making the other two meaningless).  It
> would certainly be easier to reason about...
Thanks Doug.

Agreed, a simple check won't help here.

From the above discussions, summarizing 3 items that get impacted when using rpmh_start/end_transaction():

1. rpmh_write_async() becomes of little use, since drivers may need to wait for rpmh_flush() to finish
if caches become dirty in between.
2. It creates more pressure on the WAKE TCS when there is no dedicated ACTIVE TCS. Transactions are delayed
if rpmh_flush() is in progress holding the locks and a new request comes in to send active-only data.
3. rpmh_rsc_ctrlr_is_idle() needs locking if ANY CPU can be calling it; this may require reordering of
locks / increase the time for which locks are held during rpmh transactions. On the downstream variant we don't
have locking in this, since in OSI only the last CPU invokes it and it is the only one invoking this function.

Given the above impact, this approach is not as simple as I thought earlier. I have an alternate solution which
uses cpu_pm notifications and invokes rpmh_flush() for non-OSI targets. This approach should not impact the
above 3 items.

I will soon post v14 with this; testing is in progress.
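A rough, hypothetical model of that approach, ahead of the actual v14 code (the real implementation would register a cpu_pm notifier; the callback body and names here are only a guess at the shape):

```c
#include <assert.h>

/* Stand-ins for the kernel's cpu_pm notifier events. */
enum { CPU_PM_ENTER, CPU_PM_ENTER_FAILED, CPU_PM_EXIT };

static int cache_dirty = 1;
static int flush_count;

/* Stand-in for rpmh_flush(): writes sleep/wake data, clears dirty. */
static void rpmh_flush_model(void)
{
	flush_count++;
	cache_dirty = 0;
}

/*
 * Model of a cpu_pm notifier callback: every CPU entering a low power
 * mode flushes the caches if (and only if) they are dirty, so the
 * sleep/wake TCS contents stay consistent without needing a
 * "last man down" vote from OSI mode.
 */
static int rpmh_cpu_pm_callback(unsigned long action)
{
	if (action == CPU_PM_ENTER && cache_dirty)
		rpmh_flush_model();
	return 0;	/* NOTIFY_OK */
}
```

The dirty check keeps the common idle path cheap: a CPU that goes idle while nothing has changed pays nothing beyond the flag test.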

>
> NOTE: the only way I'm able to reason about all the above things is
> because I spent all the time to document what rpmh-rsc is doing and
> what assumptions the various functions had [1].  It'd be great if that
> could get a review.
Sure.
>
>
>>> This whole "no active TCS" is really quite a mess.  Given how broken
>>> it seems to me it almost feels like we should remove "no active TCS"
>>> from the driver for now and then re-add it in a later patch when we
>>> can really validate everything.  I tried addressing this in my
>>> rpmh-rsc cleanup series and every time I thought I had a good solution
>>> I could find another way to break it.
>>>
>>> Do you actually have the "no active TCS" case working on the current
>>> code, or does it only work on some downstream variant of the driver?
>>>
>>> It works fine on the downstream variant. Some checks are still needed, like the above from v4,
>>>
>>> since we do rpmh_flush() immediately for dirty caches now.
> OK.  So I take it you haven't tested the "zero active" case with the
> upstream code?  In theory it should be easy to test, right?  Just hack
> the driver to pretend there are zero active TCSs?

No, it won't work out this way. If we want to test the zero-active case, we need to pick up [2].

[2] also needs follow-up fixes to work. This change is also on my to-do
list to get merged. I will include a change at the end of the v14 series; it can help test this series
for zero active TCSes as well.

Note that it doesn't have any dependency on this series, and the current series can get merged
without [2].
>
> Which SoCs in specific need the zero active TCSs?  We are spending a
> lot of time talking about this and reviewing the code with this in
> mind.  It adds a lot of complexity to the driver.  If nothing under
> active development needs it I think we should do ourselves a favor and
> remove it for now, then add it back later.  Otherwise this whole
> process is just going to take a lot more time.
>
There are multiple SoCs with zero active TCSes in the downstream code, so we cannot remove it.

As I said above, we need [2] plus some fixes to have zero active TCS working fine in the upstream driver.

Thanks,
Maulik

>>> I'm not saying we should limit the total number of requests to 1000.
>>> I'm saying that if there are 1000 active clients then that's a
>>> problem.  Right now there are something like 4 clients.  It doesn't
>>> matter how fast those clients are sending, active_clients will only be
>>> at most 4 and that would only be if they were all running their code
>>> at the exact same time.
>>>
>>> I want to be able to quickly detect this type of bug:
>>>
>>> start_transaction()
>>> ret = rpmh_write()
>>> if (ret)
>>>   return ret;
>>> ret = rpmh_write()
>>> end_transaction()
>>> return ret;
>>>
>>> ...in other words: someone has a code path where start_transaction()
>>> is called but never end_transaction().  I'm proposing that if we ever
>>> see the ridiculous value of 1000 active clients the only way it could
>>> happen is if one of the clients started more times than they ended.
>>>
>>>
>>> I guess maybe by the time there were 1000 it would be too late,
>>> though, because we'd have skipped A LOT of flushes by then?  Maybe
>>> instead we should add something where if RPMH is "dirty" for more than
>>> a certain amount of time we print a warning?
>> IMO, we should not add any such warning with any number.
>> There are only 4 clients and it is unlikely there will be any new ones. For those 4 we should be able to ensure
>> that they invoke end_transaction() if they have already done start_transaction(). Besides,
>> the function description also says that "this must be ended by invoking rpmh_end_transaction()".
>> I am ok to also add wording that "rpmh does not check this, so it is the caller's responsibility to
>> end it".
> I don't agree but I won't argue further.  If you want to leave out the
> WARN() then so be it.
>
> -Doug
>
> [1] https://lore.kernel.org/r/20200311161104.RFT.v2.5.I52653eb85d7dc8981ee0dafcd0b6cc0f273e9425@changeid

[2] https://patchwork.kernel.org/patch/10818129/

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation


* Re: Re: [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
  2020-03-25 17:15                 ` Maulik Shah
@ 2020-03-26 18:08                   ` Doug Anderson
  0 siblings, 0 replies; 24+ messages in thread
From: Doug Anderson @ 2020-03-26 18:08 UTC (permalink / raw)
  To: Maulik Shah
  Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson,
	LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer,
	lsrao

Hi,

On Wed, Mar 25, 2020 at 10:16 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> Hi,
>
> On 3/12/2020 8:41 PM, Doug Anderson wrote:
> > Hi,
> >
> > Quoting below may look funny since you replied with HTML mail and all
> > old quoting was lost.  :(  I tried my best.
> >
> > On Thu, Mar 12, 2020 at 3:32 AM Maulik Shah <mkshah@codeaurora.org> wrote:
> >>> Specifically I wouldn't trust all callers of rpmh_write() /
> >>> rpmh_write_async() to never send the same data.  If it was just an
> >>> speed/power optimization then sure you could trust them, but this is
> >>> for correctness.
> >> yes we should trust callers not to send duplicate data.
> > I guess we'll see how this works out.  Since this only affects the
> > "zero active-only" case and I'm pretty sure that case has more
> > important issues, I'm OK w/ ignoring for now.
> >
> >
> >>> Hrmm, thinking about this again, though, I'm still not convinced I
> >>> understand what prevents the firmware from triggering "sleep mode"
> >>> while the sleep/wake TCS is being borrowed for an active-only
> >>> transaction.  If we're sitting in rpmh_write() and sleeping in
> >>> wait_for_completion_timeout() couldn't the system go idle and trigger
> >>> sleep mode?  In OSI-mode (with last man down) you'll always have a
> >>> rpmh_flush() called by the last man down so you know you can make sure
> >>> you're in a consistent state (one final flush and no more active-only
> >>> transactions will happen).  Without last man down you have to assume
> >>> it can happen at any time don't you?
> >>>
> >>> ...so I guess I'll go back to asserting that zero-active-TCS is
> >>> incompatible with non-OSI unless you have some way to prevent the
> >>> sleep mode from being triggered while you've borrowed the wake TCS.
> >> I had a change for this in v4 to handle the same.
> >>
> >> Link: https://patchwork.kernel.org/patch/11366205/
> >>
> >> + /*
> >> + * RPMh domain can not be powered off when there is pending ACK for
> >> + * ACTIVE_TCS request. Exit when controller is busy.
> >> + */
> >>
> >> Before calling rpmh_flush() we check that the ctrlr is idle (ensuring
> >> the tcs_is_free() check passes); this ensures that the previous active
> >> transaction has completed before going ahead.
> >>
> >> I will add this check in v14.
> >>
> >> Since this currently checks for ACTIVE TCS only, I will update it to
> >> check that the repurposed WAKE TCS is also free.
> > The difficulty isn't in adding a simple check, it's in adding a check
> > that is race free and handles locking properly.  Specifically, looking
> > at the v4 you pointed to, I see things like this:
> >
> >   if (!rpmh_rsc_ctrlr_is_idle(drv))
> >     return -EBUSY;
> >   return rpmh_flush(&drv->client);
> >
> > The rpmh_rsc_ctrlr_is_idle() grabs a spin lock implying that there
> > could be other people acting on RPMh at the same time (otherwise, why
> > do you even need locks?).  ...but when it returns the lock is
> > released.  Once the lock is dropped then other clients are free to
> > start using RPMH because nothing prevents them.  ...then you go ahead
> > and start flushing.
> >
> > Said another way: due to the way you've structured locking in that
> > patch rpmh_rsc_ctrlr_is_idle() is inherently dangerous because it
> > returns an instantaneous state that may well have changed between the
> > spin_unlock_irqrestore() at the end of the function and when the
> > function returns.
> >
> > You could, of course, fix this by requiring that the caller hold the
> > 'drv->lock' for both the calls to rpmh_rsc_ctrlr_is_idle() and
> > rpmh_flush() (ignoring the fact the drv->lock is nominally part of
> > rpmh-rsc.c and not rpmh.c).  Now it would be safe from the problem I
> > described.  ...but now you get into a new problem.  If you ever hold
> > two locks you always need to make sure you grab them in the same order
> > any time you grab them both.  ...but tcs_write() we first grab a
> > tcs_lock and _then_ drv->lock.  That means the "fix" of holding
> > drv->lock for both the calls to rpmh_rsc_ctrlr_is_idle() and
> > rpmh_flush() won't work because rpmh_flush() will need to grab a
> > tcs_lock.  Possibly we could make this work by eliminating the "tcs
> > lock" and just having the one big "drv->lock" protect everything (or
> > introducing a new "super lock" making the other two meaningless).  It
> > would certainly be easier to reason about...
> Thanks Doug.
>
> Agree, a simple check won't help here.
>
> From the above discussions, summarizing 3 items that get impacted when using rpmh_start/end_transaction():
>
> 1. rpmh_write_async() becomes of little use since drivers may need to wait for rpmh_flush() to finish
> if caches become dirty in between.

I think this is really just a problem if there are no dedicated ACTIVE
TCS.  I think rpmh_flush() is pretty quick normally because all it's
doing is writing to register space, not waiting for any transactions
to finish.  The reason it's bad if there are no dedicated ACTIVE TCS
is that now we have to block waiting for the active transaction to
finish.


> 2. It creates more pressure on the WAKE TCS when there is no dedicated ACTIVE TCS. Transactions are delayed
> if rpmh_flush() is in progress holding the locks and a new request comes in to send active-only data.
> 3. rpmh_rsc_ctrlr_is_idle() needs locking if ANY CPU can be calling it; this may require reordering of
> locks / increase the time for which locks are held during rpmh transactions. The downstream variant has no
> locking here since in OSI only the last CPU down invokes this function.
>
> Given the above impact, this approach is not as simple as I thought earlier. I have an alternate solution
> which uses cpu_pm_notifications() and invokes rpmh_flush() for non-OSI targets. This approach should not
> impact the above 3 items.

Grepping for "cpu_pm_notifications" finds nothing, but I think you
mean you're going to register for notifications with
register_pm_notifier().

OK.  I guess I'm a bit confused here, though.  Maybe you can help
clear up my understanding.  I thought that one of the things you were
trying to solve with all the "last man down" type solutions was to
handle when RPMH wanted to transition into its sleep mode without a
full system suspend.  I thought that the RPMH sleep mode ran sometimes
when all the CPUs were idle and we were pretty sure they were going to
be idle for a while.

If this whole time all you've needed is to run at suspend time then it
feels like we could have avoided a whole lot of complexity.  ...but
again, maybe I'm just misunderstanding.


> I will soon post v14 with this; testing is in progress.
>
> >
> > NOTE: the only way I'm able to reason about all the above things is
> > because I spent all the time to document what rpmh-rsc is doing and
> > what assumptions the various functions had [1].  It'd be great if that
> > could get a review.
> Sure.
> >
> >
> >>> This whole "no active TCS" is really quite a mess.  Given how broken
> >>> it seems to me it almost feels like we should remove "no active TCS"
> >>> from the driver for now and then re-add it in a later patch when we
> >>> can really validate everything.  I tried addressing this in my
> >>> rpmh-rsc cleanup series and every time I thought I had a good solution
> >>> I could find another way to break it.
> >>>
> >>> Do you actually have the "no active TCS" case working on the current
> >>> code, or does it only work on some downstream variant of the driver?
> >>>
> >> It works fine on the downstream variant. Some checks are still needed,
> >> like the above from v4, since we do rpmh_flush() immediately for dirty
> >> caches now.
> > OK.  So I take it you haven't tested the "zero active" case with the
> > upstream code?  In theory it should be easy to test, right?  Just hack
> > the driver to pretend there are zero active TCSs?
>
> No, it won't work out this way. If we want to test the zero active case, we need to pick up [2].
>
> [2] also needs follow-up fixes to work. Getting this change merged is also on my
> to-do list. I will include it at the end of the v14 series; it can help test this series
> for zero active TCS as well.

I would just provide a pointer to it in the description.  If it was
already there and I missed it, then sorry.  :(


> Note that it doesn't have any dependency with this series and current series can get merged
> without [2].

Ah, this was a patch I wasn't aware of.  I haven't had time to go scan
for patches that weren't pointed in my direction.  I'll go review it
now.  When I briefly looked at trying to solve this problem myself I
seem to remember it being harder to get all the locking right / races
fixed, so I'm a little surprised that patch is so short...  Maybe I
was overthinking...


> > Which SoCs in specific need the zero active TCSs?  We are spending a
> > lot of time talking about this and reviewing the code with this in
> > mind.  It adds a lot of complexity to the driver.  If nothing under
> > active development needs it I think we should do ourselves a favor and
> > remove it for now, then add it back later.  Otherwise this whole
> > process is just going to take a lot more time.
> >
> There are multiple SoCs with zero active TCSes in the downstream code, so we cannot remove it.
>
> As I said above, we need [2] plus some fixes to have zero active TCS working fine in the upstream driver.

If you have it working then no need to remove it.  ...but without that
patch it was clearly not working and it was adding a lot of complexity
to handle it.  In fact, this flushing patch series would have likely
been easy to get finalized / landed if we hadn't needed to deal with
the zero active TCS case.  It still feels like an option to say that
the "zero active TCS" case is only supported when you have OSI mode
unless you know of instances where we need that.


Hrm, I see you just posted v14 while I was writing this.  I guess I'll
go check that out now.  Maybe it will answer some of the questions I
had.


-Doug


end of thread, other threads:[~2020-03-26 18:08 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
2020-03-09  9:30 ` [PATCH v13 1/5] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
2020-03-09  9:30 ` [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
2020-03-09 23:42   ` Doug Anderson
2020-03-13  8:53     ` Maulik Shah
2020-03-09  9:30 ` [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Maulik Shah
2020-03-09 23:42   ` Doug Anderson
2020-03-10 11:09     ` Maulik Shah
2020-03-09  9:30 ` [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
2020-03-09 23:43   ` Doug Anderson
2020-03-10 11:19     ` Maulik Shah
2020-03-10 15:46       ` Doug Anderson
2020-03-11  6:36         ` Maulik Shah
2020-03-11 23:06           ` Doug Anderson
     [not found]             ` <5a5274ac-41f4-b06d-ff49-c00cef67aa7f@codeaurora.org>
2020-03-12 15:11               ` Doug Anderson
2020-03-25 17:15                 ` Maulik Shah
2020-03-26 18:08                   ` Doug Anderson
2020-03-09  9:30 ` [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions Maulik Shah
2020-03-09 23:44   ` Doug Anderson
2020-03-10 11:46     ` Maulik Shah
2020-03-10 15:46       ` Doug Anderson
2020-03-11 23:02   ` Doug Anderson
2020-03-09 23:42 ` [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Doug Anderson
2020-03-10 11:49   ` Maulik Shah
