From: Maulik Shah <mkshah@codeaurora.org>
To: swboyd@chromium.org, mka@chromium.org, evgreen@chromium.org,
	bjorn.andersson@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	agross@kernel.org, dianders@chromium.org, rnayak@codeaurora.org,
	ilina@codeaurora.org, lsrao@codeaurora.org,
	Maulik Shah <mkshah@codeaurora.org>
Subject: [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
Date: Mon,  9 Mar 2020 15:00:34 +0530	[thread overview]
Message-ID: <1583746236-13325-4-git-send-email-mkshah@codeaurora.org> (raw)
In-Reply-To: <1583746236-13325-1-git-send-email-mkshah@codeaurora.org>

TCSes may still contain previously programmed data when rpmh_flush() is
called. This stale data can trigger along with the newly flushed requests.

Fix this by invalidating the SLEEP and WAKE TCSes before the new data is
flushed.

With this, there is no need to invoke rpmh_rsc_invalidate() from
rpmh_invalidate().

Simplify rpmh_invalidate() by open coding invalidate_batch() into it.
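
As an illustration (not part of this patch), below is a minimal,
hypothetical client-side sketch of the intended usage: drop the cached
sleep/wake requests with rpmh_invalidate() and then cache a fresh vote,
which only reaches the TCSes later when rpmh_flush() runs. The helper
name, resource address and data value are placeholders, not real driver
code.

#include <linux/device.h>
#include <soc/qcom/rpmh.h>
#include <soc/qcom/tcs.h>

/* Hypothetical helper, for illustration only. */
static int example_refresh_wake_vote(const struct device *dev)
{
	struct tcs_cmd cmd = {
		.addr = 0x50000,	/* placeholder RPMh resource address */
		.data = 0x1,		/* placeholder vote value */
	};
	int ret;

	/* Drop the sleep/wake requests cached so far for this controller. */
	ret = rpmh_invalidate(dev);
	if (ret)
		return ret;

	/* Cache a new wake vote; it is written to a TCS at rpmh_flush() time. */
	return rpmh_write(dev, RPMH_WAKE_ONLY_STATE, &cmd, 1);
}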

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
---
 drivers/soc/qcom/rpmh.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 03630ae..5bed8f4 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -317,19 +317,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
 	return ret;
 }
 
-static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
-{
-	struct batch_cache_req *req, *tmp;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
-	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
-		kfree(req);
-	INIT_LIST_HEAD(&ctrlr->batch_cache);
-	ctrlr->dirty = true;
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
-}
-
 /**
  * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
  * batch to finish.
@@ -467,6 +454,11 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 		return 0;
 	}
 
+	/* Invalidate the TCSes first to avoid stale data */
+	do {
+		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
+	} while (ret == -EAGAIN);
+
 	/* First flush the cached batch requests */
 	ret = flush_batch(ctrlr);
 	if (ret)
@@ -503,19 +495,21 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
  *
  * @dev: The device making the request
  *
- * Invalidate the sleep and active values in the TCS blocks.
+ * Invalidate the sleep and wake values in batch_cache.
  */
 int rpmh_invalidate(const struct device *dev)
 {
 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
-	int ret;
-
-	invalidate_batch(ctrlr);
+	struct batch_cache_req *req, *tmp;
+	unsigned long flags;
 
-	do {
-		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-	} while (ret == -EAGAIN);
+	spin_lock_irqsave(&ctrlr->cache_lock, flags);
+	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
+		kfree(req);
+	INIT_LIST_HEAD(&ctrlr->batch_cache);
+	ctrlr->dirty = true;
+	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(rpmh_invalidate);
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

Thread overview: 24+ messages
2020-03-09  9:30 [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Maulik Shah
2020-03-09  9:30 ` [PATCH v13 1/5] arm64: dts: qcom: sc7180: Add cpuidle low power states Maulik Shah
2020-03-09  9:30 ` [PATCH v13 2/5] soc: qcom: rpmh: Update dirty flag only when data changes Maulik Shah
2020-03-09 23:42   ` Doug Anderson
2020-03-13  8:53     ` Maulik Shah
2020-03-09  9:30 ` Maulik Shah [this message]
2020-03-09 23:42   ` [PATCH v13 3/5] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data Doug Anderson
2020-03-10 11:09     ` Maulik Shah
2020-03-09  9:30 ` [PATCH v13 4/5] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches Maulik Shah
2020-03-09 23:43   ` Doug Anderson
2020-03-10 11:19     ` Maulik Shah
2020-03-10 15:46       ` Doug Anderson
2020-03-11  6:36         ` Maulik Shah
2020-03-11 23:06           ` Doug Anderson
     [not found]             ` <5a5274ac-41f4-b06d-ff49-c00cef67aa7f@codeaurora.org>
2020-03-12 15:11               ` Doug Anderson
2020-03-25 17:15                 ` Maulik Shah
2020-03-26 18:08                   ` Doug Anderson
2020-03-09  9:30 ` [PATCH v13 5/5] drivers: qcom: Update rpmh clients to use start and end transactions Maulik Shah
2020-03-09 23:44   ` Doug Anderson
2020-03-10 11:46     ` Maulik Shah
2020-03-10 15:46       ` Doug Anderson
2020-03-11 23:02   ` Doug Anderson
2020-03-09 23:42 ` [PATCH v13 0/4] Invoke rpmh_flush for non OSI targets Doug Anderson
2020-03-10 11:49   ` Maulik Shah
