linux-kernel.vger.kernel.org archive mirror
From: Kiwoong Kim <kwmad.kim@samsung.com>
To: linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org,
	alim.akhtar@samsung.com, avri.altman@wdc.com, jejb@linux.ibm.com,
	martin.petersen@oracle.com, beanhuo@micron.com,
	cang@codeaurora.org, adrian.hunter@intel.com, sc.suh@samsung.com,
	hy50.seo@samsung.com, sh425.lee@samsung.com,
	bhoon95.kim@samsung.com, vkumar.1997@samsung.com
Cc: Kiwoong Kim <kwmad.kim@samsung.com>
Subject: [PATCH v1] scsi: ufs: remove clk_scaling_lock when clkscaling isn't supported.
Date: Sat,  5 Feb 2022 16:39:20 +0900	[thread overview]
Message-ID: <1644046760-83345-1-git-send-email-kwmad.kim@samsung.com> (raw)
In-Reply-To: CGME20220205074128epcas2p40901c37a7328e825d8697f8d3269edba@epcas2p4.samsung.com

clk_scaling_lock prevents clock-scaling operations from running
concurrently with other operations that the scaling could affect.
This looks hardware specific: if the feature isn't supported, there is
no reason to prevent other functions, such as ufshcd_queuecommand()
and ufshcd_exec_dev_cmd(), from running concurrently.

So I add a clock-scaling support check at each point that takes
clk_scaling_lock.
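
For illustration only, the same check could be factored into a pair of
small helpers so each call site stays a one-liner. The helper names
below are hypothetical and not part of this patch; only the rwsem
hba->clk_scaling_lock and ufshcd_is_clkscaling_supported() come from
the existing driver:

static void ufshcd_clk_scaling_lock_shared(struct ufs_hba *hba)
{
	/* Take the rwsem for reading only when clock scaling is supported. */
	if (ufshcd_is_clkscaling_supported(hba))
		down_read(&hba->clk_scaling_lock);
}

static void ufshcd_clk_scaling_unlock_shared(struct ufs_hba *hba)
{
	/* Counterpart of the helper above; a no-op without clock scaling. */
	if (ufshcd_is_clkscaling_supported(hba))
		up_read(&hba->clk_scaling_lock);
}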

Signed-off-by: Kiwoong Kim <kwmad.kim@samsung.com>
---
 drivers/scsi/ufs/ufshcd.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 460d2b4..8471c90 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -2980,7 +2980,8 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
 	/* Protects use of hba->reserved_slot. */
 	lockdep_assert_held(&hba->dev_cmd.lock);
 
-	down_read(&hba->clk_scaling_lock);
+	if (ufshcd_is_clkscaling_supported(hba))
+		down_read(&hba->clk_scaling_lock);
 
 	lrbp = &hba->lrb[tag];
 	WARN_ON(lrbp->cmd);
@@ -2998,7 +2999,8 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
 				    (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
 
 out:
-	up_read(&hba->clk_scaling_lock);
+	if (ufshcd_is_clkscaling_supported(hba))
+		up_read(&hba->clk_scaling_lock);
 	return err;
 }
 
@@ -6014,7 +6016,8 @@ static void ufshcd_err_handling_prepare(struct ufs_hba *hba)
 		if (ufshcd_is_clkscaling_supported(hba) &&
 		    hba->clk_scaling.is_enabled)
 			ufshcd_suspend_clkscaling(hba);
-		ufshcd_clk_scaling_allow(hba, false);
+		if (ufshcd_is_clkscaling_supported(hba))
+			ufshcd_clk_scaling_allow(hba, false);
 	}
 	ufshcd_scsi_block_requests(hba);
 	/* Drain ufshcd_queuecommand() */
@@ -6247,7 +6250,8 @@ static void ufshcd_err_handler(struct work_struct *work)
 		 * Hold the scaling lock just in case dev cmds
 		 * are sent via bsg and/or sysfs.
 		 */
-		down_write(&hba->clk_scaling_lock);
+		if (ufshcd_is_clkscaling_supported(hba))
+			down_write(&hba->clk_scaling_lock);
 		hba->force_pmc = true;
 		pmc_err = ufshcd_config_pwr_mode(hba, &(hba->pwr_info));
 		if (pmc_err) {
@@ -6257,7 +6261,8 @@ static void ufshcd_err_handler(struct work_struct *work)
 		}
 		hba->force_pmc = false;
 		ufshcd_print_pwr_info(hba);
-		up_write(&hba->clk_scaling_lock);
+		if (ufshcd_is_clkscaling_supported(hba))
+			up_write(&hba->clk_scaling_lock);
 		spin_lock_irqsave(hba->host->host_lock, flags);
 	}
 
@@ -6753,7 +6758,8 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 	/* Protects use of hba->reserved_slot. */
 	lockdep_assert_held(&hba->dev_cmd.lock);
 
-	down_read(&hba->clk_scaling_lock);
+	if (ufshcd_is_clkscaling_supported(hba))
+		down_read(&hba->clk_scaling_lock);
 
 	lrbp = &hba->lrb[tag];
 	WARN_ON(lrbp->cmd);
@@ -6822,7 +6828,8 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 	ufshcd_add_query_upiu_trace(hba, err ? UFS_QUERY_ERR : UFS_QUERY_COMP,
 				    (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
 
-	up_read(&hba->clk_scaling_lock);
+	if (ufshcd_is_clkscaling_supported(hba))
+		up_read(&hba->clk_scaling_lock);
 	return err;
 }
 
-- 
2.7.4



Thread overview: 14+ messages
     [not found] <CGME20220205074128epcas2p40901c37a7328e825d8697f8d3269edba@epcas2p4.samsung.com>
2022-02-05  7:39 ` Kiwoong Kim [this message]
2022-02-06  8:20   ` [PATCH v1] scsi: ufs: remove clk_scaling_lock when clkscaling isn't supported Avri Altman
2022-02-11  2:15     ` Kiwoong Kim
2022-02-11 12:15       ` Adrian Hunter
2022-02-12  4:44         ` Kiwoong Kim
2022-02-14 14:31           ` Adrian Hunter
2022-02-15 11:00           ` Bean Huo
2022-02-15 17:09             ` Bart Van Assche
2022-02-17  8:12             ` Kiwoong Kim
2022-02-11 12:19       ` Avri Altman
2022-02-14 19:29   ` Bart Van Assche
2022-02-15  6:03     ` Kiwoong Kim
2022-02-15 17:15       ` Bart Van Assche
2022-02-17  8:15         ` Kiwoong Kim
