* [PATCH v4 0/5] ufs: Do not requeue while ungating the clock
@ 2023-05-29 20:26 Bart Van Assche
  2023-05-29 20:26 ` [PATCH v4 1/5] scsi: core: Rework scsi_host_block() Bart Van Assche
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Bart Van Assche @ 2023-05-29 20:26 UTC (permalink / raw)
  To: Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, Bart Van Assche

Hi Martin,

In the traces we recorded while testing zoned storage we noticed that UFS
commands are requeued while the clock is being ungated. Command requeueing
makes it harder than necessary to preserve the command order. Hence this
patch series that modifies the SCSI core and also the UFS driver such that
clock ungating does not trigger command requeueing.
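
As a rough sketch of how the pieces fit together once patches 2/5 and 3/5
are applied, a low-level driver opts into a sleeping queuecommand callback
through the new template flag. The "foo" driver below is hypothetical and
only illustrates the mechanism:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int foo_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
{
	/* May sleep here, e.g. to wait for clocks or power, before issuing cmd. */
	return 0;
}

static const struct scsi_host_template foo_sht = {
	.name			= "foo",
	.queuecommand		= foo_queuecommand,
	/* Tells scsi_mq_setup_tags() to set BLK_MQ_F_BLOCKING. */
	.queuecommand_may_block	= 1,
};

In the UFS case (patch 3/5) the flag is only set when clock gating is
supported: host->queuecommand_may_block = !!(hba->caps & UFSHCD_CAP_CLK_GATING).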

Please consider this patch series for the next merge window.

Thanks,

Bart.

Changes compared to v3:
- Added a patch that removes two duplicate declarations.

Changes compared to v2:
- Only enable BLK_MQ_F_BLOCKING if clock gating is supported.
- Introduce flag queuecommand_may_block in both the SCSI host and SCSI host
  template data structures.

Changes compared to v1:
- Dropped patch "scsi: ufs: core: Unexport ufshcd_hold() and ufshcd_release()".
- Removed a ufshcd_scsi_block_requests() / ufshcd_scsi_unblock_requests() pair
  from patch "scsi: ufs: Ungate the clock synchronously".

Bart Van Assche (5):
  scsi: core: Rework scsi_host_block()
  scsi: core: Support setting BLK_MQ_F_BLOCKING
  scsi: ufs: Conditionally enable the BLK_MQ_F_BLOCKING flag
  scsi: ufs: Declare ufshcd_{hold,release}() once
  scsi: ufs: Ungate the clock synchronously

 drivers/scsi/hosts.c             |  1 +
 drivers/scsi/scsi_lib.c          | 27 ++++++----
 drivers/ufs/core/ufs-sysfs.c     |  2 +-
 drivers/ufs/core/ufshcd-crypto.c |  2 +-
 drivers/ufs/core/ufshcd-priv.h   |  3 --
 drivers/ufs/core/ufshcd.c        | 87 ++++++++++----------------------
 include/scsi/scsi_host.h         |  6 +++
 include/ufs/ufshcd.h             |  2 +-
 8 files changed, 54 insertions(+), 76 deletions(-)



* [PATCH v4 1/5] scsi: core: Rework scsi_host_block()
  2023-05-29 20:26 [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Bart Van Assche
@ 2023-05-29 20:26 ` Bart Van Assche
  2023-05-29 20:26 ` [PATCH v4 2/5] scsi: core: Support setting BLK_MQ_F_BLOCKING Bart Van Assche
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Bart Van Assche @ 2023-05-29 20:26 UTC (permalink / raw)
  To: Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, Bart Van Assche,
	Mike Christie, Ming Lei, Ye Bin, Hannes Reinecke,
	James E.J. Bottomley

Make scsi_host_block() easier to read by converting it to the widely used
early-return style. See also commit f983622ae605 ("scsi: core: Avoid
calling synchronize_rcu() for each device in scsi_host_block()").

Reviewed-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Cc: Ye Bin <yebin10@huawei.com>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/scsi/scsi_lib.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 25489fbd94c6..5f29faa0560f 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2940,11 +2940,20 @@ scsi_target_unblock(struct device *dev, enum scsi_device_state new_state)
 }
 EXPORT_SYMBOL_GPL(scsi_target_unblock);
 
+/**
+ * scsi_host_block - Try to transition all logical units to the SDEV_BLOCK state
+ * @shost: device to block
+ *
+ * Pause SCSI command processing for all logical units associated with the SCSI
+ * host and wait until pending scsi_queue_rq() calls have finished.
+ *
+ * Returns zero if successful or a negative error code upon failure.
+ */
 int
 scsi_host_block(struct Scsi_Host *shost)
 {
 	struct scsi_device *sdev;
-	int ret = 0;
+	int ret;
 
 	/*
 	 * Call scsi_internal_device_block_nowait so we can avoid
@@ -2956,7 +2965,7 @@ scsi_host_block(struct Scsi_Host *shost)
 		mutex_unlock(&sdev->state_mutex);
 		if (ret) {
 			scsi_device_put(sdev);
-			break;
+			return ret;
 		}
 	}
 
@@ -2966,10 +2975,9 @@ scsi_host_block(struct Scsi_Host *shost)
 	 */
 	WARN_ON_ONCE(shost->tag_set.flags & BLK_MQ_F_BLOCKING);
 
-	if (!ret)
-		synchronize_rcu();
+	synchronize_rcu();
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(scsi_host_block);
 


* [PATCH v4 2/5] scsi: core: Support setting BLK_MQ_F_BLOCKING
  2023-05-29 20:26 [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Bart Van Assche
  2023-05-29 20:26 ` [PATCH v4 1/5] scsi: core: Rework scsi_host_block() Bart Van Assche
@ 2023-05-29 20:26 ` Bart Van Assche
  2023-05-29 20:26 ` [PATCH v4 3/5] scsi: ufs: Conditionally enable the BLK_MQ_F_BLOCKING flag Bart Van Assche
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Bart Van Assche @ 2023-05-29 20:26 UTC (permalink / raw)
  To: Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, Bart Van Assche,
	Mike Christie, James E.J. Bottomley

Prepare for adding code in ufshcd_queuecommand() that may sleep. This
patch is similar to a patch posted last year by Mike Christie. See also
https://lore.kernel.org/all/20220308003957.123312-2-michael.christie@oracle.com/
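
One detail worth spelling out: with BLK_MQ_F_BLOCKING set, blk-mq dispatches
->queue_rq() under SRCU instead of RCU, so a bare synchronize_rcu() no longer
waits for in-flight scsi_queue_rq() calls, while blk_mq_wait_quiesce_done()
handles both cases. That is why scsi_host_block() switches to it below. A
minimal sketch (the helper name is made up for illustration):

#include <linux/blk-mq.h>
#include <scsi/scsi_host.h>

static void example_wait_for_queue_rq(struct Scsi_Host *shost)
{
	/* Correct whether or not the tag set uses BLK_MQ_F_BLOCKING. */
	blk_mq_wait_quiesce_done(&shost->tag_set);
}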

Cc: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/scsi/hosts.c     |  1 +
 drivers/scsi/scsi_lib.c  | 11 ++++-------
 include/scsi/scsi_host.h |  6 ++++++
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index f0bc8bbb3938..198edf03f929 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -441,6 +441,7 @@ struct Scsi_Host *scsi_host_alloc(const struct scsi_host_template *sht, int priv
 	shost->cmd_per_lun = sht->cmd_per_lun;
 	shost->no_write_same = sht->no_write_same;
 	shost->host_tagset = sht->host_tagset;
+	shost->queuecommand_may_block = sht->queuecommand_may_block;
 
 	if (shost_eh_deadline == -1 || !sht->eh_host_reset_handler)
 		shost->eh_deadline = -1;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 5f29faa0560f..e0b5d35e7949 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1981,6 +1981,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 	tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
 	tag_set->flags |=
 		BLK_ALLOC_POLICY_TO_MQ_FLAG(shost->hostt->tag_alloc_policy);
+	if (shost->queuecommand_may_block)
+		tag_set->flags |= BLK_MQ_F_BLOCKING;
 	tag_set->driver_data = shost;
 	if (shost->host_tagset)
 		tag_set->flags |= BLK_MQ_F_TAG_HCTX_SHARED;
@@ -2969,13 +2971,8 @@ scsi_host_block(struct Scsi_Host *shost)
 		}
 	}
 
-	/*
-	 * SCSI never enables blk-mq's BLK_MQ_F_BLOCKING flag so
-	 * calling synchronize_rcu() once is enough.
-	 */
-	WARN_ON_ONCE(shost->tag_set.flags & BLK_MQ_F_BLOCKING);
-
-	synchronize_rcu();
+	/* Wait for ongoing scsi_queue_rq() calls to finish. */
+	blk_mq_wait_quiesce_done(&shost->tag_set);
 
 	return 0;
 }
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 0f29799efa02..70b7475dcf56 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -458,6 +458,9 @@ struct scsi_host_template {
 	/* True if the host uses host-wide tagspace */
 	unsigned host_tagset:1;
 
+	/* The queuecommand callback may block. See also BLK_MQ_F_BLOCKING. */
+	unsigned queuecommand_may_block:1;
+
 	/*
 	 * Countdown for host blocking with no commands outstanding.
 	 */
@@ -653,6 +656,9 @@ struct Scsi_Host {
 	/* True if the host uses host-wide tagspace */
 	unsigned host_tagset:1;
 
+	/* The queuecommand callback may block. See also BLK_MQ_F_BLOCKING. */
+	unsigned queuecommand_may_block:1;
+
 	/* Host responded with short (<36 bytes) INQUIRY result */
 	unsigned short_inquiry:1;
 


* [PATCH v4 3/5] scsi: ufs: Conditionally enable the BLK_MQ_F_BLOCKING flag
  2023-05-29 20:26 [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Bart Van Assche
  2023-05-29 20:26 ` [PATCH v4 1/5] scsi: core: Rework scsi_host_block() Bart Van Assche
  2023-05-29 20:26 ` [PATCH v4 2/5] scsi: core: Support setting BLK_MQ_F_BLOCKING Bart Van Assche
@ 2023-05-29 20:26 ` Bart Van Assche
  2023-05-30  9:52   ` Bean Huo
  2023-05-29 20:26 ` [PATCH v4 4/5] scsi: ufs: Declare ufshcd_{hold,release}() once Bart Van Assche
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: Bart Van Assche @ 2023-05-29 20:26 UTC (permalink / raw)
  To: Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, Bart Van Assche,
	James E.J. Bottomley, Stanley Chu, Asutosh Das, Bean Huo,
	Ziqi Chen, Arthur Simchaev, Adrien Thierry

Prepare for adding code in ufshcd_queuecommand() that may sleep.

Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/ufs/core/ufshcd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index abe9a430cc37..c2d9109102f3 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -10187,6 +10187,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	host->max_channel = UFSHCD_MAX_CHANNEL;
 	host->unique_id = host->host_no;
 	host->max_cmd_len = UFS_CDB_SIZE;
+	host->queuecommand_may_block = !!(hba->caps & UFSHCD_CAP_CLK_GATING);
 
 	hba->max_pwr_info.is_valid = false;
 


* [PATCH v4 4/5] scsi: ufs: Declare ufshcd_{hold,release}() once
  2023-05-29 20:26 [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Bart Van Assche
                   ` (2 preceding siblings ...)
  2023-05-29 20:26 ` [PATCH v4 3/5] scsi: ufs: Conditionally enable the BLK_MQ_F_BLOCKING flag Bart Van Assche
@ 2023-05-29 20:26 ` Bart Van Assche
  2023-05-30  9:48   ` Adrian Hunter
  2023-05-29 20:26 ` [PATCH v4 5/5] scsi: ufs: Ungate the clock synchronously Bart Van Assche
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: Bart Van Assche @ 2023-05-29 20:26 UTC (permalink / raw)
  To: Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, Bart Van Assche,
	James E.J. Bottomley, Can Guo, Manivannan Sadhasivam,
	Asutosh Das, Arthur Simchaev, Jinyoung Choi, Bean Huo,
	Krzysztof Kozlowski, Avri Altman

ufshcd_hold() and ufshcd_release() are declared twice: once in
drivers/ufs/core/ufshcd-priv.h and a second time in include/ufs/ufshcd.h.
Remove the declarations from ufshcd-priv.h.

Fixes: dd11376b9f1b ("scsi: ufs: Split the drivers/scsi/ufs directory")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/ufs/core/ufshcd-priv.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
index d53b93c21a0c..8f58c2169398 100644
--- a/drivers/ufs/core/ufshcd-priv.h
+++ b/drivers/ufs/core/ufshcd-priv.h
@@ -84,9 +84,6 @@ unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
 int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
 			    u8 **buf, bool ascii);
 
-int ufshcd_hold(struct ufs_hba *hba, bool async);
-void ufshcd_release(struct ufs_hba *hba);
-
 int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
 
 int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,


* [PATCH v4 5/5] scsi: ufs: Ungate the clock synchronously
  2023-05-29 20:26 [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Bart Van Assche
                   ` (3 preceding siblings ...)
  2023-05-29 20:26 ` [PATCH v4 4/5] scsi: ufs: Declare ufshcd_{hold,release}() once Bart Van Assche
@ 2023-05-29 20:26 ` Bart Van Assche
  2023-05-30 11:10   ` Bean Huo
  2023-05-30 17:39   ` Bao D. Nguyen
       [not found] ` <CGME20230529202702epcas2p2ab96250338c67cef61f4bbd479c2c73b@epcms2p7>
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 14+ messages in thread
From: Bart Van Assche @ 2023-05-29 20:26 UTC (permalink / raw)
  To: Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, Bart Van Assche,
	James E.J. Bottomley, Bean Huo, Avri Altman, Jinyoung Choi,
	Peter Wang, Daniil Lunev, Stanley Chu, Asutosh Das, Ziqi Chen,
	Arthur Simchaev, Adrien Thierry, Can Guo, Manivannan Sadhasivam,
	Yoshihiro Shimoda, Eric Biggers, Keoseong Park

Ungating the clock asynchronously causes ufshcd_queuecommand() to return
SCSI_MLQUEUE_HOST_BUSY and hence causes commands to be requeued.  This is
suboptimal. Allow ufshcd_queuecommand() to sleep such that clock ungating
does not trigger command requeuing. Remove the ufshcd_scsi_block_requests()
and ufshcd_scsi_unblock_requests() calls because these are no longer
needed. The flush_work(&hba->clk_gating.ungate_work) call is sufficient to
make the SCSI core wait for clock ungating to complete.

Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/ufs/core/ufs-sysfs.c     |  2 +-
 drivers/ufs/core/ufshcd-crypto.c |  2 +-
 drivers/ufs/core/ufshcd.c        | 86 ++++++++++----------------------
 include/ufs/ufshcd.h             |  2 +-
 4 files changed, 30 insertions(+), 62 deletions(-)

diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
index 883f0e44b54e..cdf3d5f2b77b 100644
--- a/drivers/ufs/core/ufs-sysfs.c
+++ b/drivers/ufs/core/ufs-sysfs.c
@@ -168,7 +168,7 @@ static ssize_t auto_hibern8_show(struct device *dev,
 	}
 
 	pm_runtime_get_sync(hba->dev);
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	ahit = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER);
 	ufshcd_release(hba);
 	pm_runtime_put_sync(hba->dev);
diff --git a/drivers/ufs/core/ufshcd-crypto.c b/drivers/ufs/core/ufshcd-crypto.c
index 198360fe5e8e..f2c4422cab86 100644
--- a/drivers/ufs/core/ufshcd-crypto.c
+++ b/drivers/ufs/core/ufshcd-crypto.c
@@ -24,7 +24,7 @@ static int ufshcd_program_key(struct ufs_hba *hba,
 	u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
 	int err = 0;
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 
 	if (hba->vops && hba->vops->program_key) {
 		err = hba->vops->program_key(hba, cfg, slot);
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index c2d9109102f3..d9b6da33842a 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -1189,7 +1189,7 @@ static int ufshcd_wait_for_doorbell_clr(struct ufs_hba *hba,
 	bool timeout = false, do_last_check = false;
 	ktime_t start;
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	/*
 	 * Wait for all the outstanding tasks/transfer requests.
@@ -1310,7 +1310,7 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba, u64 timeout_us)
 	}
 
 	/* let's not get into low power until clock scaling is completed */
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 
 out:
 	return ret;
@@ -1640,7 +1640,7 @@ static ssize_t ufshcd_clkscale_enable_store(struct device *dev,
 		goto out;
 
 	ufshcd_rpm_get_sync(hba);
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 
 	hba->clk_scaling.is_enabled = value;
 
@@ -1723,7 +1723,7 @@ static void ufshcd_ungate_work(struct work_struct *work)
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	if (hba->clk_gating.state == CLKS_ON) {
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
-		goto unblock_reqs;
+		return;
 	}
 
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
@@ -1746,25 +1746,21 @@ static void ufshcd_ungate_work(struct work_struct *work)
 		}
 		hba->clk_gating.is_suspended = false;
 	}
-unblock_reqs:
-	ufshcd_scsi_unblock_requests(hba);
 }
 
 /**
  * ufshcd_hold - Enable clocks that were gated earlier due to ufshcd_release.
  * Also, exit from hibern8 mode and set the link as active.
  * @hba: per adapter instance
- * @async: This indicates whether caller should ungate clocks asynchronously.
  */
-int ufshcd_hold(struct ufs_hba *hba, bool async)
+void ufshcd_hold(struct ufs_hba *hba)
 {
-	int rc = 0;
 	bool flush_result;
 	unsigned long flags;
 
 	if (!ufshcd_is_clkgating_allowed(hba) ||
 	    !hba->clk_gating.is_initialized)
-		goto out;
+		return;
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	hba->clk_gating.active_reqs++;
 
@@ -1781,15 +1777,10 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
 		 */
 		if (ufshcd_can_hibern8_during_gating(hba) &&
 		    ufshcd_is_link_hibern8(hba)) {
-			if (async) {
-				rc = -EAGAIN;
-				hba->clk_gating.active_reqs--;
-				break;
-			}
 			spin_unlock_irqrestore(hba->host->host_lock, flags);
 			flush_result = flush_work(&hba->clk_gating.ungate_work);
 			if (hba->clk_gating.is_suspended && !flush_result)
-				goto out;
+				return;
 			spin_lock_irqsave(hba->host->host_lock, flags);
 			goto start;
 		}
@@ -1811,21 +1802,14 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
 		hba->clk_gating.state = REQ_CLKS_ON;
 		trace_ufshcd_clk_gating(dev_name(hba->dev),
 					hba->clk_gating.state);
-		if (queue_work(hba->clk_gating.clk_gating_workq,
-			       &hba->clk_gating.ungate_work))
-			ufshcd_scsi_block_requests(hba);
+		queue_work(hba->clk_gating.clk_gating_workq,
+			   &hba->clk_gating.ungate_work);
 		/*
 		 * fall through to check if we should wait for this
 		 * work to be done or not.
 		 */
 		fallthrough;
 	case REQ_CLKS_ON:
-		if (async) {
-			rc = -EAGAIN;
-			hba->clk_gating.active_reqs--;
-			break;
-		}
-
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
 		flush_work(&hba->clk_gating.ungate_work);
 		/* Make sure state is CLKS_ON before returning */
@@ -1837,8 +1821,6 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
 		break;
 	}
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
-out:
-	return rc;
 }
 EXPORT_SYMBOL_GPL(ufshcd_hold);
 
@@ -2070,7 +2052,7 @@ static void ufshcd_exit_clk_gating(struct ufs_hba *hba)
 	ufshcd_remove_clk_gating_sysfs(hba);
 
 	/* Ungate the clock if necessary. */
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	hba->clk_gating.is_initialized = false;
 	ufshcd_release(hba);
 
@@ -2468,7 +2450,7 @@ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
 	if (hba->quirks & UFSHCD_QUIRK_BROKEN_UIC_CMD)
 		return 0;
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	mutex_lock(&hba->uic_cmd_mutex);
 	ufshcd_add_delay_before_dme_cmd(hba);
 
@@ -2871,12 +2853,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 
 	WARN_ONCE(tag < 0 || tag >= hba->nutrs, "Invalid tag %d\n", tag);
 
-	/*
-	 * Allows the UFS error handler to wait for prior ufshcd_queuecommand()
-	 * calls.
-	 */
-	rcu_read_lock();
-
 	switch (hba->ufshcd_state) {
 	case UFSHCD_STATE_OPERATIONAL:
 		break;
@@ -2922,13 +2898,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 
 	hba->req_abort_count = 0;
 
-	err = ufshcd_hold(hba, true);
-	if (err) {
-		err = SCSI_MLQUEUE_HOST_BUSY;
-		goto out;
-	}
-	WARN_ON(ufshcd_is_clkgating_allowed(hba) &&
-		(hba->clk_gating.state != CLKS_ON));
+	ufshcd_hold(hba);
 
 	lrbp = &hba->lrb[tag];
 	lrbp->cmd = cmd;
@@ -2956,8 +2926,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	ufshcd_send_command(hba, tag, hwq);
 
 out:
-	rcu_read_unlock();
-
 	if (ufs_trigger_eh()) {
 		unsigned long flags;
 
@@ -3251,7 +3219,7 @@ int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
 
 	BUG_ON(!hba);
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	mutex_lock(&hba->dev_cmd.lock);
 	ufshcd_init_query(hba, &request, &response, opcode, idn, index,
 			selector);
@@ -3325,7 +3293,7 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
 		return -EINVAL;
 	}
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 
 	mutex_lock(&hba->dev_cmd.lock);
 	ufshcd_init_query(hba, &request, &response, opcode, idn, index,
@@ -3421,7 +3389,7 @@ static int __ufshcd_query_descriptor(struct ufs_hba *hba,
 		return -EINVAL;
 	}
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 
 	mutex_lock(&hba->dev_cmd.lock);
 	ufshcd_init_query(hba, &request, &response, opcode, idn, index,
@@ -4239,7 +4207,7 @@ int ufshcd_uic_change_pwr_mode(struct ufs_hba *hba, u8 mode)
 	uic_cmd.command = UIC_CMD_DME_SET;
 	uic_cmd.argument1 = UIC_ARG_MIB(PA_PWRMODE);
 	uic_cmd.argument3 = mode;
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	ret = ufshcd_uic_pwr_ctrl(hba, &uic_cmd);
 	ufshcd_release(hba);
 
@@ -4346,7 +4314,7 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit)
 	if (update &&
 	    !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) {
 		ufshcd_rpm_get_sync(hba);
-		ufshcd_hold(hba, false);
+		ufshcd_hold(hba);
 		ufshcd_auto_hibern8_enable(hba);
 		ufshcd_release(hba);
 		ufshcd_rpm_put_sync(hba);
@@ -4939,7 +4907,7 @@ static int ufshcd_verify_dev_init(struct ufs_hba *hba)
 	int err = 0;
 	int retries;
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	mutex_lock(&hba->dev_cmd.lock);
 	for (retries = NOP_OUT_RETRIES; retries > 0; retries--) {
 		err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_NOP,
@@ -6224,22 +6192,22 @@ static void ufshcd_err_handling_prepare(struct ufs_hba *hba)
 		ufshcd_setup_vreg(hba, true);
 		ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq);
 		ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq2);
-		ufshcd_hold(hba, false);
+		ufshcd_hold(hba);
 		if (!ufshcd_is_clkgating_allowed(hba))
 			ufshcd_setup_clocks(hba, true);
 		ufshcd_release(hba);
 		pm_op = hba->is_sys_suspended ? UFS_SYSTEM_PM : UFS_RUNTIME_PM;
 		ufshcd_vops_resume(hba, pm_op);
 	} else {
-		ufshcd_hold(hba, false);
+		ufshcd_hold(hba);
 		if (ufshcd_is_clkscaling_supported(hba) &&
 		    hba->clk_scaling.is_enabled)
 			ufshcd_suspend_clkscaling(hba);
 		ufshcd_clk_scaling_allow(hba, false);
 	}
 	ufshcd_scsi_block_requests(hba);
-	/* Drain ufshcd_queuecommand() */
-	synchronize_rcu();
+	/* Wait for ongoing ufshcd_queuecommand() calls to finish. */
+	blk_mq_wait_quiesce_done(&hba->host->tag_set);
 	cancel_work_sync(&hba->eeh_work);
 }
 
@@ -6884,7 +6852,7 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
 		return PTR_ERR(req);
 
 	req->end_io_data = &wait;
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 
 	spin_lock_irqsave(host->host_lock, flags);
 
@@ -7120,7 +7088,7 @@ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
 		cmd_type = DEV_CMD_TYPE_NOP;
 		fallthrough;
 	case UPIU_TRANSACTION_QUERY_REQ:
-		ufshcd_hold(hba, false);
+		ufshcd_hold(hba);
 		mutex_lock(&hba->dev_cmd.lock);
 		err = ufshcd_issue_devman_upiu_cmd(hba, req_upiu, rsp_upiu,
 						   desc_buff, buff_len,
@@ -7186,7 +7154,7 @@ int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *r
 	u16 ehs_len;
 
 	/* Protects use of hba->reserved_slot. */
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	mutex_lock(&hba->dev_cmd.lock);
 	down_read(&hba->clk_scaling_lock);
 
@@ -7420,7 +7388,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
 
 	WARN_ONCE(tag < 0, "Invalid tag %d\n", tag);
 
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	reg = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
 	/* If command is already aborted/completed, return FAILED. */
 	if (!(test_bit(tag, &hba->outstanding_reqs))) {
@@ -9412,7 +9380,7 @@ static int __ufshcd_wl_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 	 * If we can't transition into any of the low power modes
 	 * just gate the clocks.
 	 */
-	ufshcd_hold(hba, false);
+	ufshcd_hold(hba);
 	hba->clk_gating.is_suspended = true;
 
 	if (ufshcd_is_clkscaling_supported(hba))
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index db2e669985d5..40c537a3880b 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -1357,7 +1357,7 @@ void ufshcd_fixup_dev_quirks(struct ufs_hba *hba,
 int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
 			    u8 **buf, bool ascii);
 
-int ufshcd_hold(struct ufs_hba *hba, bool async);
+void ufshcd_hold(struct ufs_hba *hba);
 void ufshcd_release(struct ufs_hba *hba);
 
 void ufshcd_clkgate_delay_set(struct device *dev, unsigned long value);


* Re: [PATCH v4 4/5] scsi: ufs: Declare ufshcd_{hold,release}() once
  2023-05-29 20:26 ` [PATCH v4 4/5] scsi: ufs: Declare ufshcd_{hold,release}() once Bart Van Assche
@ 2023-05-30  9:48   ` Adrian Hunter
  0 siblings, 0 replies; 14+ messages in thread
From: Adrian Hunter @ 2023-05-30  9:48 UTC (permalink / raw)
  To: Bart Van Assche, Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, James E.J. Bottomley, Can Guo,
	Manivannan Sadhasivam, Asutosh Das, Arthur Simchaev,
	Jinyoung Choi, Bean Huo, Krzysztof Kozlowski, Avri Altman

On 29/05/23 23:26, Bart Van Assche wrote:
> ufshcd_hold() and ufshcd_release are declared twice: once in
> drivers/ufs/core/ufshcd-priv.h and a second time in include/ufs/ufshcd.h.
> Remove the declarations from ufshcd-priv.h.
> 
> Fixes: dd11376b9f1b ("scsi: ufs: Split the drivers/scsi/ufs directory")
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>

> ---
>  drivers/ufs/core/ufshcd-priv.h | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
> index d53b93c21a0c..8f58c2169398 100644
> --- a/drivers/ufs/core/ufshcd-priv.h
> +++ b/drivers/ufs/core/ufshcd-priv.h
> @@ -84,9 +84,6 @@ unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
>  int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
>  			    u8 **buf, bool ascii);
>  
> -int ufshcd_hold(struct ufs_hba *hba, bool async);
> -void ufshcd_release(struct ufs_hba *hba);
> -
>  int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
>  
>  int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,



* Re: [PATCH v4 3/5] scsi: ufs: Conditionally enable the BLK_MQ_F_BLOCKING flag
  2023-05-29 20:26 ` [PATCH v4 3/5] scsi: ufs: Conditionally enable the BLK_MQ_F_BLOCKING flag Bart Van Assche
@ 2023-05-30  9:52   ` Bean Huo
  0 siblings, 0 replies; 14+ messages in thread
From: Bean Huo @ 2023-05-30  9:52 UTC (permalink / raw)
  To: Bart Van Assche, Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, James E.J. Bottomley,
	Stanley Chu, Asutosh Das, Bean Huo, Ziqi Chen, Arthur Simchaev,
	Adrien Thierry

On Mon, 2023-05-29 at 13:26 -0700, Bart Van Assche wrote:
> Prepare for adding code in ufshcd_queuecommand() that may sleep.
> 
> Acked-by: Adrian Hunter <adrian.hunter@intel.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Bean Huo <beanhuo@micron.com>



* Re: [PATCH v4 5/5] scsi: ufs: Ungate the clock synchronously
  2023-05-29 20:26 ` [PATCH v4 5/5] scsi: ufs: Ungate the clock synchronously Bart Van Assche
@ 2023-05-30 11:10   ` Bean Huo
  2023-05-30 17:39   ` Bao D. Nguyen
  1 sibling, 0 replies; 14+ messages in thread
From: Bean Huo @ 2023-05-30 11:10 UTC (permalink / raw)
  To: Bart Van Assche, Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, James E.J. Bottomley,
	Bean Huo, Avri Altman, Jinyoung Choi, Peter Wang, Daniil Lunev,
	Stanley Chu, Asutosh Das, Ziqi Chen, Arthur Simchaev,
	Adrien Thierry, Can Guo, Manivannan Sadhasivam,
	Yoshihiro Shimoda, Eric Biggers, Keoseong Park

On Mon, 2023-05-29 at 13:26 -0700, Bart Van Assche wrote:
> Ungating the clock asynchronously causes ufshcd_queuecommand() to
> return
> SCSI_MLQUEUE_HOST_BUSY and hence causes commands to be requeued. 
> This is
> suboptimal. Allow ufshcd_queuecommand() to sleep such that clock
> ungating
> does not trigger command requeuing. Remove the
> ufshcd_scsi_block_requests()
> and ufshcd_scsi_unblock_requests() calls because these are no longer
> needed. The flush_work(&hba->clk_gating.ungate_work) call is
> sufficient to
> make the SCSI core wait for clock ungating to complete.
> 
> Acked-by: Adrian Hunter <adrian.hunter@intel.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Bean Huo <beanhuo@micron.com>



* Re: [PATCH v4 5/5] scsi: ufs: Ungate the clock synchronously
  2023-05-29 20:26 ` [PATCH v4 5/5] scsi: ufs: Ungate the clock synchronously Bart Van Assche
  2023-05-30 11:10   ` Bean Huo
@ 2023-05-30 17:39   ` Bao D. Nguyen
  1 sibling, 0 replies; 14+ messages in thread
From: Bao D. Nguyen @ 2023-05-30 17:39 UTC (permalink / raw)
  To: Bart Van Assche, Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, James E.J. Bottomley,
	Bean Huo, Avri Altman, Jinyoung Choi, Peter Wang, Daniil Lunev,
	Stanley Chu, Asutosh Das, Ziqi Chen, Arthur Simchaev,
	Adrien Thierry, Can Guo, Manivannan Sadhasivam,
	Yoshihiro Shimoda, Eric Biggers, Keoseong Park

On 5/29/2023 1:26 PM, Bart Van Assche wrote:
> Ungating the clock asynchronously causes ufshcd_queuecommand() to return
> SCSI_MLQUEUE_HOST_BUSY and hence causes commands to be requeued.  This is
> suboptimal. Allow ufshcd_queuecommand() to sleep such that clock ungating
> does not trigger command requeuing. Remove the ufshcd_scsi_block_requests()
> and ufshcd_scsi_unblock_requests() calls because these are no longer
> needed. The flush_work(&hba->clk_gating.ungate_work) call is sufficient to
> make the SCSI core wait for clock ungating to complete.
> 
> Acked-by: Adrian Hunter <adrian.hunter@intel.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Bao D. Nguyen <quic_nguyenb@quicinc.com>



* RE: [PATCH v4 4/5] scsi: ufs: Declare ufshcd_{hold,release}() once
       [not found] ` <CGME20230529202702epcas2p2ab96250338c67cef61f4bbd479c2c73b@epcms2p7>
@ 2023-05-31  7:59   ` Keoseong Park
  0 siblings, 0 replies; 14+ messages in thread
From: Keoseong Park @ 2023-05-31  7:59 UTC (permalink / raw)
  To: Bart Van Assche, Martin K . Petersen
  Cc: Jaegeuk Kim, linux-scsi, Adrian Hunter, James E.J. Bottomley,
	Can Guo, Manivannan Sadhasivam, Asutosh Das, Arthur Simchaev,
	Jinyoung CHOI, Bean Huo, Krzysztof Kozlowski, Avri Altman

>ufshcd_hold() and ufshcd_release are declared twice: once in
>drivers/ufs/core/ufshcd-priv.h and a second time in include/ufs/ufshcd.h.
>Remove the declarations from ufshcd-priv.h.
>
>Fixes: dd11376b9f1b ("scsi: ufs: Split the drivers/scsi/ufs directory")
>Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Keoseong Park <keosung.park@samsung.com>

Best Regards,
Keoseong

>---
> drivers/ufs/core/ufshcd-priv.h | 3 ---
> 1 file changed, 3 deletions(-)
>
>diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
>index d53b93c21a0c..8f58c2169398 100644
>--- a/drivers/ufs/core/ufshcd-priv.h
>+++ b/drivers/ufs/core/ufshcd-priv.h
>@@ -84,9 +84,6 @@ unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
> int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
> 			    u8 **buf, bool ascii);
> 
>-int ufshcd_hold(struct ufs_hba *hba, bool async);
>-void ufshcd_release(struct ufs_hba *hba);
>-
> int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
> 
> int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,


* Re: [PATCH v4 0/5] ufs: Do not requeue while ungating the clock
  2023-05-29 20:26 [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Bart Van Assche
                   ` (5 preceding siblings ...)
       [not found] ` <CGME20230529202702epcas2p2ab96250338c67cef61f4bbd479c2c73b@epcms2p7>
@ 2023-05-31 15:45 ` Martin K. Petersen
  2023-06-08  1:42 ` Martin K. Petersen
  7 siblings, 0 replies; 14+ messages in thread
From: Martin K. Petersen @ 2023-05-31 15:45 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Martin K . Petersen, Jaegeuk Kim, linux-scsi, Adrian Hunter


Bart,

> In the traces we recorded while testing zoned storage we noticed that
> UFS commands are requeued while the clock is being ungated. Command
> requeueing makes it harder than necessary to preserve the command
> order. Hence this patch series that modifies the SCSI core and also
> the UFS driver such that clock ungating does not trigger command
> requeueing.

Applied to 6.5/scsi-staging, thanks!

-- 
Martin K. Petersen	Oracle Linux Engineering


* Re: [PATCH v4 0/5] ufs: Do not requeue while ungating the clock
  2023-05-29 20:26 [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Bart Van Assche
                   ` (6 preceding siblings ...)
  2023-05-31 15:45 ` [PATCH v4 0/5] ufs: Do not requeue while ungating the clock Martin K. Petersen
@ 2023-06-08  1:42 ` Martin K. Petersen
  7 siblings, 0 replies; 14+ messages in thread
From: Martin K. Petersen @ 2023-06-08  1:42 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Martin K . Petersen, Jaegeuk Kim, linux-scsi, Adrian Hunter

On Mon, 29 May 2023 13:26:35 -0700, Bart Van Assche wrote:

> In the traces we recorded while testing zoned storage we noticed that UFS
> commands are requeued while the clock is being ungated. Command requeueing
> makes it harder than necessary to preserve the command order. Hence this
> patch series that modifies the SCSI core and also the UFS driver such that
> clock ungating does not trigger command requeueing.
> 
> Please consider this patch series for the next merge window.
> 
> [...]

Applied to 6.5/scsi-queue, thanks!

[1/5] scsi: core: Rework scsi_host_block()
      https://git.kernel.org/mkp/scsi/c/c854bcdf5e18
[2/5] scsi: core: Support setting BLK_MQ_F_BLOCKING
      https://git.kernel.org/mkp/scsi/c/b125bb99559e
[3/5] scsi: ufs: Conditionally enable the BLK_MQ_F_BLOCKING flag
      https://git.kernel.org/mkp/scsi/c/6c03c8e9b729
[4/5] scsi: ufs: Declare ufshcd_{hold,release}() once
      https://git.kernel.org/mkp/scsi/c/4b68b7f9c46d
[5/5] scsi: ufs: Ungate the clock synchronously
      https://git.kernel.org/mkp/scsi/c/078f4f4b34d6

-- 
Martin K. Petersen	Oracle Linux Engineering


