* [PATCH v6 00/12] blk-mq: Implement runtime power management
@ 2018-08-09 19:41 Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 01/12] block, scsi: Introduce request flag RQF_DV Bart Van Assche
                   ` (11 more replies)
  0 siblings, 12 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche

Hello Jens,

This patch series not only implements runtime power management for blk-mq but
also fixes a starvation issue in the power management code for the legacy
block layer. Please consider this patch series for the upstream kernel.

Thanks,

Bart.

Changes compared to v5:
- Introduced a new flag RQF_DV that replaces RQF_PREEMPT for SCSI domain
  validation.
- Introduced a new request queue state QUEUE_FLAG_DV_ONLY for SCSI domain
  validation.
- Instead of using SDEV_QUIESCE for both runtime suspend and SCSI domain
  validation, use that state for domain validation only and introduce a new
  state for runtime suspend, namely SDEV_SUSPENDED.
- Reallow SPI domain validation during system suspend.
- Moved the runtime resume call from the request allocation code into
  blk_queue_enter().
- Instead of relying on q_usage_counter, iterate over the tag set to determine
  whether or not any requests are in flight.

Changes compared to v4:
- Dropped the patches "Give RQF_PREEMPT back its original meaning" and
  "Serialize queue freezing and blk_pre_runtime_suspend()".
- Replaced "percpu_ref_read()" with "percpu_ref_is_in_use()".
- Inserted pm_request_resume() calls in the block layer request allocation
  code such that the context that submits a request no longer has to call
  pm_runtime_get().

Changes compared to v3:
- Avoided adverse interactions between system-wide suspend/resume and runtime
  power management by changing the PREEMPT_ONLY flag into a counter.
- Gave RQF_PREEMPT back its original meaning, namely that it is only set for
  ide_preempt requests.
- Removed the flag BLK_MQ_REQ_PREEMPT.
- Removed the pm_request_resume() call.

Changes compared to v2:
- Fixed the build for CONFIG_BLOCK=n.
- Added a patch that introduces percpu_ref_read() in the percpu-counter code.
- Added a patch that makes it easier to detect missing pm_runtime_get*() calls.
- Addressed Jianchao's feedback including the comment about runtime overhead
  of switching a per-cpu counter to atomic mode.

Changes compared to v1:
- Moved the runtime power management code into a separate file.
- Addressed Ming's feedback.

Bart Van Assche (12):
  block, scsi: Introduce request flag RQF_DV
  scsi: Alter handling of RQF_DV requests
  scsi: Only set RQF_DV for requests used for domain validation
  scsi: Introduce the SDEV_SUSPENDED device status
  block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce
    PM_ONLY
  scsi: Reallow SPI domain validation during system suspend
  block: Move power management code into a new source file
  block, scsi: Introduce blk_pm_runtime_exit()
  block: Split blk_pm_add_request() and blk_pm_put_request()
  block: Change the runtime power management approach (1/2)
  block: Change the runtime power management approach (2/2)
  blk-mq: Enable support for runtime power management

 block/Kconfig                     |   5 +
 block/Makefile                    |   1 +
 block/blk-core.c                  | 310 ++++++------------------------
 block/blk-mq-debugfs.c            |   4 +-
 block/blk-mq.c                    |   4 +-
 block/blk-pm.c                    | 262 +++++++++++++++++++++++++
 block/blk-pm.h                    |  69 +++++++
 block/elevator.c                  |  22 +--
 drivers/ide/ide-pm.c              |   3 +-
 drivers/scsi/scsi_lib.c           | 266 ++++++++++++++++++-------
 drivers/scsi/scsi_pm.c            |   7 +-
 drivers/scsi/scsi_sysfs.c         |   2 +
 drivers/scsi/scsi_transport_spi.c |  26 +--
 drivers/scsi/sd.c                 |  10 +-
 drivers/scsi/sr.c                 |   4 +-
 include/linux/blk-mq.h            |   6 +-
 include/linux/blk-pm.h            |  26 +++
 include/linux/blkdev.h            |  41 ++--
 include/scsi/scsi_device.h        |  15 +-
 include/scsi/scsi_transport_spi.h |   1 -
 20 files changed, 673 insertions(+), 411 deletions(-)
 create mode 100644 block/blk-pm.c
 create mode 100644 block/blk-pm.h
 create mode 100644 include/linux/blk-pm.h

-- 
2.18.0

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6 01/12] block, scsi: Introduce request flag RQF_DV
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 02/12] scsi: Alter handling of RQF_DV requests Bart Van Assche
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, David S . Miller, Ming Lei, Jianchao Wang,
	Hannes Reinecke, Johannes Thumshirn, Alan Stern

Instead of marking all power management, SCSI domain validation and
IDE preempt requests with RQF_PREEMPT, only set RQF_PREEMPT for IDE
preempt requests. Use RQF_DV to mark requests submitted by
scsi_execute() and RQF_PM to mark power management requests. Most
but not all power management requests already have the RQF_PM flag
set.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/blk-core.c        | 11 +++++------
 block/blk-mq-debugfs.c  |  1 +
 block/blk-mq.c          |  2 --
 drivers/ide/ide-pm.c    |  3 ++-
 drivers/scsi/scsi_lib.c | 11 ++++++++---
 include/linux/blk-mq.h  |  6 ++++--
 include/linux/blkdev.h  |  5 +++--
 7 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 12550340418d..c4ff58491758 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -913,11 +913,11 @@ EXPORT_SYMBOL(blk_alloc_queue);
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
- * @flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PREEMPT
+ * @flags: BLK_MQ_REQ_NOWAIT, BLK_MQ_REQ_PM and/or BLK_MQ_REQ_DV
  */
 int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 {
-	const bool preempt = flags & BLK_MQ_REQ_PREEMPT;
+	const bool preempt = flags & (BLK_MQ_REQ_PM | BLK_MQ_REQ_DV);
 
 	while (true) {
 		bool success = false;
@@ -1436,8 +1436,6 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
 	blk_rq_set_rl(rq, rl);
 	rq->cmd_flags = op;
 	rq->rq_flags = rq_flags;
-	if (flags & BLK_MQ_REQ_PREEMPT)
-		rq->rq_flags |= RQF_PREEMPT;
 
 	/* init elvpriv */
 	if (rq_flags & RQF_ELVPRIV) {
@@ -1575,7 +1573,7 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 	goto retry;
 }
 
-/* flags: BLK_MQ_REQ_PREEMPT and/or BLK_MQ_REQ_NOWAIT. */
+/* flags: BLK_MQ_REQ_NOWAIT, BLK_MQ_REQ_PM and/or BLK_MQ_REQ_DV. */
 static struct request *blk_old_get_request(struct request_queue *q,
 				unsigned int op, blk_mq_req_flags_t flags)
 {
@@ -1618,7 +1616,8 @@ struct request *blk_get_request(struct request_queue *q, unsigned int op,
 	struct request *req;
 
 	WARN_ON_ONCE(op & REQ_NOWAIT);
-	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT));
+	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PM |
+			       BLK_MQ_REQ_DV));
 
 	if (q->mq_ops) {
 		req = blk_mq_alloc_request(q, op, flags);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index cb1e6cf7ac48..b4e722311ae1 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -330,6 +330,7 @@ static const char *const rqf_name[] = {
 	RQF_NAME(SPECIAL_PAYLOAD),
 	RQF_NAME(ZONE_WRITE_LOCKED),
 	RQF_NAME(MQ_POLL_SLEPT),
+	RQF_NAME(DV),
 };
 #undef RQF_NAME
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 72a0033ccee9..2a0eb058ba5a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -300,8 +300,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->rq_flags = rq_flags;
 	rq->cpu = -1;
 	rq->cmd_flags = op;
-	if (data->flags & BLK_MQ_REQ_PREEMPT)
-		rq->rq_flags |= RQF_PREEMPT;
 	if (blk_queue_io_stat(data->q))
 		rq->rq_flags |= RQF_IO_STAT;
 	INIT_LIST_HEAD(&rq->queuelist);
diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 59217aa1d1fb..8bf378164aee 100644
--- a/drivers/ide/ide-pm.c
+++ b/drivers/ide/ide-pm.c
@@ -90,8 +90,9 @@ int generic_ide_resume(struct device *dev)
 	}
 
 	memset(&rqpm, 0, sizeof(rqpm));
-	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, BLK_MQ_REQ_PREEMPT);
+	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, BLK_MQ_REQ_PM);
 	ide_req(rq)->type = ATA_PRIV_PM_RESUME;
+	rq->rq_flags |= RQF_PM;
 	rq->special = &rqpm;
 	rqpm.pm_step = IDE_PM_START_RESUME;
 	rqpm.pm_state = PM_EVENT_ON;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 9cb9a166fa0c..a65a03e2bcc4 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -263,11 +263,16 @@ int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 {
 	struct request *req;
 	struct scsi_request *rq;
+	blk_mq_req_flags_t blk_mq_req_flags = 0;
 	int ret = DRIVER_ERROR << 24;
 
+	if (rq_flags & RQF_PM)
+		blk_mq_req_flags |= BLK_MQ_REQ_PM;
+	rq_flags |= RQF_DV;
+	blk_mq_req_flags |= BLK_MQ_REQ_DV;
 	req = blk_get_request(sdev->request_queue,
 			data_direction == DMA_TO_DEVICE ?
-			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, BLK_MQ_REQ_PREEMPT);
+			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, blk_mq_req_flags);
 	if (IS_ERR(req))
 		return ret;
 	rq = scsi_req(req);
@@ -1356,7 +1361,7 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req)
 			/*
 			 * If the devices is blocked we defer normal commands.
 			 */
-			if (req && !(req->rq_flags & RQF_PREEMPT))
+			if (req && !(req->rq_flags & (RQF_PM | RQF_DV)))
 				ret = BLKPREP_DEFER;
 			break;
 		default:
@@ -1365,7 +1370,7 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req)
 			 * special commands.  In particular any user initiated
 			 * command is not allowed.
 			 */
-			if (req && !(req->rq_flags & RQF_PREEMPT))
+			if (req && !(req->rq_flags & (RQF_PM | RQF_DV)))
 				ret = BLKPREP_KILL;
 			break;
 		}
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 1da59c16f637..4ac964eeabc8 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -221,8 +221,10 @@ enum {
 	BLK_MQ_REQ_RESERVED	= (__force blk_mq_req_flags_t)(1 << 1),
 	/* allocate internal/sched tag */
 	BLK_MQ_REQ_INTERNAL	= (__force blk_mq_req_flags_t)(1 << 2),
-	/* set RQF_PREEMPT */
-	BLK_MQ_REQ_PREEMPT	= (__force blk_mq_req_flags_t)(1 << 3),
+	/* RQF_PM will be set by the caller */
+	BLK_MQ_REQ_PM		= (__force blk_mq_req_flags_t)(1 << 3),
+	/* RQF_DV will be set by the caller */
+	BLK_MQ_REQ_DV		= (__force blk_mq_req_flags_t)(1 << 4),
 };
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d6869e0e2b64..938725ef492b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -97,8 +97,7 @@ typedef __u32 __bitwise req_flags_t;
 #define RQF_MQ_INFLIGHT		((__force req_flags_t)(1 << 6))
 /* don't call prep for this one */
 #define RQF_DONTPREP		((__force req_flags_t)(1 << 7))
-/* set for "ide_preempt" requests and also for requests for which the SCSI
-   "quiesce" state must be ignored. */
+/* set for "ide_preempt" requests */
 #define RQF_PREEMPT		((__force req_flags_t)(1 << 8))
 /* contains copies of user pages */
 #define RQF_COPY_USER		((__force req_flags_t)(1 << 9))
@@ -127,6 +126,8 @@ typedef __u32 __bitwise req_flags_t;
 #define RQF_MQ_POLL_SLEPT	((__force req_flags_t)(1 << 20))
 /* ->timeout has been called, don't expire again */
 #define RQF_TIMED_OUT		((__force req_flags_t)(1 << 21))
+/* set for SCSI domain validation requests */
+#define RQF_DV			((__force req_flags_t)(1 << 22))
 
 /* flags that prevent us from merging requests: */
 #define RQF_NOMERGE_FLAGS \
-- 
2.18.0


* [PATCH v6 02/12] scsi: Alter handling of RQF_DV requests
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 01/12] block, scsi: Introduce request flag RQF_DV Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-10  1:20   ` Ming Lei
  2018-08-09 19:41 ` [PATCH v6 03/12] scsi: Only set RQF_DV for requests used for domain validation Bart Van Assche
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Jianchao Wang, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Process all requests in state SDEV_CREATED instead of only RQF_DV
requests. This does not change the behavior of the SCSI core because
the SCSI device state is changed to another state before SCSI
devices become visible in sysfs and before any device nodes are
created in /dev. Do not process RQF_DV requests in state SDEV_CANCEL
because only power management requests should be processed in this
state. Handle all SCSI device states explicitly in
scsi_prep_state_check() instead of using a default case in its
switch statement. This allows the compiler to verify that all states
have been handled.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 drivers/scsi/scsi_lib.c | 78 +++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 42 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index a65a03e2bcc4..8685704f6c8b 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1331,49 +1331,43 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req)
 	 * If the device is not in running state we will reject some
 	 * or all commands.
 	 */
-	if (unlikely(sdev->sdev_state != SDEV_RUNNING)) {
-		switch (sdev->sdev_state) {
-		case SDEV_OFFLINE:
-		case SDEV_TRANSPORT_OFFLINE:
-			/*
-			 * If the device is offline we refuse to process any
-			 * commands.  The device must be brought online
-			 * before trying any recovery commands.
-			 */
-			sdev_printk(KERN_ERR, sdev,
-				    "rejecting I/O to offline device\n");
-			ret = BLKPREP_KILL;
-			break;
-		case SDEV_DEL:
-			/*
-			 * If the device is fully deleted, we refuse to
-			 * process any commands as well.
-			 */
-			sdev_printk(KERN_ERR, sdev,
-				    "rejecting I/O to dead device\n");
-			ret = BLKPREP_KILL;
-			break;
-		case SDEV_BLOCK:
-		case SDEV_CREATED_BLOCK:
+	switch (sdev->sdev_state) {
+	case SDEV_RUNNING:
+	case SDEV_CREATED:
+		break;
+	case SDEV_OFFLINE:
+	case SDEV_TRANSPORT_OFFLINE:
+		/*
+		 * If the device is offline we refuse to process any commands.
+		 * The device must be brought online before trying any
+		 * recovery commands.
+		 */
+		sdev_printk(KERN_ERR, sdev,
+			    "rejecting I/O to offline device\n");
+		ret = BLKPREP_KILL;
+		break;
+	case SDEV_DEL:
+		/*
+		 * If the device is fully deleted, we refuse to process any
+		 * commands as well.
+		 */
+		sdev_printk(KERN_ERR, sdev, "rejecting I/O to dead device\n");
+		ret = BLKPREP_KILL;
+		break;
+	case SDEV_BLOCK:
+	case SDEV_CREATED_BLOCK:
+		ret = BLKPREP_DEFER;
+		break;
+	case SDEV_QUIESCE:
+		/* Only allow RQF_PM and RQF_DV requests. */
+		if (!(req->rq_flags & (RQF_PM | RQF_DV)))
 			ret = BLKPREP_DEFER;
-			break;
-		case SDEV_QUIESCE:
-			/*
-			 * If the devices is blocked we defer normal commands.
-			 */
-			if (req && !(req->rq_flags & (RQF_PM | RQF_DV)))
-				ret = BLKPREP_DEFER;
-			break;
-		default:
-			/*
-			 * For any other not fully online state we only allow
-			 * special commands.  In particular any user initiated
-			 * command is not allowed.
-			 */
-			if (req && !(req->rq_flags & (RQF_PM | RQF_DV)))
-				ret = BLKPREP_KILL;
-			break;
-		}
+		break;
+	case SDEV_CANCEL:
+		/* Only allow RQF_PM requests. */
+		if (!(req->rq_flags & RQF_PM))
+			ret = BLKPREP_KILL;
+		break;
 	}
 	return ret;
 }
-- 
2.18.0


* [PATCH v6 03/12] scsi: Only set RQF_DV for requests used for domain validation
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 01/12] block, scsi: Introduce request flag RQF_DV Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 02/12] scsi: Alter handling of RQF_DV requests Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 04/12] scsi: Introduce the SDEV_SUSPENDED device status Bart Van Assche
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Jianchao Wang, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Instead of setting RQF_DV for all requests submitted by scsi_execute(),
only set that flag for requests that are used for domain validation.
Move the SCSI Parallel Interface (SPI) domain validation status from
the transport data to struct scsi_target such that this status
information can be accessed easily from inside scsi_execute(). This
prevents, for example, event checking from occurring during domain
validation.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 drivers/scsi/scsi_lib.c           | 6 ++++--
 drivers/scsi/scsi_transport_spi.c | 9 +++------
 include/scsi/scsi_device.h        | 1 +
 include/scsi/scsi_transport_spi.h | 1 -
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 8685704f6c8b..4d7411a7985f 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -268,8 +268,10 @@ int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 
 	if (rq_flags & RQF_PM)
 		blk_mq_req_flags |= BLK_MQ_REQ_PM;
-	rq_flags |= RQF_DV;
-	blk_mq_req_flags |= BLK_MQ_REQ_DV;
+	if (scsi_target(sdev)->spi_dv_context == current) {
+		rq_flags |= RQF_DV;
+		blk_mq_req_flags |= BLK_MQ_REQ_DV;
+	}
 	req = blk_get_request(sdev->request_queue,
 			data_direction == DMA_TO_DEVICE ?
 			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, blk_mq_req_flags);
diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
index 2ca150b16764..bf6b18768e79 100644
--- a/drivers/scsi/scsi_transport_spi.c
+++ b/drivers/scsi/scsi_transport_spi.c
@@ -66,7 +66,6 @@ static struct {
 };
 
 /* Private data accessors (keep these out of the header file) */
-#define spi_dv_in_progress(x) (((struct spi_transport_attrs *)&(x)->starget_data)->dv_in_progress)
 #define spi_dv_mutex(x) (((struct spi_transport_attrs *)&(x)->starget_data)->dv_mutex)
 
 struct spi_internal {
@@ -268,7 +267,6 @@ static int spi_setup_transport_attrs(struct transport_container *tc,
 	spi_pcomp_en(starget) = 0;
 	spi_hold_mcs(starget) = 0;
 	spi_dv_pending(starget) = 0;
-	spi_dv_in_progress(starget) = 0;
 	spi_initial_dv(starget) = 0;
 	mutex_init(&spi_dv_mutex(starget));
 
@@ -1018,14 +1016,12 @@ spi_dv_device(struct scsi_device *sdev)
 	 */
 	lock_system_sleep();
 
-	if (unlikely(spi_dv_in_progress(starget)))
+	if (unlikely(starget->spi_dv_context))
 		goto unlock;
 
 	if (unlikely(scsi_device_get(sdev)))
 		goto unlock;
 
-	spi_dv_in_progress(starget) = 1;
-
 	buffer = kzalloc(len, GFP_KERNEL);
 
 	if (unlikely(!buffer))
@@ -1043,7 +1039,9 @@ spi_dv_device(struct scsi_device *sdev)
 
 	starget_printk(KERN_INFO, starget, "Beginning Domain Validation\n");
 
+	starget->spi_dv_context = current;
 	spi_dv_device_internal(sdev, buffer);
+	starget->spi_dv_context = NULL;
 
 	starget_printk(KERN_INFO, starget, "Ending Domain Validation\n");
 
@@ -1057,7 +1055,6 @@ spi_dv_device(struct scsi_device *sdev)
  out_free:
 	kfree(buffer);
  out_put:
-	spi_dv_in_progress(starget) = 0;
 	scsi_device_put(sdev);
 unlock:
 	unlock_system_sleep();
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 202f4d6a4342..440834f4252e 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -310,6 +310,7 @@ struct scsi_target {
 
 	char			scsi_level;
 	enum scsi_target_state	state;
+	struct task_struct	*spi_dv_context;
 	void 			*hostdata; /* available to low-level driver */
 	unsigned long		starget_data[0]; /* for the transport */
 	/* starget_data must be the last element!!!! */
diff --git a/include/scsi/scsi_transport_spi.h b/include/scsi/scsi_transport_spi.h
index a4fa52b4d5c5..36934ac0a5ca 100644
--- a/include/scsi/scsi_transport_spi.h
+++ b/include/scsi/scsi_transport_spi.h
@@ -56,7 +56,6 @@ struct spi_transport_attrs {
 	unsigned int support_qas; /* supports quick arbitration and selection */
 	/* Private Fields */
 	unsigned int dv_pending:1; /* Internal flag: DV Requested */
-	unsigned int dv_in_progress:1;	/* Internal: DV started */
 	struct mutex dv_mutex; /* semaphore to serialise dv */
 };
 
-- 
2.18.0


* [PATCH v6 04/12] scsi: Introduce the SDEV_SUSPENDED device status
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (2 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 03/12] scsi: Only set RQF_DV for requests used for domain validation Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 05/12] block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce PM_ONLY Bart Van Assche
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Jianchao Wang, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Instead of using the SDEV_QUIESCE state for both SCSI domain validation
and runtime suspend, use the SDEV_QUIESCE state only for SCSI domain
validation. Keep using scsi_device_quiesce() and scsi_device_unquiesce()
for SCSI domain validation. Add the new functions scsi_device_suspend()
and scsi_device_unsuspend() for power management. Rename
scsi_device_resume() to scsi_device_unquiesce() to avoid confusion.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 drivers/scsi/scsi_lib.c           | 138 ++++++++++++++++++++++++------
 drivers/scsi/scsi_pm.c            |   6 +-
 drivers/scsi/scsi_sysfs.c         |   1 +
 drivers/scsi/scsi_transport_spi.c |   2 +-
 include/scsi/scsi_device.h        |  13 +--
 5 files changed, 125 insertions(+), 35 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 4d7411a7985f..eb914d8e17fd 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1365,6 +1365,11 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req)
 		if (!(req->rq_flags & (RQF_PM | RQF_DV)))
 			ret = BLKPREP_DEFER;
 		break;
+	case SDEV_SUSPENDED:
+		/* Process RQF_PM requests only. */
+		if (!(req->rq_flags & RQF_PM))
+			ret = BLKPREP_DEFER;
+		break;
 	case SDEV_CANCEL:
 		/* Only allow RQF_PM requests. */
 		if (!(req->rq_flags & RQF_PM))
@@ -2669,6 +2674,7 @@ scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
 		case SDEV_OFFLINE:
 		case SDEV_TRANSPORT_OFFLINE:
 		case SDEV_QUIESCE:
+		case SDEV_SUSPENDED:
 		case SDEV_BLOCK:
 			break;
 		default:
@@ -2677,6 +2683,7 @@ scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
 		break;
 
 	case SDEV_QUIESCE:
+	case SDEV_SUSPENDED:
 		switch (oldstate) {
 		case SDEV_RUNNING:
 		case SDEV_OFFLINE:
@@ -2970,19 +2977,103 @@ static void scsi_wait_for_queuecommand(struct scsi_device *sdev)
 }
 
 /**
- *	scsi_device_quiesce - Block user issued commands.
- *	@sdev:	scsi device to quiesce.
+ * scsi_device_suspend - only process RQF_PM requests
+ * @sdev: scsi device to suspend.
+ *
+ * This works by trying to transition to the SDEV_SUSPENDED state (which must be
+ * a legal transition).  When the device is in this state, only RQF_PM
+ * requests will be accepted, all others will be deferred.
+ *
+ * Must be called with user context, may sleep.
+ *
+ * Returns zero if successful or an error if not.
+ */
+int
+scsi_device_suspend(struct scsi_device *sdev)
+{
+	struct request_queue *q = sdev->request_queue;
+	int err;
+
+	if (sdev->sdev_state == SDEV_SUSPENDED)
+		return 0;
+
+	blk_set_preempt_only(q);
+
+	blk_mq_freeze_queue(q);
+	/*
+	 * Ensure that the effect of blk_set_preempt_only() will be visible
+	 * for percpu_ref_tryget() callers that occur after the queue
+	 * unfreeze even if the queue was already frozen before this function
+	 * was called. See also https://lwn.net/Articles/573497/.
+	 */
+	synchronize_rcu();
+	blk_mq_unfreeze_queue(q);
+
+	mutex_lock(&sdev->state_mutex);
+	err = scsi_device_set_state(sdev, SDEV_SUSPENDED);
+	if (err)
+		blk_clear_preempt_only(q);
+	mutex_unlock(&sdev->state_mutex);
+
+	return err;
+}
+EXPORT_SYMBOL(scsi_device_suspend);
+
+/**
+ * scsi_device_unsuspend - unsuspend processing non-RQF_PM requests
+ * @sdev: scsi device to unsuspend.
  *
- *	This works by trying to transition to the SDEV_QUIESCE state
- *	(which must be a legal transition).  When the device is in this
- *	state, only special requests will be accepted, all others will
- *	be deferred.  Since special requests may also be requeued requests,
- *	a successful return doesn't guarantee the device will be 
- *	totally quiescent.
+ * Moves the device from suspended back to running and restarts the queues.
  *
- *	Must be called with user context, may sleep.
+ * Must be called with user context, may sleep.
+ */
+void scsi_device_unsuspend(struct scsi_device *sdev)
+{
+	mutex_lock(&sdev->state_mutex);
+	blk_clear_preempt_only(sdev->request_queue);
+	if (sdev->sdev_state == SDEV_SUSPENDED)
+		scsi_device_set_state(sdev, SDEV_RUNNING);
+	mutex_unlock(&sdev->state_mutex);
+}
+EXPORT_SYMBOL(scsi_device_unsuspend);
+
+static void
+device_suspend_fn(struct scsi_device *sdev, void *data)
+{
+	scsi_device_suspend(sdev);
+}
+
+void
+scsi_target_suspend(struct scsi_target *starget)
+{
+	starget_for_each_device(starget, NULL, device_suspend_fn);
+}
+EXPORT_SYMBOL(scsi_target_suspend);
+
+static void
+device_unsuspend_fn(struct scsi_device *sdev, void *data)
+{
+	scsi_device_unsuspend(sdev);
+}
+
+void
+scsi_target_unsuspend(struct scsi_target *starget)
+{
+	starget_for_each_device(starget, NULL, device_unsuspend_fn);
+}
+EXPORT_SYMBOL(scsi_target_unsuspend);
+
+/**
+ * scsi_device_quiesce - only process RQF_DV requests
+ * @sdev: scsi device to quiesce.
  *
- *	Returns zero if unsuccessful or an error if not.
+ * This works by trying to transition to the SDEV_QUIESCE state (which must be
+ * a legal transition).  When the device is in this state, only RQF_DV
+ * requests will be accepted, all others will be deferred.
+ *
+ * Must be called with user context, may sleep.
+ *
+ * Returns zero if successful or an error if not.
  */
 int
 scsi_device_quiesce(struct scsi_device *sdev)
@@ -3022,20 +3113,15 @@ scsi_device_quiesce(struct scsi_device *sdev)
 EXPORT_SYMBOL(scsi_device_quiesce);
 
 /**
- *	scsi_device_resume - Restart user issued commands to a quiesced device.
- *	@sdev:	scsi device to resume.
+ * scsi_device_unquiesce - unquiesce processing non-RQF_DV requests
+ * @sdev: scsi device to unquiesce.
  *
- *	Moves the device from quiesced back to running and restarts the
- *	queues.
+ * Moves the device from quiesced back to running and restarts the queues.
  *
- *	Must be called with user context, may sleep.
+ * Must be called with user context, may sleep.
  */
-void scsi_device_resume(struct scsi_device *sdev)
+void scsi_device_unquiesce(struct scsi_device *sdev)
 {
-	/* check if the device state was mutated prior to resume, and if
-	 * so assume the state is being managed elsewhere (for example
-	 * device deleted during suspend)
-	 */
 	mutex_lock(&sdev->state_mutex);
 	WARN_ON_ONCE(!sdev->quiesced_by);
 	sdev->quiesced_by = NULL;
@@ -3044,7 +3130,7 @@ void scsi_device_resume(struct scsi_device *sdev)
 		scsi_device_set_state(sdev, SDEV_RUNNING);
 	mutex_unlock(&sdev->state_mutex);
 }
-EXPORT_SYMBOL(scsi_device_resume);
+EXPORT_SYMBOL(scsi_device_unquiesce);
 
 static void
 device_quiesce_fn(struct scsi_device *sdev, void *data)
@@ -3060,17 +3146,17 @@ scsi_target_quiesce(struct scsi_target *starget)
 EXPORT_SYMBOL(scsi_target_quiesce);
 
 static void
-device_resume_fn(struct scsi_device *sdev, void *data)
+device_unquiesce_fn(struct scsi_device *sdev, void *data)
 {
-	scsi_device_resume(sdev);
+	scsi_device_unquiesce(sdev);
 }
 
 void
-scsi_target_resume(struct scsi_target *starget)
+scsi_target_unquiesce(struct scsi_target *starget)
 {
-	starget_for_each_device(starget, NULL, device_resume_fn);
+	starget_for_each_device(starget, NULL, device_unquiesce_fn);
 }
-EXPORT_SYMBOL(scsi_target_resume);
+EXPORT_SYMBOL(scsi_target_unquiesce);
 
 /**
  * scsi_internal_device_block_nowait - try to transition to the SDEV_BLOCK state
diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
index b44c1bb687a2..6300e168701d 100644
--- a/drivers/scsi/scsi_pm.c
+++ b/drivers/scsi/scsi_pm.c
@@ -57,11 +57,11 @@ static int scsi_dev_type_suspend(struct device *dev,
 	/* flush pending in-flight resume operations, suspend is synchronous */
 	async_synchronize_full_domain(&scsi_sd_pm_domain);
 
-	err = scsi_device_quiesce(to_scsi_device(dev));
+	err = scsi_device_suspend(to_scsi_device(dev));
 	if (err == 0) {
 		err = cb(dev, pm);
 		if (err)
-			scsi_device_resume(to_scsi_device(dev));
+			scsi_device_unsuspend(to_scsi_device(dev));
 	}
 	dev_dbg(dev, "scsi suspend: %d\n", err);
 	return err;
@@ -74,7 +74,7 @@ static int scsi_dev_type_resume(struct device *dev,
 	int err = 0;
 
 	err = cb(dev, pm);
-	scsi_device_resume(to_scsi_device(dev));
+	scsi_device_unsuspend(to_scsi_device(dev));
 	dev_dbg(dev, "scsi resume: %d\n", err);
 
 	if (err == 0) {
diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index 7943b762c12d..496c5eff4859 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -36,6 +36,7 @@ static const struct {
 	{ SDEV_CANCEL, "cancel" },
 	{ SDEV_DEL, "deleted" },
 	{ SDEV_QUIESCE, "quiesce" },
+	{ SDEV_SUSPENDED, "suspended" },
 	{ SDEV_OFFLINE,	"offline" },
 	{ SDEV_TRANSPORT_OFFLINE, "transport-offline" },
 	{ SDEV_BLOCK,	"blocked" },
diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
index bf6b18768e79..16bec4884249 100644
--- a/drivers/scsi/scsi_transport_spi.c
+++ b/drivers/scsi/scsi_transport_spi.c
@@ -1048,7 +1048,7 @@ spi_dv_device(struct scsi_device *sdev)
 	mutex_unlock(&spi_dv_mutex(starget));
 	spi_dv_pending(starget) = 0;
 
-	scsi_target_resume(starget);
+	scsi_target_unquiesce(starget);
 
 	spi_initial_dv(starget) = 1;
 
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 440834f4252e..a2e3edf8be12 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -42,9 +42,8 @@ enum scsi_device_state {
 				 * Only error handler commands allowed */
 	SDEV_DEL,		/* device deleted 
 				 * no commands allowed */
-	SDEV_QUIESCE,		/* Device quiescent.  No block commands
-				 * will be accepted, only specials (which
-				 * originate in the mid-layer) */
+	SDEV_QUIESCE,		/* Only RQF_DV requests are accepted. */
+	SDEV_SUSPENDED,		/* Only RQF_PM requests are accepted. */
 	SDEV_OFFLINE,		/* Device offlined (by error handling or
 				 * user request */
 	SDEV_TRANSPORT_OFFLINE,	/* Offlined by transport class error handler */
@@ -415,9 +414,13 @@ extern void sdev_evt_send(struct scsi_device *sdev, struct scsi_event *evt);
 extern void sdev_evt_send_simple(struct scsi_device *sdev,
 			  enum scsi_device_event evt_type, gfp_t gfpflags);
 extern int scsi_device_quiesce(struct scsi_device *sdev);
-extern void scsi_device_resume(struct scsi_device *sdev);
+extern void scsi_device_unquiesce(struct scsi_device *sdev);
 extern void scsi_target_quiesce(struct scsi_target *);
-extern void scsi_target_resume(struct scsi_target *);
+extern void scsi_target_unquiesce(struct scsi_target *);
+extern int scsi_device_suspend(struct scsi_device *sdev);
+extern void scsi_device_unsuspend(struct scsi_device *sdev);
+extern void scsi_target_suspend(struct scsi_target *);
+extern void scsi_target_unsuspend(struct scsi_target *);
 extern void scsi_scan_target(struct device *parent, unsigned int channel,
 			     unsigned int id, u64 lun,
 			     enum scsi_scan_mode rescan);
-- 
2.18.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 05/12] block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce PM_ONLY
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (3 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 04/12] scsi: Introduce the SDEV_SUSPENDED device status Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-10  1:39   ` jianchao.wang
  2018-08-09 19:41 ` [PATCH v6 06/12] scsi: Reallow SPI domain validation during system suspend Bart Van Assche
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Jianchao Wang, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Instead of having a single queue state (PREEMPT_ONLY) in which both
power management and SCSI domain validation requests are processed,
rename the PREEMPT_ONLY flag to DV_ONLY and introduce a new queue
flag, QUEUE_FLAG_PM_ONLY. Provide the new functions blk_set_pm_only()
and blk_clear_pm_only() for power management, and blk_set_dv_only()
and blk_clear_dv_only() for SCSI domain validation.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/blk-core.c        | 65 ++++++++++++++++++++++++++++++-----------
 block/blk-mq-debugfs.c  |  3 +-
 drivers/scsi/scsi_lib.c | 16 +++++-----
 include/linux/blkdev.h  | 13 +++++----
 4 files changed, 66 insertions(+), 31 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c4ff58491758..401f4927a8db 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -421,24 +421,44 @@ void blk_sync_queue(struct request_queue *q)
 EXPORT_SYMBOL(blk_sync_queue);
 
 /**
- * blk_set_preempt_only - set QUEUE_FLAG_PREEMPT_ONLY
+ * blk_set_dv_only - set QUEUE_FLAG_DV_ONLY
  * @q: request queue pointer
  *
- * Returns the previous value of the PREEMPT_ONLY flag - 0 if the flag was not
+ * Returns the previous value of the DV_ONLY flag - 0 if the flag was not
  * set and 1 if the flag was already set.
  */
-int blk_set_preempt_only(struct request_queue *q)
+int blk_set_dv_only(struct request_queue *q)
 {
-	return blk_queue_flag_test_and_set(QUEUE_FLAG_PREEMPT_ONLY, q);
+	return blk_queue_flag_test_and_set(QUEUE_FLAG_DV_ONLY, q);
 }
-EXPORT_SYMBOL_GPL(blk_set_preempt_only);
+EXPORT_SYMBOL_GPL(blk_set_dv_only);
 
-void blk_clear_preempt_only(struct request_queue *q)
+void blk_clear_dv_only(struct request_queue *q)
 {
-	blk_queue_flag_clear(QUEUE_FLAG_PREEMPT_ONLY, q);
+	blk_queue_flag_clear(QUEUE_FLAG_DV_ONLY, q);
 	wake_up_all(&q->mq_freeze_wq);
 }
-EXPORT_SYMBOL_GPL(blk_clear_preempt_only);
+EXPORT_SYMBOL_GPL(blk_clear_dv_only);
+
+/**
+ * blk_set_pm_only - set QUEUE_FLAG_PM_ONLY
+ * @q: request queue pointer
+ *
+ * Returns the previous value of the PM_ONLY flag - 0 if the flag was not
+ * set and 1 if the flag was already set.
+ */
+int blk_set_pm_only(struct request_queue *q)
+{
+	return blk_queue_flag_test_and_set(QUEUE_FLAG_PM_ONLY, q);
+}
+EXPORT_SYMBOL_GPL(blk_set_pm_only);
+
+void blk_clear_pm_only(struct request_queue *q)
+{
+	blk_queue_flag_clear(QUEUE_FLAG_PM_ONLY, q);
+	wake_up_all(&q->mq_freeze_wq);
+}
+EXPORT_SYMBOL_GPL(blk_clear_pm_only);
 
 /**
  * __blk_run_queue_uncond - run a queue whether or not it has been stopped
@@ -910,6 +930,20 @@ struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(blk_alloc_queue);
 
+/*
+ * Whether or not blk_queue_enter() should proceed. RQF_PM requests are always
+ * allowed. RQF_DV requests are allowed if the PM_ONLY queue flag has not been
+ * set. Other requests are only allowed if neither PM_ONLY nor DV_ONLY has been
+ * set.
+ */
+static inline bool blk_enter_allowed(struct request_queue *q,
+				     blk_mq_req_flags_t flags)
+{
+	return flags & BLK_MQ_REQ_PM ||
+		(!blk_queue_pm_only(q) &&
+		 (flags & BLK_MQ_REQ_DV || !blk_queue_dv_only(q)));
+}
+
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
@@ -917,23 +951,20 @@ EXPORT_SYMBOL(blk_alloc_queue);
  */
 int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 {
-	const bool preempt = flags & (BLK_MQ_REQ_PM | BLK_MQ_REQ_DV);
-
 	while (true) {
 		bool success = false;
 
 		rcu_read_lock();
 		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
 			/*
-			 * The code that sets the PREEMPT_ONLY flag is
-			 * responsible for ensuring that that flag is globally
-			 * visible before the queue is unfrozen.
+			 * The code that sets the PM_ONLY or DV_ONLY queue
+			 * flags is responsible for ensuring that that flag is
+			 * globally visible before the queue is unfrozen.
 			 */
-			if (preempt || !blk_queue_preempt_only(q)) {
+			if (blk_enter_allowed(q, flags))
 				success = true;
-			} else {
+			else
 				percpu_ref_put(&q->q_usage_counter);
-			}
 		}
 		rcu_read_unlock();
 
@@ -954,7 +985,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 
 		wait_event(q->mq_freeze_wq,
 			   (atomic_read(&q->mq_freeze_depth) == 0 &&
-			    (preempt || !blk_queue_preempt_only(q))) ||
+			    blk_enter_allowed(q, flags)) ||
 			   blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index b4e722311ae1..2ac9949bf68e 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -132,7 +132,8 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(SCSI_PASSTHROUGH),
 	QUEUE_FLAG_NAME(QUIESCED),
-	QUEUE_FLAG_NAME(PREEMPT_ONLY),
+	QUEUE_FLAG_NAME(PM_ONLY),
+	QUEUE_FLAG_NAME(DV_ONLY),
 };
 #undef QUEUE_FLAG_NAME
 
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index eb914d8e17fd..d466f846043f 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2997,11 +2997,11 @@ scsi_device_suspend(struct scsi_device *sdev)
 	if (sdev->sdev_state == SDEV_SUSPENDED)
 		return 0;
 
-	blk_set_preempt_only(q);
+	blk_set_pm_only(q);
 
 	blk_mq_freeze_queue(q);
 	/*
-	 * Ensure that the effect of blk_set_preempt_only() will be visible
+	 * Ensure that the effect of blk_set_pm_only() will be visible
 	 * for percpu_ref_tryget() callers that occur after the queue
 	 * unfreeze even if the queue was already frozen before this function
 	 * was called. See also https://lwn.net/Articles/573497/.
@@ -3012,7 +3012,7 @@ scsi_device_suspend(struct scsi_device *sdev)
 	mutex_lock(&sdev->state_mutex);
 	err = scsi_device_set_state(sdev, SDEV_SUSPENDED);
 	if (err)
-		blk_clear_preempt_only(q);
+		blk_clear_pm_only(q);
 	mutex_unlock(&sdev->state_mutex);
 
 	return err;
@@ -3030,7 +3030,7 @@ EXPORT_SYMBOL(scsi_device_suspend);
 void scsi_device_unsuspend(struct scsi_device *sdev)
 {
 	mutex_lock(&sdev->state_mutex);
-	blk_clear_preempt_only(sdev->request_queue);
+	blk_clear_pm_only(sdev->request_queue);
 	if (sdev->sdev_state == SDEV_SUSPENDED)
 		scsi_device_set_state(sdev, SDEV_RUNNING);
 	mutex_unlock(&sdev->state_mutex);
@@ -3088,11 +3088,11 @@ scsi_device_quiesce(struct scsi_device *sdev)
 	 */
 	WARN_ON_ONCE(sdev->quiesced_by && sdev->quiesced_by != current);
 
-	blk_set_preempt_only(q);
+	blk_set_dv_only(q);
 
 	blk_mq_freeze_queue(q);
 	/*
-	 * Ensure that the effect of blk_set_preempt_only() will be visible
+	 * Ensure that the effect of blk_set_dv_only() will be visible
 	 * for percpu_ref_tryget() callers that occur after the queue
 	 * unfreeze even if the queue was already frozen before this function
 	 * was called. See also https://lwn.net/Articles/573497/.
@@ -3105,7 +3105,7 @@ scsi_device_quiesce(struct scsi_device *sdev)
 	if (err == 0)
 		sdev->quiesced_by = current;
 	else
-		blk_clear_preempt_only(q);
+		blk_clear_dv_only(q);
 	mutex_unlock(&sdev->state_mutex);
 
 	return err;
@@ -3125,7 +3125,7 @@ void scsi_device_unquiesce(struct scsi_device *sdev)
 	mutex_lock(&sdev->state_mutex);
 	WARN_ON_ONCE(!sdev->quiesced_by);
 	sdev->quiesced_by = NULL;
-	blk_clear_preempt_only(sdev->request_queue);
+	blk_clear_dv_only(sdev->request_queue);
 	if (sdev->sdev_state == SDEV_QUIESCE)
 		scsi_device_set_state(sdev, SDEV_RUNNING);
 	mutex_unlock(&sdev->state_mutex);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 938725ef492b..7717b71c6da3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -699,7 +699,8 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED  26	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED    28	/* queue has been quiesced */
-#define QUEUE_FLAG_PREEMPT_ONLY	29	/* only process REQ_PREEMPT requests */
+#define QUEUE_FLAG_PM_ONLY	29	/* only process RQF_PM requests */
+#define QUEUE_FLAG_DV_ONLY	30	/* only process RQF_DV requests */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP)	|	\
@@ -737,12 +738,14 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q);
 	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
 			     REQ_FAILFAST_DRIVER))
 #define blk_queue_quiesced(q)	test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
-#define blk_queue_preempt_only(q)				\
-	test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags)
+#define blk_queue_pm_only(q)	test_bit(QUEUE_FLAG_PM_ONLY, &(q)->queue_flags)
+#define blk_queue_dv_only(q)	test_bit(QUEUE_FLAG_DV_ONLY, &(q)->queue_flags)
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 
-extern int blk_set_preempt_only(struct request_queue *q);
-extern void blk_clear_preempt_only(struct request_queue *q);
+extern int blk_set_pm_only(struct request_queue *q);
+extern void blk_clear_pm_only(struct request_queue *q);
+extern int blk_set_dv_only(struct request_queue *q);
+extern void blk_clear_dv_only(struct request_queue *q);
 
 static inline int queue_in_flight(struct request_queue *q)
 {
-- 
2.18.0

* [PATCH v6 06/12] scsi: Reallow SPI domain validation during system suspend
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (4 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 05/12] block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce PM_ONLY Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 07/12] block: Move power management code into a new source file Bart Van Assche
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Woody Suwalski, Ming Lei, Jianchao Wang,
	Hannes Reinecke, Johannes Thumshirn, Alan Stern

Now that SCSI power management and SPI domain validation use different
mechanisms to block SCSI command execution, remove the mechanism that
prevented system suspend during SPI domain validation.

This patch reverts 203f8c250e21 ("block, scsi: Fix race between SPI
domain validation and system suspend").

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Woody Suwalski <terraluna977@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 drivers/scsi/scsi_lib.c           | 37 +++++++++++++++++++++++++++++--
 drivers/scsi/scsi_sysfs.c         |  1 +
 drivers/scsi/scsi_transport_spi.c | 15 ++-----------
 include/scsi/scsi_device.h        |  1 +
 4 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index d466f846043f..89f790e73ed6 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1366,6 +1366,7 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req)
 			ret = BLKPREP_DEFER;
 		break;
 	case SDEV_SUSPENDED:
+	case SDEV_QUIESCED_SUSPENDED:
 		/* Process RQF_PM requests only. */
 		if (!(req->rq_flags & RQF_PM))
 			ret = BLKPREP_DEFER;
@@ -2683,6 +2684,17 @@ scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
 		break;
 
 	case SDEV_QUIESCE:
+		switch (oldstate) {
+		case SDEV_RUNNING:
+		case SDEV_QUIESCED_SUSPENDED:
+		case SDEV_OFFLINE:
+		case SDEV_TRANSPORT_OFFLINE:
+			break;
+		default:
+			goto illegal;
+		}
+		break;
+
 	case SDEV_SUSPENDED:
 		switch (oldstate) {
 		case SDEV_RUNNING:
@@ -2694,6 +2706,15 @@ scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
 		}
 		break;
 
+	case SDEV_QUIESCED_SUSPENDED:
+		switch (oldstate) {
+		case SDEV_QUIESCE:
+			break;
+		default:
+			goto illegal;
+		}
+		break;
+
 	case SDEV_OFFLINE:
 	case SDEV_TRANSPORT_OFFLINE:
 		switch (oldstate) {
@@ -2994,8 +3015,13 @@ scsi_device_suspend(struct scsi_device *sdev)
 	struct request_queue *q = sdev->request_queue;
 	int err;
 
-	if (sdev->sdev_state == SDEV_SUSPENDED)
+	switch (sdev->sdev_state) {
+	case SDEV_SUSPENDED:
+	case SDEV_QUIESCED_SUSPENDED:
 		return 0;
+	default:
+		break;
+	}
 
 	blk_set_pm_only(q);
 
@@ -3011,6 +3037,8 @@ scsi_device_suspend(struct scsi_device *sdev)
 
 	mutex_lock(&sdev->state_mutex);
 	err = scsi_device_set_state(sdev, SDEV_SUSPENDED);
+	if (err)
+		err = scsi_device_set_state(sdev, SDEV_QUIESCED_SUSPENDED);
 	if (err)
 		blk_clear_pm_only(q);
 	mutex_unlock(&sdev->state_mutex);
@@ -3031,8 +3059,13 @@ void scsi_device_unsuspend(struct scsi_device *sdev)
 {
 	mutex_lock(&sdev->state_mutex);
 	blk_clear_pm_only(sdev->request_queue);
-	if (sdev->sdev_state == SDEV_SUSPENDED)
+	switch (sdev->sdev_state) {
+	case SDEV_SUSPENDED:
+	case SDEV_QUIESCED_SUSPENDED:
 		scsi_device_set_state(sdev, SDEV_RUNNING);
+	default:
+		break;
+	}
 	mutex_unlock(&sdev->state_mutex);
 }
 EXPORT_SYMBOL(scsi_device_unsuspend);
diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index 496c5eff4859..8c152e35ff77 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -37,6 +37,7 @@ static const struct {
 	{ SDEV_DEL, "deleted" },
 	{ SDEV_QUIESCE, "quiesce" },
 	{ SDEV_SUSPENDED, "suspended" },
+	{ SDEV_QUIESCED_SUSPENDED, "quiesced-suspended" },
 	{ SDEV_OFFLINE,	"offline" },
 	{ SDEV_TRANSPORT_OFFLINE, "transport-offline" },
 	{ SDEV_BLOCK,	"blocked" },
diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
index 16bec4884249..582b18efc3fb 100644
--- a/drivers/scsi/scsi_transport_spi.c
+++ b/drivers/scsi/scsi_transport_spi.c
@@ -26,7 +26,6 @@
 #include <linux/mutex.h>
 #include <linux/sysfs.h>
 #include <linux/slab.h>
-#include <linux/suspend.h>
 #include <scsi/scsi.h>
 #include "scsi_priv.h"
 #include <scsi/scsi_device.h>
@@ -1008,19 +1007,11 @@ spi_dv_device(struct scsi_device *sdev)
 	u8 *buffer;
 	const int len = SPI_MAX_ECHO_BUFFER_SIZE*2;
 
-	/*
-	 * Because this function and the power management code both call
-	 * scsi_device_quiesce(), it is not safe to perform domain validation
-	 * while suspend or resume is in progress. Hence the
-	 * lock/unlock_system_sleep() calls.
-	 */
-	lock_system_sleep();
-
 	if (unlikely(starget->spi_dv_context))
-		goto unlock;
+		return;
 
 	if (unlikely(scsi_device_get(sdev)))
-		goto unlock;
+		return;
 
 	buffer = kzalloc(len, GFP_KERNEL);
 
@@ -1056,8 +1047,6 @@ spi_dv_device(struct scsi_device *sdev)
 	kfree(buffer);
  out_put:
 	scsi_device_put(sdev);
-unlock:
-	unlock_system_sleep();
 }
 EXPORT_SYMBOL(spi_dv_device);
 
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index a2e3edf8be12..a67dc459b70d 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -44,6 +44,7 @@ enum scsi_device_state {
 				 * no commands allowed */
 	SDEV_QUIESCE,		/* Only RQF_DV requests are accepted. */
 	SDEV_SUSPENDED,		/* Only RQF_PM requests are accepted. */
+	SDEV_QUIESCED_SUSPENDED,/* Only RQF_PM requests are accepted. */
 	SDEV_OFFLINE,		/* Device offlined (by error handling or
 				 * user request */
 	SDEV_TRANSPORT_OFFLINE,	/* Offlined by transport class error handler */
-- 
2.18.0

* [PATCH v6 07/12] block: Move power management code into a new source file
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (5 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 06/12] scsi: Reallow SPI domain validation during system suspend Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit() Bart Van Assche
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei,
	Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern

Move the runtime power management code from blk-core.c into the new
source file blk-pm.c, and move the corresponding declarations from
<linux/blkdev.h> into <linux/blk-pm.h>. For CONFIG_PM=n, leave out the
declarations of the functions that are not used in that mode. This not
only reduces the number of #ifdefs in the block layer core code but
also shrinks the header file <linux/blkdev.h>, which should help
reduce kernel build times when CONFIG_PM is not defined.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/Kconfig          |   5 ++
 block/Makefile         |   1 +
 block/blk-core.c       | 196 +----------------------------------------
 block/blk-pm.c         | 188 +++++++++++++++++++++++++++++++++++++++
 block/blk-pm.h         |  43 +++++++++
 block/elevator.c       |  22 +----
 drivers/scsi/scsi_pm.c |   1 +
 drivers/scsi/sd.c      |   1 +
 drivers/scsi/sr.c      |   1 +
 include/linux/blk-pm.h |  24 +++++
 include/linux/blkdev.h |  23 -----
 11 files changed, 266 insertions(+), 239 deletions(-)
 create mode 100644 block/blk-pm.c
 create mode 100644 block/blk-pm.h
 create mode 100644 include/linux/blk-pm.h

diff --git a/block/Kconfig b/block/Kconfig
index 1f2469a0123c..e213d90a5e64 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -228,4 +228,9 @@ config BLK_MQ_RDMA
 	depends on BLOCK && INFINIBAND
 	default y
 
+config BLK_PM
+	bool
+	depends on BLOCK && PM
+	default y
+
 source block/Kconfig.iosched
diff --git a/block/Makefile b/block/Makefile
index 572b33f32c07..27eac600474f 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -37,3 +37,4 @@ obj-$(CONFIG_BLK_WBT)		+= blk-wbt.o
 obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
+obj-$(CONFIG_BLK_PM)		+= blk-pm.o
diff --git a/block/blk-core.c b/block/blk-core.c
index 401f4927a8db..3770a36e85be 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -42,6 +42,7 @@
 #include "blk.h"
 #include "blk-mq.h"
 #include "blk-mq-sched.h"
+#include "blk-pm.h"
 #include "blk-rq-qos.h"
 
 #ifdef CONFIG_DEBUG_FS
@@ -1757,16 +1758,6 @@ void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part)
 }
 EXPORT_SYMBOL_GPL(part_round_stats);
 
-#ifdef CONFIG_PM
-static void blk_pm_put_request(struct request *rq)
-{
-	if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending)
-		pm_runtime_mark_last_busy(rq->q->dev);
-}
-#else
-static inline void blk_pm_put_request(struct request *rq) {}
-#endif
-
 void __blk_put_request(struct request_queue *q, struct request *req)
 {
 	req_flags_t rq_flags = req->rq_flags;
@@ -3783,191 +3774,6 @@ void blk_finish_plug(struct blk_plug *plug)
 }
 EXPORT_SYMBOL(blk_finish_plug);
 
-#ifdef CONFIG_PM
-/**
- * blk_pm_runtime_init - Block layer runtime PM initialization routine
- * @q: the queue of the device
- * @dev: the device the queue belongs to
- *
- * Description:
- *    Initialize runtime-PM-related fields for @q and start auto suspend for
- *    @dev. Drivers that want to take advantage of request-based runtime PM
- *    should call this function after @dev has been initialized, and its
- *    request queue @q has been allocated, and runtime PM for it can not happen
- *    yet(either due to disabled/forbidden or its usage_count > 0). In most
- *    cases, driver should call this function before any I/O has taken place.
- *
- *    This function takes care of setting up using auto suspend for the device,
- *    the autosuspend delay is set to -1 to make runtime suspend impossible
- *    until an updated value is either set by user or by driver. Drivers do
- *    not need to touch other autosuspend settings.
- *
- *    The block layer runtime PM is request based, so only works for drivers
- *    that use request as their IO unit instead of those directly use bio's.
- */
-void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
-{
-	/* Don't enable runtime PM for blk-mq until it is ready */
-	if (q->mq_ops) {
-		pm_runtime_disable(dev);
-		return;
-	}
-
-	q->dev = dev;
-	q->rpm_status = RPM_ACTIVE;
-	pm_runtime_set_autosuspend_delay(q->dev, -1);
-	pm_runtime_use_autosuspend(q->dev);
-}
-EXPORT_SYMBOL(blk_pm_runtime_init);
-
-/**
- * blk_pre_runtime_suspend - Pre runtime suspend check
- * @q: the queue of the device
- *
- * Description:
- *    This function will check if runtime suspend is allowed for the device
- *    by examining if there are any requests pending in the queue. If there
- *    are requests pending, the device can not be runtime suspended; otherwise,
- *    the queue's status will be updated to SUSPENDING and the driver can
- *    proceed to suspend the device.
- *
- *    For the not allowed case, we mark last busy for the device so that
- *    runtime PM core will try to autosuspend it some time later.
- *
- *    This function should be called near the start of the device's
- *    runtime_suspend callback.
- *
- * Return:
- *    0		- OK to runtime suspend the device
- *    -EBUSY	- Device should not be runtime suspended
- */
-int blk_pre_runtime_suspend(struct request_queue *q)
-{
-	int ret = 0;
-
-	if (!q->dev)
-		return ret;
-
-	spin_lock_irq(q->queue_lock);
-	if (q->nr_pending) {
-		ret = -EBUSY;
-		pm_runtime_mark_last_busy(q->dev);
-	} else {
-		q->rpm_status = RPM_SUSPENDING;
-	}
-	spin_unlock_irq(q->queue_lock);
-	return ret;
-}
-EXPORT_SYMBOL(blk_pre_runtime_suspend);
-
-/**
- * blk_post_runtime_suspend - Post runtime suspend processing
- * @q: the queue of the device
- * @err: return value of the device's runtime_suspend function
- *
- * Description:
- *    Update the queue's runtime status according to the return value of the
- *    device's runtime suspend function and mark last busy for the device so
- *    that PM core will try to auto suspend the device at a later time.
- *
- *    This function should be called near the end of the device's
- *    runtime_suspend callback.
- */
-void blk_post_runtime_suspend(struct request_queue *q, int err)
-{
-	if (!q->dev)
-		return;
-
-	spin_lock_irq(q->queue_lock);
-	if (!err) {
-		q->rpm_status = RPM_SUSPENDED;
-	} else {
-		q->rpm_status = RPM_ACTIVE;
-		pm_runtime_mark_last_busy(q->dev);
-	}
-	spin_unlock_irq(q->queue_lock);
-}
-EXPORT_SYMBOL(blk_post_runtime_suspend);
-
-/**
- * blk_pre_runtime_resume - Pre runtime resume processing
- * @q: the queue of the device
- *
- * Description:
- *    Update the queue's runtime status to RESUMING in preparation for the
- *    runtime resume of the device.
- *
- *    This function should be called near the start of the device's
- *    runtime_resume callback.
- */
-void blk_pre_runtime_resume(struct request_queue *q)
-{
-	if (!q->dev)
-		return;
-
-	spin_lock_irq(q->queue_lock);
-	q->rpm_status = RPM_RESUMING;
-	spin_unlock_irq(q->queue_lock);
-}
-EXPORT_SYMBOL(blk_pre_runtime_resume);
-
-/**
- * blk_post_runtime_resume - Post runtime resume processing
- * @q: the queue of the device
- * @err: return value of the device's runtime_resume function
- *
- * Description:
- *    Update the queue's runtime status according to the return value of the
- *    device's runtime_resume function. If it is successfully resumed, process
- *    the requests that are queued into the device's queue when it is resuming
- *    and then mark last busy and initiate autosuspend for it.
- *
- *    This function should be called near the end of the device's
- *    runtime_resume callback.
- */
-void blk_post_runtime_resume(struct request_queue *q, int err)
-{
-	if (!q->dev)
-		return;
-
-	spin_lock_irq(q->queue_lock);
-	if (!err) {
-		q->rpm_status = RPM_ACTIVE;
-		__blk_run_queue(q);
-		pm_runtime_mark_last_busy(q->dev);
-		pm_request_autosuspend(q->dev);
-	} else {
-		q->rpm_status = RPM_SUSPENDED;
-	}
-	spin_unlock_irq(q->queue_lock);
-}
-EXPORT_SYMBOL(blk_post_runtime_resume);
-
-/**
- * blk_set_runtime_active - Force runtime status of the queue to be active
- * @q: the queue of the device
- *
- * If the device is left runtime suspended during system suspend the resume
- * hook typically resumes the device and corrects runtime status
- * accordingly. However, that does not affect the queue runtime PM status
- * which is still "suspended". This prevents processing requests from the
- * queue.
- *
- * This function can be used in driver's resume hook to correct queue
- * runtime PM status and re-enable peeking requests from the queue. It
- * should be called before first request is added to the queue.
- */
-void blk_set_runtime_active(struct request_queue *q)
-{
-	spin_lock_irq(q->queue_lock);
-	q->rpm_status = RPM_ACTIVE;
-	pm_runtime_mark_last_busy(q->dev);
-	pm_request_autosuspend(q->dev);
-	spin_unlock_irq(q->queue_lock);
-}
-EXPORT_SYMBOL(blk_set_runtime_active);
-#endif
-
 int __init blk_dev_init(void)
 {
 	BUILD_BUG_ON(REQ_OP_LAST >= (1 << REQ_OP_BITS));
diff --git a/block/blk-pm.c b/block/blk-pm.c
new file mode 100644
index 000000000000..9b636960d285
--- /dev/null
+++ b/block/blk-pm.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/blk-pm.h>
+#include <linux/blkdev.h>
+#include <linux/pm_runtime.h>
+
+/**
+ * blk_pm_runtime_init - Block layer runtime PM initialization routine
+ * @q: the queue of the device
+ * @dev: the device the queue belongs to
+ *
+ * Description:
+ *    Initialize runtime-PM-related fields for @q and start auto suspend for
+ *    @dev. Drivers that want to take advantage of request-based runtime PM
+ *    should call this function after @dev has been initialized, and its
+ *    request queue @q has been allocated, and runtime PM for it can not happen
+ *    yet(either due to disabled/forbidden or its usage_count > 0). In most
+ *    cases, driver should call this function before any I/O has taken place.
+ *
+ *    This function takes care of setting up using auto suspend for the device,
+ *    the autosuspend delay is set to -1 to make runtime suspend impossible
+ *    until an updated value is either set by user or by driver. Drivers do
+ *    not need to touch other autosuspend settings.
+ *
+ *    The block layer runtime PM is request based, so only works for drivers
+ *    that use request as their IO unit instead of those directly use bio's.
+ */
+void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
+{
+	/* Don't enable runtime PM for blk-mq until it is ready */
+	if (q->mq_ops) {
+		pm_runtime_disable(dev);
+		return;
+	}
+
+	q->dev = dev;
+	q->rpm_status = RPM_ACTIVE;
+	pm_runtime_set_autosuspend_delay(q->dev, -1);
+	pm_runtime_use_autosuspend(q->dev);
+}
+EXPORT_SYMBOL(blk_pm_runtime_init);
+
+/**
+ * blk_pre_runtime_suspend - Pre runtime suspend check
+ * @q: the queue of the device
+ *
+ * Description:
+ *    This function will check if runtime suspend is allowed for the device
+ *    by examining if there are any requests pending in the queue. If there
+ *    are requests pending, the device can not be runtime suspended; otherwise,
+ *    the queue's status will be updated to SUSPENDING and the driver can
+ *    proceed to suspend the device.
+ *
+ *    For the not allowed case, we mark last busy for the device so that
+ *    runtime PM core will try to autosuspend it some time later.
+ *
+ *    This function should be called near the start of the device's
+ *    runtime_suspend callback.
+ *
+ * Return:
+ *    0		- OK to runtime suspend the device
+ *    -EBUSY	- Device should not be runtime suspended
+ */
+int blk_pre_runtime_suspend(struct request_queue *q)
+{
+	int ret = 0;
+
+	if (!q->dev)
+		return ret;
+
+	spin_lock_irq(q->queue_lock);
+	if (q->nr_pending) {
+		ret = -EBUSY;
+		pm_runtime_mark_last_busy(q->dev);
+	} else {
+		q->rpm_status = RPM_SUSPENDING;
+	}
+	spin_unlock_irq(q->queue_lock);
+	return ret;
+}
+EXPORT_SYMBOL(blk_pre_runtime_suspend);
+
+/**
+ * blk_post_runtime_suspend - Post runtime suspend processing
+ * @q: the queue of the device
+ * @err: return value of the device's runtime_suspend function
+ *
+ * Description:
+ *    Update the queue's runtime status according to the return value of the
+ *    device's runtime suspend function and mark last busy for the device so
+ *    that PM core will try to auto suspend the device at a later time.
+ *
+ *    This function should be called near the end of the device's
+ *    runtime_suspend callback.
+ */
+void blk_post_runtime_suspend(struct request_queue *q, int err)
+{
+	if (!q->dev)
+		return;
+
+	spin_lock_irq(q->queue_lock);
+	if (!err) {
+		q->rpm_status = RPM_SUSPENDED;
+	} else {
+		q->rpm_status = RPM_ACTIVE;
+		pm_runtime_mark_last_busy(q->dev);
+	}
+	spin_unlock_irq(q->queue_lock);
+}
+EXPORT_SYMBOL(blk_post_runtime_suspend);
+
+/**
+ * blk_pre_runtime_resume - Pre runtime resume processing
+ * @q: the queue of the device
+ *
+ * Description:
+ *    Update the queue's runtime status to RESUMING in preparation for the
+ *    runtime resume of the device.
+ *
+ *    This function should be called near the start of the device's
+ *    runtime_resume callback.
+ */
+void blk_pre_runtime_resume(struct request_queue *q)
+{
+	if (!q->dev)
+		return;
+
+	spin_lock_irq(q->queue_lock);
+	q->rpm_status = RPM_RESUMING;
+	spin_unlock_irq(q->queue_lock);
+}
+EXPORT_SYMBOL(blk_pre_runtime_resume);
+
+/**
+ * blk_post_runtime_resume - Post runtime resume processing
+ * @q: the queue of the device
+ * @err: return value of the device's runtime_resume function
+ *
+ * Description:
+ *    Update the queue's runtime status according to the return value of the
+ *    device's runtime_resume function. If the resume succeeded, restart
+ *    processing the requests that were queued while the device was resuming,
+ *    then mark the device as last busy and initiate autosuspend for it.
+ *
+ *    This function should be called near the end of the device's
+ *    runtime_resume callback.
+ */
+void blk_post_runtime_resume(struct request_queue *q, int err)
+{
+	if (!q->dev)
+		return;
+
+	spin_lock_irq(q->queue_lock);
+	if (!err) {
+		q->rpm_status = RPM_ACTIVE;
+		__blk_run_queue(q);
+		pm_runtime_mark_last_busy(q->dev);
+		pm_request_autosuspend(q->dev);
+	} else {
+		q->rpm_status = RPM_SUSPENDED;
+	}
+	spin_unlock_irq(q->queue_lock);
+}
+EXPORT_SYMBOL(blk_post_runtime_resume);
+
+/**
+ * blk_set_runtime_active - Force runtime status of the queue to be active
+ * @q: the queue of the device
+ *
+ * If the device is left runtime suspended during system suspend, the resume
+ * hook typically resumes the device and corrects the runtime status
+ * accordingly. However, that does not affect the queue's runtime PM status,
+ * which is still "suspended". This prevents requests from being processed
+ * from the queue.
+ *
+ * This function can be used in a driver's resume hook to correct the queue's
+ * runtime PM status and re-enable fetching requests from the queue. It
+ * should be called before the first request is added to the queue.
+ */
+void blk_set_runtime_active(struct request_queue *q)
+{
+	spin_lock_irq(q->queue_lock);
+	q->rpm_status = RPM_ACTIVE;
+	pm_runtime_mark_last_busy(q->dev);
+	pm_request_autosuspend(q->dev);
+	spin_unlock_irq(q->queue_lock);
+}
+EXPORT_SYMBOL(blk_set_runtime_active);
diff --git a/block/blk-pm.h b/block/blk-pm.h
new file mode 100644
index 000000000000..1ffc8ef203ec
--- /dev/null
+++ b/block/blk-pm.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _BLOCK_BLK_PM_H_
+#define _BLOCK_BLK_PM_H_
+
+#include <linux/pm_runtime.h>
+
+#ifdef CONFIG_PM
+static inline void blk_pm_requeue_request(struct request *rq)
+{
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
+		rq->q->nr_pending--;
+}
+
+static inline void blk_pm_add_request(struct request_queue *q,
+				      struct request *rq)
+{
+	if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 &&
+	    (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING))
+		pm_request_resume(q->dev);
+}
+
+static inline void blk_pm_put_request(struct request *rq)
+{
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending)
+		pm_runtime_mark_last_busy(rq->q->dev);
+}
+#else
+static inline void blk_pm_requeue_request(struct request *rq)
+{
+}
+
+static inline void blk_pm_add_request(struct request_queue *q,
+				      struct request *rq)
+{
+}
+
+static inline void blk_pm_put_request(struct request *rq)
+{
+}
+#endif
+
+#endif /* _BLOCK_BLK_PM_H_ */
diff --git a/block/elevator.c b/block/elevator.c
index fa828b5bfd4b..4c15f0240c99 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -41,6 +41,7 @@
 
 #include "blk.h"
 #include "blk-mq-sched.h"
+#include "blk-pm.h"
 #include "blk-wbt.h"
 
 static DEFINE_SPINLOCK(elv_list_lock);
@@ -557,27 +558,6 @@ void elv_bio_merged(struct request_queue *q, struct request *rq,
 		e->type->ops.sq.elevator_bio_merged_fn(q, rq, bio);
 }
 
-#ifdef CONFIG_PM
-static void blk_pm_requeue_request(struct request *rq)
-{
-	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
-		rq->q->nr_pending--;
-}
-
-static void blk_pm_add_request(struct request_queue *q, struct request *rq)
-{
-	if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 &&
-	    (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING))
-		pm_request_resume(q->dev);
-}
-#else
-static inline void blk_pm_requeue_request(struct request *rq) {}
-static inline void blk_pm_add_request(struct request_queue *q,
-				      struct request *rq)
-{
-}
-#endif
-
 void elv_requeue_request(struct request_queue *q, struct request *rq)
 {
 	/*
diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
index 6300e168701d..eb790b679eaf 100644
--- a/drivers/scsi/scsi_pm.c
+++ b/drivers/scsi/scsi_pm.c
@@ -8,6 +8,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/export.h>
 #include <linux/async.h>
+#include <linux/blk-pm.h>
 
 #include <scsi/scsi.h>
 #include <scsi/scsi_device.h>
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index bbebdc3769b0..69ab459abb98 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -45,6 +45,7 @@
 #include <linux/init.h>
 #include <linux/blkdev.h>
 #include <linux/blkpg.h>
+#include <linux/blk-pm.h>
 #include <linux/delay.h>
 #include <linux/mutex.h>
 #include <linux/string_helpers.h>
diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 3f3cb72e0c0c..de4413e66eca 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -43,6 +43,7 @@
 #include <linux/interrupt.h>
 #include <linux/init.h>
 #include <linux/blkdev.h>
+#include <linux/blk-pm.h>
 #include <linux/mutex.h>
 #include <linux/slab.h>
 #include <linux/pm_runtime.h>
diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h
new file mode 100644
index 000000000000..b80c65aba249
--- /dev/null
+++ b/include/linux/blk-pm.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _BLK_PM_H_
+#define _BLK_PM_H_
+
+struct device;
+struct request_queue;
+
+/*
+ * block layer runtime pm functions
+ */
+#ifdef CONFIG_PM
+extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
+extern int blk_pre_runtime_suspend(struct request_queue *q);
+extern void blk_post_runtime_suspend(struct request_queue *q, int err);
+extern void blk_pre_runtime_resume(struct request_queue *q);
+extern void blk_post_runtime_resume(struct request_queue *q, int err);
+extern void blk_set_runtime_active(struct request_queue *q);
+#else
+static inline void blk_pm_runtime_init(struct request_queue *q,
+				       struct device *dev) {}
+#endif
+
+#endif /* _BLK_PM_H_ */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7717b71c6da3..7560c88a0fe2 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1284,29 +1284,6 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id,
 extern void blk_put_queue(struct request_queue *);
 extern void blk_set_queue_dying(struct request_queue *);
 
-/*
- * block layer runtime pm functions
- */
-#ifdef CONFIG_PM
-extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
-extern int blk_pre_runtime_suspend(struct request_queue *q);
-extern void blk_post_runtime_suspend(struct request_queue *q, int err);
-extern void blk_pre_runtime_resume(struct request_queue *q);
-extern void blk_post_runtime_resume(struct request_queue *q, int err);
-extern void blk_set_runtime_active(struct request_queue *q);
-#else
-static inline void blk_pm_runtime_init(struct request_queue *q,
-	struct device *dev) {}
-static inline int blk_pre_runtime_suspend(struct request_queue *q)
-{
-	return -ENOSYS;
-}
-static inline void blk_post_runtime_suspend(struct request_queue *q, int err) {}
-static inline void blk_pre_runtime_resume(struct request_queue *q) {}
-static inline void blk_post_runtime_resume(struct request_queue *q, int err) {}
-static inline void blk_set_runtime_active(struct request_queue *q) {}
-#endif
-
 /*
  * blk_plug permits building a queue of related requests by holding the I/O
  * fragments for a short period. This allows merging of sequential requests
-- 
2.18.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit()
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (6 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 07/12] block: Move power management code into a new source file Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-10  2:39   ` jianchao.wang
  2018-08-09 19:41 ` [PATCH v6 09/12] block: Split blk_pm_add_request() and blk_pm_put_request() Bart Van Assche
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Jianchao Wang, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Since it is possible to unbind a SCSI ULD and since unbinding
removes the association between a request queue and struct device,
the q->dev pointer has to be reset during unbind. Hence introduce
a function in the block layer that clears request_queue.dev.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/blk-pm.c         | 18 ++++++++++++++++++
 drivers/scsi/sd.c      |  9 ++++-----
 drivers/scsi/sr.c      |  3 ++-
 include/linux/blk-pm.h |  2 ++
 4 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/block/blk-pm.c b/block/blk-pm.c
index 9b636960d285..bf8532da952d 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -40,6 +40,24 @@ void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 }
 EXPORT_SYMBOL(blk_pm_runtime_init);
 
+/**
+ * blk_pm_runtime_exit - runtime PM exit routine
+ * @q: the queue of the device
+ *
+ * This function should be called from the device_driver.remove() callback
+ * to prevent any further runtime power management calls for queue @q from
+ * occurring.
+ */
+void blk_pm_runtime_exit(struct request_queue *q)
+{
+	if (!q->dev)
+		return;
+
+	pm_runtime_get_sync(q->dev);
+	q->dev = NULL;
+}
+EXPORT_SYMBOL(blk_pm_runtime_exit);
+
 /**
  * blk_pre_runtime_suspend - Pre runtime suspend check
  * @q: the queue of the device
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 69ab459abb98..5537762dfcfd 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3420,12 +3420,11 @@ static int sd_probe(struct device *dev)
  **/
 static int sd_remove(struct device *dev)
 {
-	struct scsi_disk *sdkp;
-	dev_t devt;
+	struct scsi_disk *sdkp = dev_get_drvdata(dev);
+	struct scsi_device *sdp = sdkp->device;
+	dev_t devt = disk_devt(sdkp->disk);
 
-	sdkp = dev_get_drvdata(dev);
-	devt = disk_devt(sdkp->disk);
-	scsi_autopm_get_device(sdkp->device);
+	blk_pm_runtime_exit(sdp->request_queue);
 
 	async_synchronize_full_domain(&scsi_sd_pm_domain);
 	async_synchronize_full_domain(&scsi_sd_probe_domain);
diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index de4413e66eca..476987f7ed48 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -1002,8 +1002,9 @@ static void sr_kref_release(struct kref *kref)
 static int sr_remove(struct device *dev)
 {
 	struct scsi_cd *cd = dev_get_drvdata(dev);
+	struct scsi_device *sdev = cd->device;
 
-	scsi_autopm_get_device(cd->device);
+	blk_pm_runtime_exit(sdev->request_queue);
 
 	del_gendisk(cd->disk);
 	dev_set_drvdata(dev, NULL);
diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h
index b80c65aba249..6d654f41acbf 100644
--- a/include/linux/blk-pm.h
+++ b/include/linux/blk-pm.h
@@ -11,6 +11,7 @@ struct request_queue;
  */
 #ifdef CONFIG_PM
 extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
+extern void blk_pm_runtime_exit(struct request_queue *q);
 extern int blk_pre_runtime_suspend(struct request_queue *q);
 extern void blk_post_runtime_suspend(struct request_queue *q, int err);
 extern void blk_pre_runtime_resume(struct request_queue *q);
@@ -19,6 +20,7 @@ extern void blk_set_runtime_active(struct request_queue *q);
 #else
 static inline void blk_pm_runtime_init(struct request_queue *q,
 				       struct device *dev) {}
+static inline void blk_pm_runtime_exit(struct request_queue *q) {}
 #endif
 
 #endif /* _BLK_PM_H_ */
-- 
2.18.0


* [PATCH v6 09/12] block: Split blk_pm_add_request() and blk_pm_put_request()
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (7 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit() Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 10/12] block: Change the runtime power management approach (1/2) Bart Van Assche
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Jianchao Wang, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Move the pm_request_resume() and pm_runtime_mark_last_busy() calls into
two new functions.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/blk-core.c |  1 +
 block/blk-pm.h   | 36 +++++++++++++++++++++++++++++++-----
 block/elevator.c |  1 +
 3 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 3770a36e85be..59dd98585eb0 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1774,6 +1774,7 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 
 	blk_req_zone_write_unlock(req);
 	blk_pm_put_request(req);
+	blk_pm_mark_last_busy(req);
 
 	elv_completed_request(q, req);
 
diff --git a/block/blk-pm.h b/block/blk-pm.h
index 1ffc8ef203ec..a8564ea72a41 100644
--- a/block/blk-pm.h
+++ b/block/blk-pm.h
@@ -6,8 +6,23 @@
 #include <linux/pm_runtime.h>
 
 #ifdef CONFIG_PM
+static inline void blk_pm_request_resume(struct request_queue *q)
+{
+	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
+		       q->rpm_status == RPM_SUSPENDING))
+		pm_request_resume(q->dev);
+}
+
+static inline void blk_pm_mark_last_busy(struct request *rq)
+{
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
+		pm_runtime_mark_last_busy(rq->q->dev);
+}
+
 static inline void blk_pm_requeue_request(struct request *rq)
 {
+	lockdep_assert_held(rq->q->queue_lock);
+
 	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
 		rq->q->nr_pending--;
 }
@@ -15,17 +30,28 @@ static inline void blk_pm_requeue_request(struct request *rq)
 static inline void blk_pm_add_request(struct request_queue *q,
 				      struct request *rq)
 {
-	if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 &&
-	    (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING))
-		pm_request_resume(q->dev);
+	lockdep_assert_held(q->queue_lock);
+
+	if (q->dev && !(rq->rq_flags & RQF_PM))
+		q->nr_pending++;
 }
 
 static inline void blk_pm_put_request(struct request *rq)
 {
-	if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending)
-		pm_runtime_mark_last_busy(rq->q->dev);
+	lockdep_assert_held(rq->q->queue_lock);
+
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
+		--rq->q->nr_pending;
 }
 #else
+static inline void blk_pm_request_resume(struct request_queue *q)
+{
+}
+
+static inline void blk_pm_mark_last_busy(struct request *rq)
+{
+}
+
 static inline void blk_pm_requeue_request(struct request *rq)
 {
 }
diff --git a/block/elevator.c b/block/elevator.c
index 4c15f0240c99..00c5d8dbce16 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -601,6 +601,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 	trace_block_rq_insert(q, rq);
 
 	blk_pm_add_request(q, rq);
+	blk_pm_request_resume(q);
 
 	rq->q = q;
 
-- 
2.18.0


* [PATCH v6 10/12] block: Change the runtime power management approach (1/2)
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (8 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 09/12] block: Split blk_pm_add_request() and blk_pm_put_request() Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-10  1:59   ` jianchao.wang
  2018-08-09 19:41 ` [PATCH v6 11/12] block: Change the runtime power management approach (2/2) Bart Van Assche
  2018-08-09 19:41 ` [PATCH v6 12/12] blk-mq: Enable support for runtime power management Bart Van Assche
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei,
	Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern

Instead of scheduling runtime resume of a request queue after a
request has been queued, schedule asynchronous resume during request
allocation. The new pm_request_resume() calls occur after
blk_queue_enter() has increased the q_usage_counter request queue
member. This change is needed for a later patch that will make request
allocation block while the queue status is not RPM_ACTIVE.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/blk-core.c | 2 ++
 block/blk-mq.c   | 2 ++
 block/elevator.c | 1 -
 3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 59dd98585eb0..f30545fb2de2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -972,6 +972,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 		if (success)
 			return 0;
 
+		blk_pm_request_resume(q);
+
 		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2a0eb058ba5a..24439735f20b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -33,6 +33,7 @@
 #include "blk-mq.h"
 #include "blk-mq-debugfs.h"
 #include "blk-mq-tag.h"
+#include "blk-pm.h"
 #include "blk-stat.h"
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
@@ -473,6 +474,7 @@ static void __blk_mq_free_request(struct request *rq)
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
 	const int sched_tag = rq->internal_tag;
 
+	blk_pm_mark_last_busy(rq);
 	if (rq->tag != -1)
 		blk_mq_put_tag(hctx, hctx->tags, ctx, rq->tag);
 	if (sched_tag != -1)
diff --git a/block/elevator.c b/block/elevator.c
index 00c5d8dbce16..4c15f0240c99 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -601,7 +601,6 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 	trace_block_rq_insert(q, rq);
 
 	blk_pm_add_request(q, rq);
-	blk_pm_request_resume(q);
 
 	rq->q = q;
 
-- 
2.18.0


* [PATCH v6 11/12] block: Change the runtime power management approach (2/2)
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (9 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 10/12] block: Change the runtime power management approach (1/2) Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  2018-08-10  1:51   ` jianchao.wang
  2018-08-09 19:41 ` [PATCH v6 12/12] blk-mq: Enable support for runtime power management Bart Van Assche
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei,
	Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern

Instead of allowing requests that are not power management requests
to enter the queue in runtime suspended status (RPM_SUSPENDED), make
the blk_get_request() caller block. This change fixes a starvation
issue: it is now guaranteed that power management requests will be
executed no matter how many blk_get_request() callers are waiting.
Instead of maintaining the q->nr_pending counter, rely on
q->q_usage_counter. Call pm_runtime_mark_last_busy() every time a
request finishes instead of only if the queue depth drops to zero.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/blk-core.c | 37 ++++++-------------------
 block/blk-pm.c   | 72 ++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 75 insertions(+), 34 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index f30545fb2de2..b0bb6b5320fe 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2772,30 +2772,6 @@ void blk_account_io_done(struct request *req, u64 now)
 	}
 }
 
-#ifdef CONFIG_PM
-/*
- * Don't process normal requests when queue is suspended
- * or in the process of suspending/resuming
- */
-static bool blk_pm_allow_request(struct request *rq)
-{
-	switch (rq->q->rpm_status) {
-	case RPM_RESUMING:
-	case RPM_SUSPENDING:
-		return rq->rq_flags & RQF_PM;
-	case RPM_SUSPENDED:
-		return false;
-	default:
-		return true;
-	}
-}
-#else
-static bool blk_pm_allow_request(struct request *rq)
-{
-	return true;
-}
-#endif
-
 void blk_account_io_start(struct request *rq, bool new_io)
 {
 	struct hd_struct *part;
@@ -2841,11 +2817,14 @@ static struct request *elv_next_request(struct request_queue *q)
 
 	while (1) {
 		list_for_each_entry(rq, &q->queue_head, queuelist) {
-			if (blk_pm_allow_request(rq))
-				return rq;
-
-			if (rq->rq_flags & RQF_SOFTBARRIER)
-				break;
+#ifdef CONFIG_PM
+			/*
+			 * If a request gets queued in state RPM_SUSPENDED
+			 * then that's a kernel bug.
+			 */
+			WARN_ON_ONCE(q->rpm_status == RPM_SUSPENDED);
+#endif
+			return rq;
 		}
 
 		/*
diff --git a/block/blk-pm.c b/block/blk-pm.c
index bf8532da952d..977beffdccd2 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -1,8 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/blk-mq.h>
 #include <linux/blk-pm.h>
 #include <linux/blkdev.h>
 #include <linux/pm_runtime.h>
+#include "blk-mq.h"
+#include "blk-mq-tag.h"
 
 /**
  * blk_pm_runtime_init - Block layer runtime PM initialization routine
@@ -58,6 +61,36 @@ void blk_pm_runtime_exit(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_pm_runtime_exit);
 
+struct in_flight_data {
+	struct request_queue	*q;
+	int			in_flight;
+};
+
+static void blk_count_in_flight(struct blk_mq_hw_ctx *hctx, struct request *rq,
+				void *priv, bool reserved)
+{
+	struct in_flight_data *in_flight = priv;
+
+	if (rq->q == in_flight->q)
+		in_flight->in_flight++;
+}
+
+/*
+ * Count the number of requests that are in flight for request queue @q. Use
+ * @q->nr_pending for legacy queues. Iterate over the tag set for blk-mq
+ * queues. Use blk_mq_queue_tag_busy_iter() instead of
+ * blk_mq_tagset_busy_iter() because the latter only considers requests that
+ * have already been started.
+ */
+static int blk_requests_in_flight(struct request_queue *q)
+{
+	struct in_flight_data in_flight = { .q = q };
+
+	if (q->mq_ops)
+		blk_mq_queue_tag_busy_iter(q, blk_count_in_flight, &in_flight);
+	return q->nr_pending + in_flight.in_flight;
+}
+
 /**
  * blk_pre_runtime_suspend - Pre runtime suspend check
  * @q: the queue of the device
@@ -86,14 +119,38 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 	if (!q->dev)
 		return ret;
 
+	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
+
+	blk_set_pm_only(q);
+	/*
+	 * This function only gets called if the most recent
+	 * pm_request_resume() call occurred at least autosuspend_delay_ms
+	 * ago. Since blk_queue_enter() is called by the request allocation
+	 * code before pm_request_resume(), if no requests have a tag assigned
+	 * it is safe to suspend the device.
+	 */
+	ret = -EBUSY;
+	if (blk_requests_in_flight(q) == 0) {
+		/*
+		 * Call synchronize_rcu() such that later blk_queue_enter()
+		 * calls see the preempt-only state. See also
+		 * http://lwn.net/Articles/573497/.
+		 */
+		synchronize_rcu();
+		if (blk_requests_in_flight(q) == 0)
+			ret = 0;
+	}
+
 	spin_lock_irq(q->queue_lock);
-	if (q->nr_pending) {
-		ret = -EBUSY;
+	if (ret < 0)
 		pm_runtime_mark_last_busy(q->dev);
-	} else {
+	else
 		q->rpm_status = RPM_SUSPENDING;
-	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (ret)
+		blk_clear_pm_only(q);
+
 	return ret;
 }
 EXPORT_SYMBOL(blk_pre_runtime_suspend);
@@ -124,6 +181,9 @@ void blk_post_runtime_suspend(struct request_queue *q, int err)
 		pm_runtime_mark_last_busy(q->dev);
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_suspend);
 
@@ -171,13 +231,15 @@ void blk_post_runtime_resume(struct request_queue *q, int err)
 	spin_lock_irq(q->queue_lock);
 	if (!err) {
 		q->rpm_status = RPM_ACTIVE;
-		__blk_run_queue(q);
 		pm_runtime_mark_last_busy(q->dev);
 		pm_request_autosuspend(q->dev);
 	} else {
 		q->rpm_status = RPM_SUSPENDED;
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (!err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_resume);
 
-- 
2.18.0


* [PATCH v6 12/12] blk-mq: Enable support for runtime power management
  2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
                   ` (10 preceding siblings ...)
  2018-08-09 19:41 ` [PATCH v6 11/12] block: Change the runtime power management approach (2/2) Bart Van Assche
@ 2018-08-09 19:41 ` Bart Van Assche
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-09 19:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei,
	Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern

Now that the blk-mq core processes power management requests
(marked with RQF_PREEMPT) in states other than RPM_ACTIVE, enable
runtime power management for blk-mq.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 block/blk-pm.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/block/blk-pm.c b/block/blk-pm.c
index 977beffdccd2..e76d85dcdd67 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -30,12 +30,6 @@
  */
 void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 {
-	/* Don't enable runtime PM for blk-mq until it is ready */
-	if (q->mq_ops) {
-		pm_runtime_disable(dev);
-		return;
-	}
-
 	q->dev = dev;
 	q->rpm_status = RPM_ACTIVE;
 	pm_runtime_set_autosuspend_delay(q->dev, -1);
-- 
2.18.0


* Re: [PATCH v6 02/12] scsi: Alter handling of RQF_DV requests
  2018-08-09 19:41 ` [PATCH v6 02/12] scsi: Alter handling of RQF_DV requests Bart Van Assche
@ 2018-08-10  1:20   ` Ming Lei
  2018-08-10 15:07     ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: Ming Lei @ 2018-08-10  1:20 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Martin K . Petersen,
	Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern

On Thu, Aug 09, 2018 at 12:41:39PM -0700, Bart Van Assche wrote:
> Process all requests in state SDEV_CREATED instead of only RQF_DV
> requests. This does not change the behavior of the SCSI core because
> the SCSI device state is modified into another state before SCSI
> devices become visible in sysfs and before any device nodes are
> created in /dev. Do not process RQF_DV requests in state SDEV_CANCEL
> because only power management requests should be processed in this
> state.

Could you explain a bit why only PM requests should be processed in
SDEV_CANCEL?

Thanks,
Ming


* Re: [PATCH v6 05/12] block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce PM_ONLY
  2018-08-09 19:41 ` [PATCH v6 05/12] block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce PM_ONLY Bart Van Assche
@ 2018-08-10  1:39   ` jianchao.wang
  2018-08-10 15:18     ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: jianchao.wang @ 2018-08-10  1:39 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Martin K . Petersen, Ming Lei,
	Hannes Reinecke, Johannes Thumshirn, Alan Stern

Hi Bart

On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> +/*
> + * Whether or not blk_queue_enter() should proceed. RQF_PM requests are always
> + * allowed. RQF_DV requests are allowed if the PM_ONLY queue flag has not been
> + * set. Other requests are only allowed if neither PM_ONLY nor DV_ONLY has been
> + * set.
> + */
> +static inline bool blk_enter_allowed(struct request_queue *q,
> +				     blk_mq_req_flags_t flags)
> +{
> +	return flags & BLK_MQ_REQ_PM ||
> +		(!blk_queue_pm_only(q) &&
> +		 (flags & BLK_MQ_REQ_DV || !blk_queue_dv_only(q)));
> +}

If a new state is indeed necessary, I think this kind of checking in the hot path is inefficient.
How about introducing a new field in request_queue, such as request_queue->gate_state?
Set the PM_ONLY and DV_ONLY bits in this state; then we could just check
request_queue->gate_state > 0 before doing any further checking.

Thanks
Jianchao


* Re: [PATCH v6 11/12] block: Change the runtime power management approach (2/2)
  2018-08-09 19:41 ` [PATCH v6 11/12] block: Change the runtime power management approach (2/2) Bart Van Assche
@ 2018-08-10  1:51   ` jianchao.wang
  2018-08-10 15:22     ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: jianchao.wang @ 2018-08-10  1:51 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Ming Lei, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Hi Bart

On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> +
> +	blk_set_pm_only(q);
> +	/*
> +	 * This function only gets called if the most recent
> +	 * pm_request_resume() call occurred at least autosuspend_delay_ms
> +	 * ago. Since blk_queue_enter() is called by the request allocation
> +	 * code before pm_request_resume(), if no requests have a tag assigned
> +	 * it is safe to suspend the device.
> +	 */
> +	ret = -EBUSY;
> +	if (blk_requests_in_flight(q) == 0) {
> +		/*
> +		 * Call synchronize_rcu() such that later blk_queue_enter()
> +		 * calls see the preempt-only state. See also
> +		 * http://lwn.net/Articles/573497/.
> +		 */
> +		synchronize_rcu();
> +		if (blk_requests_in_flight(q) == 0)
> +			ret = 0;
> +	}

I still think blk_set_pm_only() should be moved after the
blk_requests_in_flight() check. Otherwise, normal I/O will be blocked for a
little while if there are still busy requests.

Thanks
Jianchao

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 10/12] block: Change the runtime power management approach (1/2)
  2018-08-09 19:41 ` [PATCH v6 10/12] block: Change the runtime power management approach (1/2) Bart Van Assche
@ 2018-08-10  1:59   ` jianchao.wang
  2018-08-10 15:20     ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: jianchao.wang @ 2018-08-10  1:59 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Ming Lei, Hannes Reinecke,
	Johannes Thumshirn, Alan Stern

Hi Bart

On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> Instead of scheduling runtime resume of a request queue after a
> request has been queued, schedule asynchronous resume during request
> allocation. The new pm_request_resume() calls occur after
> blk_queue_enter() has increased the q_usage_counter request queue
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> member. This change is needed for a later patch that will make request
> allocation block while the queue status is not RPM_ACTIVE.

Is it "after getting q->q_usage_counter fails"?
And also, this blk_pm_request_resume() will not affect the normal path. ;)

Thanks
Jianchao

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit()
  2018-08-09 19:41 ` [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit() Bart Van Assche
@ 2018-08-10  2:39   ` jianchao.wang
  2018-08-10 15:27     ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: jianchao.wang @ 2018-08-10  2:39 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Martin K . Petersen, Ming Lei,
	Hannes Reinecke, Johannes Thumshirn, Alan Stern

Hi Bart

On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> +void blk_pm_runtime_exit(struct request_queue *q)
> +{
> +	if (!q->dev)
> +		return;
> +
> +	pm_runtime_get_sync(q->dev);
> +	q->dev = NULL;
> +}
> +EXPORT_SYMBOL(blk_pm_runtime_exit);
> +
>  /**
>   * blk_pre_runtime_suspend - Pre runtime suspend check
>   * @q: the queue of the device
> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> index 69ab459abb98..5537762dfcfd 100644
> --- a/drivers/scsi/sd.c
> +++ b/drivers/scsi/sd.c
> @@ -3420,12 +3420,11 @@ static int sd_probe(struct device *dev)
>   **/
>  static int sd_remove(struct device *dev)
>  {
> -	struct scsi_disk *sdkp;
> -	dev_t devt;
> +	struct scsi_disk *sdkp = dev_get_drvdata(dev);
> +	struct scsi_device *sdp = sdkp->device;
> +	dev_t devt = disk_devt(sdkp->disk);
>  
> -	sdkp = dev_get_drvdata(dev);
> -	devt = disk_devt(sdkp->disk);
> -	scsi_autopm_get_device(sdkp->device);
> +	blk_pm_runtime_exit(sdp->request_queue)


Based on __scsi_device_remove(), sd_remove() will be invoked before
blk_cleanup_queue(), so it should not be safe to set q->dev to NULL here.

We have the following operations in a later patch:
+static inline void blk_pm_mark_last_busy(struct request *rq)
+{
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
+		pm_runtime_mark_last_busy(rq->q->dev);
+}

Thanks
Jianchao

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 02/12] scsi: Alter handling of RQF_DV requests
  2018-08-10  1:20   ` Ming Lei
@ 2018-08-10 15:07     ` Bart Van Assche
  0 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-10 15:07 UTC (permalink / raw)
  To: ming.lei
  Cc: jthumshirn, linux-block, hch, martin.petersen, axboe, hare,
	stern, jianchao.w.wang

On Fri, 2018-08-10 at 09:20 +0800, Ming Lei wrote:
> On Thu, Aug 09, 2018 at 12:41:39PM -0700, Bart Van Assche wrote:
> > Process all requests in state SDEV_CREATED instead of only RQF_DV
> > requests. This does not change the behavior of the SCSI core because
> > the SCSI device state is modified into another state before SCSI
> > devices become visible in sysfs and before any device nodes are
> > created in /dev. Do not process RQF_DV requests in state SDEV_CANCEL
> > because only power management requests should be processed in this
> > state.
> 
> Could you explain a bit why only PM requests should be processed in
> SDEV_CANCEL?

Hi Ming,

There is only one function that changes the device state into SDEV_CANCEL,
namely __scsi_remove_device(). I think that in the SDEV_CANCEL state all
newly queued requests should be ignored except the SYNCHRONIZE CACHE and
STOP commands submitted by sd_shutdown(). More information about the SCSI
device states is available in the commit message of 9b22a8fb0edd ("Updated
state model for SCSI devices").

Bart.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 05/12] block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce PM_ONLY
  2018-08-10  1:39   ` jianchao.wang
@ 2018-08-10 15:18     ` Bart Van Assche
  0 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-10 15:18 UTC (permalink / raw)
  To: jianchao.w.wang, axboe
  Cc: hch, jthumshirn, linux-block, hare, martin.petersen, ming.lei, stern

On Fri, 2018-08-10 at 09:39 +0800, jianchao.wang wrote:
> On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> > +/*
> > + * Whether or not blk_queue_enter() should proceed. RQF_PM requests are always
> > + * allowed. RQF_DV requests are allowed if the PM_ONLY queue flag has not been
> > + * set. Other requests are only allowed if neither PM_ONLY nor DV_ONLY has been
> > + * set.
> > + */
> > +static inline bool blk_enter_allowed(struct request_queue *q,
> > +				     blk_mq_req_flags_t flags)
> > +{
> > +	return flags & BLK_MQ_REQ_PM ||
> > +		(!blk_queue_pm_only(q) &&
> > +		 (flags & BLK_MQ_REQ_DV || !blk_queue_dv_only(q)));
> > +}
> 
> If a new state is indeed necessary, I think this kind of checking in the hot
> path is inefficient. How about introducing a new state into the request_queue,
> such as request_queue->gate_state? Set the PM_ONLY and DV_ONLY bits in that
> state; then we could just check request_queue->gate_state > 0 before doing
> further checking.

Hello Jianchao,

I agree with you that the hot path should be as efficient as possible. But I
don't think that the above code adds significantly more overhead to the hot
path. The requests for which we care about performance are the requests that
read and write data. BLK_MQ_REQ_PM is not set for any of these requests.
For the high performance case we care about, the pm-only flag won't be set.
If that flag is not set the rest of the boolean expression does not have to
be evaluated (flags & BLK_MQ_REQ_DV || !blk_queue_dv_only(q)). So I think the
above change only adds one additional boolean test to the hot path. That
shouldn't have a measurable performance impact.

Bart.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 10/12] block: Change the runtime power management approach (1/2)
  2018-08-10  1:59   ` jianchao.wang
@ 2018-08-10 15:20     ` Bart Van Assche
  0 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-10 15:20 UTC (permalink / raw)
  To: jianchao.w.wang, axboe
  Cc: hch, jthumshirn, linux-block, hare, stern, ming.lei

On Fri, 2018-08-10 at 09:59 +0800, jianchao.wang wrote:
> On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> > Instead of scheduling runtime resume of a request queue after a
> > request has been queued, schedule asynchronous resume during request
> > allocation. The new pm_request_resume() calls occur after
> > blk_queue_enter() has increased the q_usage_counter request queue
>                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > member. This change is needed for a later patch that will make request
> > allocation block while the queue status is not RPM_ACTIVE.
> 
> Is it "after getting q->q_usage_counter fails"?
> And also, this blk_pm_request_resume() will not affect the normal path. ;)

Right, the commit message needs to be brought in sync with the code.

Bart.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 11/12] block: Change the runtime power management approach (2/2)
  2018-08-10  1:51   ` jianchao.wang
@ 2018-08-10 15:22     ` Bart Van Assche
  0 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-10 15:22 UTC (permalink / raw)
  To: jianchao.w.wang, axboe
  Cc: hch, jthumshirn, linux-block, hare, stern, ming.lei

On Fri, 2018-08-10 at 09:51 +0800, jianchao.wang wrote:
> On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> > +
> > +	blk_set_pm_only(q);
> > +	/*
> > +	 * This function only gets called if the most recent
> > +	 * pm_request_resume() call occurred at least autosuspend_delay_ms
> > +	 * ago. Since blk_queue_enter() is called by the request allocation
> > +	 * code before pm_request_resume(), if no requests have a tag assigned
> > +	 * it is safe to suspend the device.
> > +	 */
> > +	ret = -EBUSY;
> > +	if (blk_requests_in_flight(q) == 0) {
> > +		/*
> > +		 * Call synchronize_rcu() such that later blk_queue_enter()
> > +		 * calls see the preempt-only state. See also
> > +		 * http://lwn.net/Articles/573497/.
> > +		 */
> > +		synchronize_rcu();
> > +		if (blk_requests_in_flight(q) == 0)
> > +			ret = 0;
> > +	}
> 
> I still think blk_set_pm_only() should be moved after the
> blk_requests_in_flight() check. Otherwise, normal I/O will be blocked for a
> little while if there are still busy requests.

Hi Jianchao,

Although I think it is unlikely that the scenario you described will happen, I
will make the change you requested.

Bart.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit()
  2018-08-10  2:39   ` jianchao.wang
@ 2018-08-10 15:27     ` Bart Van Assche
  2018-08-10 16:17       ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2018-08-10 15:27 UTC (permalink / raw)
  To: jianchao.w.wang, axboe
  Cc: hch, jthumshirn, linux-block, hare, martin.petersen, ming.lei, stern

On Fri, 2018-08-10 at 10:39 +0800, jianchao.wang wrote:
> On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> > +void blk_pm_runtime_exit(struct request_queue *q)
> > +{
> > +	if (!q->dev)
> > +		return;
> > +
> > +	pm_runtime_get_sync(q->dev);
> > +	q->dev = NULL;
> > +}
> > +EXPORT_SYMBOL(blk_pm_runtime_exit);
> > +
> >  /**
> >   * blk_pre_runtime_suspend - Pre runtime suspend check
> >   * @q: the queue of the device
> > diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> > index 69ab459abb98..5537762dfcfd 100644
> > --- a/drivers/scsi/sd.c
> > +++ b/drivers/scsi/sd.c
> > @@ -3420,12 +3420,11 @@ static int sd_probe(struct device *dev)
> >   **/
> >  static int sd_remove(struct device *dev)
> >  {
> > -	struct scsi_disk *sdkp;
> > -	dev_t devt;
> > +	struct scsi_disk *sdkp = dev_get_drvdata(dev);
> > +	struct scsi_device *sdp = sdkp->device;
> > +	dev_t devt = disk_devt(sdkp->disk);
> >  
> > -	sdkp = dev_get_drvdata(dev);
> > -	devt = disk_devt(sdkp->disk);
> > -	scsi_autopm_get_device(sdkp->device);
> > +	blk_pm_runtime_exit(sdp->request_queue)
> 
> 
> Based on __scsi_device_remove(), sd_remove() will be invoked before
> blk_cleanup_queue(), so it should not be safe to set q->dev to NULL here.
> 
> We have the following operations in a later patch:
> +static inline void blk_pm_mark_last_busy(struct request *rq)
> +{
> +	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
> +		pm_runtime_mark_last_busy(rq->q->dev);
> +}

Hello Jianchao,

How about moving the blk_pm_runtime_exit() call into blk_cleanup_queue() at
a point where it is sure that there are no requests in progress? That should
avoid that blk_pm_mark_last_busy() races against blk_pm_runtime_exit().

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit()
  2018-08-10 15:27     ` Bart Van Assche
@ 2018-08-10 16:17       ` Bart Van Assche
  2018-08-13  9:24         ` jianchao.wang
  0 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2018-08-10 16:17 UTC (permalink / raw)
  To: jianchao.w.wang, axboe
  Cc: hch, stern, jthumshirn, linux-block, martin.petersen, hare, ming.lei

On Fri, 2018-08-10 at 15:27 +0000, Bart Van Assche wrote:
> On Fri, 2018-08-10 at 10:39 +0800, jianchao.wang wrote:
> > On 08/10/2018 03:41 AM, Bart Van Assche wrote:
> > > +void blk_pm_runtime_exit(struct request_queue *q)
> > > +{
> > > +	if (!q->dev)
> > > +		return;
> > > +
> > > +	pm_runtime_get_sync(q->dev);
> > > +	q->dev = NULL;
> > > +}
> > > +EXPORT_SYMBOL(blk_pm_runtime_exit);
> > > +
> > >  /**
> > >   * blk_pre_runtime_suspend - Pre runtime suspend check
> > >   * @q: the queue of the device
> > > diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> > > index 69ab459abb98..5537762dfcfd 100644
> > > --- a/drivers/scsi/sd.c
> > > +++ b/drivers/scsi/sd.c
> > > @@ -3420,12 +3420,11 @@ static int sd_probe(struct device *dev)
> > >   **/
> > >  static int sd_remove(struct device *dev)
> > >  {
> > > -	struct scsi_disk *sdkp;
> > > -	dev_t devt;
> > > +	struct scsi_disk *sdkp = dev_get_drvdata(dev);
> > > +	struct scsi_device *sdp = sdkp->device;
> > > +	dev_t devt = disk_devt(sdkp->disk);
> > >  
> > > -	sdkp = dev_get_drvdata(dev);
> > > -	devt = disk_devt(sdkp->disk);
> > > -	scsi_autopm_get_device(sdkp->device);
> > > +	blk_pm_runtime_exit(sdp->request_queue)
> > 
> > 
> > Based on __scsi_device_remove(), sd_remove() will be invoked before
> > blk_cleanup_queue(), so it should not be safe to set q->dev to NULL here.
> > 
> > We have the following operations in a later patch:
> > +static inline void blk_pm_mark_last_busy(struct request *rq)
> > +{
> > +	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
> > +		pm_runtime_mark_last_busy(rq->q->dev);
> > +}
> 
> How about moving the blk_pm_runtime_exit() call into blk_cleanup_queue() at
> a point where it is sure that there are no requests in progress? That should
> avoid that blk_pm_mark_last_busy() races against blk_pm_runtime_exit().

That wouldn't work because unbinding the SCSI ULD doesn't remove the request
queue. How about changing blk_pm_runtime_exit() into something like the
following?

void blk_pm_runtime_exit(struct request_queue *q)
{
	if (!q->dev)
		return;

	pm_runtime_get_sync(q->dev);
	blk_freeze_queue(q);
	q->dev = NULL;
	blk_unfreeze_queue(q);
}

Bart.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit()
  2018-08-10 16:17       ` Bart Van Assche
@ 2018-08-13  9:24         ` jianchao.wang
  2018-08-13 16:09           ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: jianchao.wang @ 2018-08-13  9:24 UTC (permalink / raw)
  To: Bart Van Assche, axboe
  Cc: hch, stern, jthumshirn, linux-block, martin.petersen, hare, ming.lei

Hi Bart

On 08/11/2018 12:17 AM, Bart Van Assche wrote:
> void blk_pm_runtime_exit(struct request_queue *q)
> {
> 	if (!q->dev)
> 		return;
> 
> 	pm_runtime_get_sync(q->dev);
> 	blk_freeze_queue(q);
> 	q->dev = NULL;
> 	blk_unfreeze_queue(q);
> }

I'm afraid this will not work.

In the following patch:

@@ -972,6 +972,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 		if (success)
 			return 0;
 
+		blk_pm_request_resume(q);
+
 		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;

We could still invoke blk_pm_request_resume even if the queue is frozen.

And we have blk_pm_request_resume() later, as follows:

+static inline void blk_pm_request_resume(struct request_queue *q)
+{
+	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
+		       q->rpm_status == RPM_SUSPENDING))
+		pm_request_resume(q->dev);
+}

Thanks
Jianchao

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit()
  2018-08-13  9:24         ` jianchao.wang
@ 2018-08-13 16:09           ` Bart Van Assche
  0 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-08-13 16:09 UTC (permalink / raw)
  To: jianchao.w.wang, axboe
  Cc: hch, hare, jthumshirn, linux-block, stern, martin.petersen, ming.lei

On Mon, 2018-08-13 at 17:24 +0800, jianchao.wang wrote:
> I'm afraid this will not work.

Since this patch fixes a bug that nobody has reported so far and since no
later patches rely on this patch, I will leave it out.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2018-08-13 16:09 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-09 19:41 [PATCH v6 00/12] blk-mq: Implement runtime power management Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 01/12] block, scsi: Introduce request flag RQF_DV Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 02/12] scsi: Alter handling of RQF_DV requests Bart Van Assche
2018-08-10  1:20   ` Ming Lei
2018-08-10 15:07     ` Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 03/12] scsi: Only set RQF_DV for requests used for domain validation Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 04/12] scsi: Introduce the SDEV_SUSPENDED device status Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 05/12] block, scsi: Rename QUEUE_FLAG_PREEMPT_ONLY into DV_ONLY and introduce PM_ONLY Bart Van Assche
2018-08-10  1:39   ` jianchao.wang
2018-08-10 15:18     ` Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 06/12] scsi: Reallow SPI domain validation during system suspend Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 07/12] block: Move power management code into a new source file Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 08/12] block, scsi: Introduce blk_pm_runtime_exit() Bart Van Assche
2018-08-10  2:39   ` jianchao.wang
2018-08-10 15:27     ` Bart Van Assche
2018-08-10 16:17       ` Bart Van Assche
2018-08-13  9:24         ` jianchao.wang
2018-08-13 16:09           ` Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 09/12] block: Split blk_pm_add_request() and blk_pm_put_request() Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 10/12] block: Change the runtime power management approach (1/2) Bart Van Assche
2018-08-10  1:59   ` jianchao.wang
2018-08-10 15:20     ` Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 11/12] block: Change the runtime power management approach (2/2) Bart Van Assche
2018-08-10  1:51   ` jianchao.wang
2018-08-10 15:22     ` Bart Van Assche
2018-08-09 19:41 ` [PATCH v6 12/12] blk-mq: Enable support for runtime power management Bart Van Assche
