All of lore.kernel.org
* [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue()
@ 2016-09-26 18:25 ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Hello Jens,

Multiple block drivers need to stop a request queue and to wait until 
all ongoing request_fn() / queue_rq() calls have finished, without 
waiting for all outstanding requests to complete. Hence this patch 
series, which introduces the blk_quiesce_queue() and blk_resume_queue() 
functions. The dm-mq, SRP and nvme patches in this series are three 
examples of where these functions are useful. These patches apply on 
top of the September 21 version of your for-4.9/block branch. The 
individual patches in this series are:

0001-blk-mq-Introduce-blk_mq_queue_stopped.patch
0002-dm-Fix-a-race-condition-related-to-stopping-and-star.patch
0003-RFC-nvme-Use-BLK_MQ_S_STOPPED-instead-of-QUEUE_FLAG_.patch
0004-block-Move-blk_freeze_queue-and-blk_unfreeze_queue-c.patch
0005-block-Extend-blk_freeze_queue_start-to-the-non-blk-m.patch
0006-block-Rename-mq_freeze_wq-and-mq_freeze_depth.patch
0007-blk-mq-Introduce-blk_quiesce_queue-and-blk_resume_qu.patch
0008-SRP-transport-Port-srp_wait_for_queuecommand-to-scsi.patch
0009-RFC-nvme-Fix-a-race-condition.patch

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 1/9] blk-mq: Introduce blk_mq_queue_stopped()
@ 2016-09-26 18:26   ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:26 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

The function blk_queue_stopped() allows callers to test whether a
traditional request queue has been stopped. Introduce a helper
function that lets block drivers easily query whether one or more
hardware contexts of a blk-mq queue have been stopped.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         | 20 ++++++++++++++++++++
 include/linux/blk-mq.h |  1 +
 2 files changed, 21 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e9ebe98..98d4812 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -938,6 +938,26 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
 }
 EXPORT_SYMBOL(blk_mq_run_hw_queues);
 
+/**
+ * blk_mq_queue_stopped() - check whether one or more hctxs have been stopped
+ * @q: request queue.
+ *
+ * The caller is responsible for serializing this function against
+ * blk_mq_{start,stop}_hw_queue().
+ */
+bool blk_mq_queue_stopped(struct request_queue *q)
+{
+	struct blk_mq_hw_ctx *hctx;
+	int i;
+
+	queue_for_each_hw_ctx(q, hctx, i)
+		if (test_bit(BLK_MQ_S_STOPPED, &hctx->state))
+			return true;
+
+	return false;
+}
+EXPORT_SYMBOL(blk_mq_queue_stopped);
+
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
 {
 	cancel_work(&hctx->run_work);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 5daa0ef..368c460d 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -233,6 +233,7 @@ void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs
 void blk_mq_abort_requeue_list(struct request_queue *q);
 void blk_mq_complete_request(struct request *rq, int error);
 
+bool blk_mq_queue_stopped(struct request_queue *q);
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_stop_hw_queues(struct request_queue *q);
-- 
2.10.0


^ permalink raw reply related	[flat|nested] 82+ messages in thread

* [PATCH 2/9] dm: Fix a race condition related to stopping and starting queues
  2016-09-26 18:25 ` Bart Van Assche
@ 2016-09-26 18:26   ` Bart Van Assche
  -1 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:26 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Ensure that all ongoing dm_mq_queue_rq() and dm_mq_requeue_request()
calls have stopped before setting the "queue stopped" flag. This makes
it possible to remove the "queue stopped" test from dm_mq_queue_rq()
and dm_mq_requeue_request(). This patch fixes a race condition:
dm_mq_queue_rq() is called without holding the queue lock, so
BLK_MQ_S_STOPPED could otherwise be set at any time while
dm_mq_queue_rq() is in progress.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm-rq.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 182b679..1b7a65e 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -116,9 +116,12 @@ static void dm_mq_stop_queue(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_STOPPED, q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
+	/* Wait until dm_mq_queue_rq() has finished. */
+	blk_quiesce_queue(q);
 	/* Avoid that requeuing could restart the queue. */
 	blk_mq_cancel_requeue_work(q);
 	blk_mq_stop_hw_queues(q);
+	blk_resume_queue(q);
 }
 
 void dm_stop_queue(struct request_queue *q)
@@ -901,17 +904,6 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 		dm_put_live_table(md, srcu_idx);
 	}
 
-	/*
-	 * On suspend dm_stop_queue() handles stopping the blk-mq
-	 * request_queue BUT: even though the hw_queues are marked
-	 * BLK_MQ_S_STOPPED at that point there is still a race that
-	 * is allowing block/blk-mq.c to call ->queue_rq against a
-	 * hctx that it really shouldn't.  The following check guards
-	 * against this rarity (albeit _not_ race-free).
-	 */
-	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
-		return BLK_MQ_RQ_QUEUE_BUSY;
-
 	if (ti->type->busy && ti->type->busy(ti))
 		return BLK_MQ_RQ_QUEUE_BUSY;
 
-- 
2.10.0


^ permalink raw reply related	[flat|nested] 82+ messages in thread

* [PATCH 3/9] [RFC] nvme: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code
@ 2016-09-26 18:27   ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:27 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Make nvme_requeue_req() check BLK_MQ_S_STOPPED instead of
QUEUE_FLAG_STOPPED. Remove the QUEUE_FLAG_STOPPED manipulations
that became superfluous because of this change. Untested.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index bd2156c..057f1fa 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -205,7 +205,7 @@ void nvme_requeue_req(struct request *req)
 
 	blk_mq_requeue_request(req);
 	spin_lock_irqsave(req->q->queue_lock, flags);
-	if (!blk_queue_stopped(req->q))
+	if (!blk_mq_queue_stopped(req->q))
 		blk_mq_kick_requeue_list(req->q);
 	spin_unlock_irqrestore(req->q->queue_lock, flags);
 }
@@ -2082,10 +2082,6 @@ void nvme_stop_queues(struct nvme_ctrl *ctrl)
 
 	mutex_lock(&ctrl->namespaces_mutex);
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
-		spin_lock_irq(ns->queue->queue_lock);
-		queue_flag_set(QUEUE_FLAG_STOPPED, ns->queue);
-		spin_unlock_irq(ns->queue->queue_lock);
-
 		blk_mq_cancel_requeue_work(ns->queue);
 		blk_mq_stop_hw_queues(ns->queue);
 	}
@@ -2099,7 +2095,6 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 
 	mutex_lock(&ctrl->namespaces_mutex);
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
-		queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, ns->queue);
 		blk_mq_start_stopped_hw_queues(ns->queue, true);
 		blk_mq_kick_requeue_list(ns->queue);
 	}
-- 
2.10.0


^ permalink raw reply related	[flat|nested] 82+ messages in thread

* [PATCH 4/9] block: Move blk_freeze_queue() and blk_unfreeze_queue() code
  2016-09-26 18:25 ` Bart Van Assche
@ 2016-09-26 18:27   ` Bart Van Assche
  -1 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:27 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Move the blk_freeze_queue() and blk_unfreeze_queue() implementations
from block/blk-mq.c to block/blk-core.c. Drop "_mq" from the name of
the functions that have been moved.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
---
 block/blk-core.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 block/blk-mq.c   | 41 +++--------------------------------------
 block/blk.h      |  3 +++
 3 files changed, 51 insertions(+), 38 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index b75d688..8cc8006 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -682,6 +682,51 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
 	wake_up_all(&q->mq_freeze_wq);
 }
 
+void blk_freeze_queue_start(struct request_queue *q)
+{
+	int freeze_depth;
+
+	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
+	if (freeze_depth == 1) {
+		percpu_ref_kill(&q->q_usage_counter);
+		blk_mq_run_hw_queues(q, false);
+	}
+}
+
+void blk_freeze_queue_wait(struct request_queue *q)
+{
+	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
+}
+
+/*
+ * Guarantee no request is in use, so we can change any data structure of
+ * the queue afterward.
+ */
+void blk_freeze_queue(struct request_queue *q)
+{
+	/*
+	 * In the !blk_mq case we are only calling this to kill the
+	 * q_usage_counter, otherwise this increases the freeze depth
+	 * and waits for it to return to zero.  For this reason there is
+	 * no blk_unfreeze_queue(), and blk_freeze_queue() is not
+	 * exported to drivers as the only user for unfreeze is blk_mq.
+	 */
+	blk_freeze_queue_start(q);
+	blk_freeze_queue_wait(q);
+}
+
+void blk_unfreeze_queue(struct request_queue *q)
+{
+	int freeze_depth;
+
+	freeze_depth = atomic_dec_return(&q->mq_freeze_depth);
+	WARN_ON_ONCE(freeze_depth < 0);
+	if (!freeze_depth) {
+		percpu_ref_reinit(&q->q_usage_counter);
+		wake_up_all(&q->mq_freeze_wq);
+	}
+}
+
 static void blk_rq_timed_out_timer(unsigned long data)
 {
 	struct request_queue *q = (struct request_queue *)data;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 98d4812..50b26df 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -60,38 +60,10 @@ static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx,
 
 void blk_mq_freeze_queue_start(struct request_queue *q)
 {
-	int freeze_depth;
-
-	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
-	if (freeze_depth == 1) {
-		percpu_ref_kill(&q->q_usage_counter);
-		blk_mq_run_hw_queues(q, false);
-	}
+	blk_freeze_queue_start(q);
 }
 EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_start);
 
-static void blk_mq_freeze_queue_wait(struct request_queue *q)
-{
-	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
-}
-
-/*
- * Guarantee no request is in use, so we can change any data structure of
- * the queue afterward.
- */
-void blk_freeze_queue(struct request_queue *q)
-{
-	/*
-	 * In the !blk_mq case we are only calling this to kill the
-	 * q_usage_counter, otherwise this increases the freeze depth
-	 * and waits for it to return to zero.  For this reason there is
-	 * no blk_unfreeze_queue(), and blk_freeze_queue() is not
-	 * exported to drivers as the only user for unfreeze is blk_mq.
-	 */
-	blk_mq_freeze_queue_start(q);
-	blk_mq_freeze_queue_wait(q);
-}
-
 void blk_mq_freeze_queue(struct request_queue *q)
 {
 	/*
@@ -104,14 +76,7 @@ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
 
 void blk_mq_unfreeze_queue(struct request_queue *q)
 {
-	int freeze_depth;
-
-	freeze_depth = atomic_dec_return(&q->mq_freeze_depth);
-	WARN_ON_ONCE(freeze_depth < 0);
-	if (!freeze_depth) {
-		percpu_ref_reinit(&q->q_usage_counter);
-		wake_up_all(&q->mq_freeze_wq);
-	}
+	blk_unfreeze_queue(q);
 }
 EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
 
@@ -2177,7 +2142,7 @@ static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
 	list_for_each_entry(q, &all_q_list, all_q_node)
 		blk_mq_freeze_queue_start(q);
 	list_for_each_entry(q, &all_q_list, all_q_node) {
-		blk_mq_freeze_queue_wait(q);
+		blk_freeze_queue_wait(q);
 
 		/*
 		 * timeout handler can't touch hw queue during the
diff --git a/block/blk.h b/block/blk.h
index c37492f..12f7366 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -71,6 +71,9 @@ void __blk_queue_free_tags(struct request_queue *q);
 bool __blk_end_bidi_request(struct request *rq, int error,
 			    unsigned int nr_bytes, unsigned int bidi_bytes);
 void blk_freeze_queue(struct request_queue *q);
+void blk_freeze_queue_start(struct request_queue *q);
+void blk_freeze_queue_wait(struct request_queue *q);
+void blk_unfreeze_queue(struct request_queue *q);
 
 static inline void blk_queue_enter_live(struct request_queue *q)
 {
-- 
2.10.0


^ permalink raw reply related	[flat|nested] 82+ messages in thread

* [PATCH 5/9] block: Extend blk_freeze_queue_start() to the non-blk-mq path
@ 2016-09-26 18:27   ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:27 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
---
 block/blk-core.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 8cc8006..5ecc7ab 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -689,7 +689,10 @@ void blk_freeze_queue_start(struct request_queue *q)
 	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
 	if (freeze_depth == 1) {
 		percpu_ref_kill(&q->q_usage_counter);
-		blk_mq_run_hw_queues(q, false);
+		if (q->mq_ops)
+			blk_mq_run_hw_queues(q, false);
+		else if (q->request_fn)
+			blk_run_queue(q);
 	}
 }
 
@@ -700,17 +703,11 @@ void blk_freeze_queue_wait(struct request_queue *q)
 
 /*
  * Guarantee no request is in use, so we can change any data structure of
- * the queue afterward.
+ * the queue afterward. Increases q->mq_freeze_depth and waits until
+ * q->q_usage_counter drops to zero.
  */
 void blk_freeze_queue(struct request_queue *q)
 {
-	/*
-	 * In the !blk_mq case we are only calling this to kill the
-	 * q_usage_counter, otherwise this increases the freeze depth
-	 * and waits for it to return to zero.  For this reason there is
-	 * no blk_unfreeze_queue(), and blk_freeze_queue() is not
-	 * exported to drivers as the only user for unfreeze is blk_mq.
-	 */
 	blk_freeze_queue_start(q);
 	blk_freeze_queue_wait(q);
 }
-- 
2.10.0


^ permalink raw reply related	[flat|nested] 82+ messages in thread

* [PATCH 6/9] block: Rename mq_freeze_wq and mq_freeze_depth
  2016-09-26 18:25 ` Bart Van Assche
@ 2016-09-26 18:28   ` Bart Van Assche
  -1 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:28 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Since these two structure members are now used in both the blk-mq and
!blk-mq code paths, remove the mq_ prefix. This patch does not change
any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
---
 block/blk-core.c       | 20 ++++++++++----------
 block/blk-mq.c         |  4 ++--
 include/linux/blkdev.h |  4 ++--
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5ecc7ab..0ff5d57 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -659,9 +659,9 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		if (nowait)
 			return -EBUSY;
 
-		ret = wait_event_interruptible(q->mq_freeze_wq,
-				!atomic_read(&q->mq_freeze_depth) ||
-				blk_queue_dying(q));
+		ret = wait_event_interruptible(q->freeze_wq,
+					       !atomic_read(&q->freeze_depth) ||
+					       blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;
 		if (ret)
@@ -679,14 +679,14 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
 	struct request_queue *q =
 		container_of(ref, struct request_queue, q_usage_counter);
 
-	wake_up_all(&q->mq_freeze_wq);
+	wake_up_all(&q->freeze_wq);
 }
 
 void blk_freeze_queue_start(struct request_queue *q)
 {
 	int freeze_depth;
 
-	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
+	freeze_depth = atomic_inc_return(&q->freeze_depth);
 	if (freeze_depth == 1) {
 		percpu_ref_kill(&q->q_usage_counter);
 		if (q->mq_ops)
@@ -698,12 +698,12 @@ void blk_freeze_queue_start(struct request_queue *q)
 
 void blk_freeze_queue_wait(struct request_queue *q)
 {
-	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
+	wait_event(q->freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
 }
 
 /*
  * Guarantee no request is in use, so we can change any data structure of
- * the queue afterward. Increases q->mq_freeze_depth and waits until
+ * the queue afterward. Increases q->freeze_depth and waits until
  * q->q_usage_counter drops to zero.
  */
 void blk_freeze_queue(struct request_queue *q)
@@ -716,11 +716,11 @@ void blk_unfreeze_queue(struct request_queue *q)
 {
 	int freeze_depth;
 
-	freeze_depth = atomic_dec_return(&q->mq_freeze_depth);
+	freeze_depth = atomic_dec_return(&q->freeze_depth);
 	WARN_ON_ONCE(freeze_depth < 0);
 	if (!freeze_depth) {
 		percpu_ref_reinit(&q->q_usage_counter);
-		wake_up_all(&q->mq_freeze_wq);
+		wake_up_all(&q->freeze_wq);
 	}
 }
 
@@ -790,7 +790,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	q->bypass_depth = 1;
 	__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
 
-	init_waitqueue_head(&q->mq_freeze_wq);
+	init_waitqueue_head(&q->freeze_wq);
 
 	/*
 	 * Init percpu_ref in atomic mode so that it's faster to shutdown.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 50b26df..e17a5bf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -94,7 +94,7 @@ void blk_mq_wake_waiters(struct request_queue *q)
 	 * dying, we need to ensure that processes currently waiting on
 	 * the queue are notified as well.
 	 */
-	wake_up_all(&q->mq_freeze_wq);
+	wake_up_all(&q->freeze_wq);
 }
 
 bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
@@ -2071,7 +2071,7 @@ void blk_mq_free_queue(struct request_queue *q)
 static void blk_mq_queue_reinit(struct request_queue *q,
 				const struct cpumask *online_mask)
 {
-	WARN_ON_ONCE(!atomic_read(&q->mq_freeze_depth));
+	WARN_ON_ONCE(!atomic_read(&q->freeze_depth));
 
 	blk_mq_sysfs_unregister(q);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c47c358..f08dc65 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -454,7 +454,7 @@ struct request_queue {
 	struct mutex		sysfs_lock;
 
 	int			bypass_depth;
-	atomic_t		mq_freeze_depth;
+	atomic_t		freeze_depth;
 
 #if defined(CONFIG_BLK_DEV_BSG)
 	bsg_job_fn		*bsg_job_fn;
@@ -467,7 +467,7 @@ struct request_queue {
 	struct throtl_data *td;
 #endif
 	struct rcu_head		rcu_head;
-	wait_queue_head_t	mq_freeze_wq;
+	wait_queue_head_t	freeze_wq;
 	struct percpu_ref	q_usage_counter;
 	struct list_head	all_q_node;
 
-- 
2.10.0


^ permalink raw reply related	[flat|nested] 82+ messages in thread

* [PATCH 7/9] blk-mq: Introduce blk_quiesce_queue() and blk_resume_queue()
  2016-09-26 18:25 ` Bart Van Assche
@ 2016-09-26 18:28   ` Bart Van Assche
  -1 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:28 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

blk_quiesce_queue() prevents new queue_rq() invocations from
occurring and waits until ongoing invocations have finished. This
function does *not* wait until all outstanding requests have
finished (that would require waiting for each request's end_io()
callback). blk_resume_queue() resumes normal I/O processing.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
---
 block/blk-core.c       | 66 ++++++++++++++++++++++++++++++++++++++++++++++----
 block/blk-mq.c         | 24 +++++++++++++-----
 block/blk.h            |  2 +-
 include/linux/blkdev.h |  5 ++++
 4 files changed, 85 insertions(+), 12 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 0ff5d57..62cb6ae 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -682,18 +682,20 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
 	wake_up_all(&q->freeze_wq);
 }
 
-void blk_freeze_queue_start(struct request_queue *q)
+bool blk_freeze_queue_start(struct request_queue *q, bool kill_percpu_ref)
 {
 	int freeze_depth;
 
 	freeze_depth = atomic_inc_return(&q->freeze_depth);
 	if (freeze_depth == 1) {
-		percpu_ref_kill(&q->q_usage_counter);
+		if (kill_percpu_ref)
+			percpu_ref_kill(&q->q_usage_counter);
 		if (q->mq_ops)
 			blk_mq_run_hw_queues(q, false);
 		else if (q->request_fn)
 			blk_run_queue(q);
 	}
+	return freeze_depth == 1;
 }
 
 void blk_freeze_queue_wait(struct request_queue *q)
@@ -708,21 +710,75 @@ void blk_freeze_queue_wait(struct request_queue *q)
  */
 void blk_freeze_queue(struct request_queue *q)
 {
-	blk_freeze_queue_start(q);
+	blk_freeze_queue_start(q, true);
 	blk_freeze_queue_wait(q);
 }
 
-void blk_unfreeze_queue(struct request_queue *q)
+static bool __blk_unfreeze_queue(struct request_queue *q,
+				 bool reinit_percpu_ref)
 {
 	int freeze_depth;
 
 	freeze_depth = atomic_dec_return(&q->freeze_depth);
 	WARN_ON_ONCE(freeze_depth < 0);
 	if (!freeze_depth) {
-		percpu_ref_reinit(&q->q_usage_counter);
+		if (reinit_percpu_ref)
+			percpu_ref_reinit(&q->q_usage_counter);
 		wake_up_all(&q->freeze_wq);
 	}
+	return freeze_depth == 0;
+}
+
+void blk_unfreeze_queue(struct request_queue *q)
+{
+	__blk_unfreeze_queue(q, true);
+}
+
+/**
+ * blk_quiesce_queue() - wait until all pending queue_rq calls have finished
+ *
+ * Prevent new I/O requests from being queued and wait until all pending
+ * queue_rq() calls have finished. This function must not be called if the
+ * queue has already been frozen. Additionally, freezing the queue after it
+ * has been quiesced and before it has been resumed is not allowed.
+ *
+ * Note: this function does not prevent the struct request end_io()
+ * callback function from being invoked.
+ */
+void blk_quiesce_queue(struct request_queue *q)
+{
+	spin_lock_irq(q->queue_lock);
+	WARN_ON_ONCE(blk_queue_quiescing(q));
+	queue_flag_set(QUEUE_FLAG_QUIESCING, q);
+	spin_unlock_irq(q->queue_lock);
+
+	WARN_ON_ONCE(!blk_freeze_queue_start(q, false));
+	synchronize_rcu();
+
+	spin_lock_irq(q->queue_lock);
+	WARN_ON_ONCE(!blk_queue_quiescing(q));
+	queue_flag_clear(QUEUE_FLAG_QUIESCING, q);
+	spin_unlock_irq(q->queue_lock);
+}
+EXPORT_SYMBOL_GPL(blk_quiesce_queue);
+
+/**
+ * blk_resume_queue() - resume request processing
+ *
+ * The caller is responsible for serializing blk_quiesce_queue() and
+ * blk_resume_queue().
+ */
+void blk_resume_queue(struct request_queue *q)
+{
+	WARN_ON_ONCE(!__blk_unfreeze_queue(q, false));
+	WARN_ON_ONCE(blk_queue_quiescing(q));
+
+	if (q->mq_ops)
+		blk_mq_run_hw_queues(q, false);
+	else
+		blk_run_queue(q);
 }
+EXPORT_SYMBOL_GPL(blk_resume_queue);
 
 static void blk_rq_timed_out_timer(unsigned long data)
 {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e17a5bf..4df9e4f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -60,7 +60,7 @@ static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx,
 
 void blk_mq_freeze_queue_start(struct request_queue *q)
 {
-	blk_freeze_queue_start(q);
+	blk_freeze_queue_start(q, true);
 }
 EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_start);
 
@@ -441,6 +441,9 @@ static void blk_mq_requeue_work(struct work_struct *work)
 	struct request *rq, *next;
 	unsigned long flags;
 
+	if (blk_queue_quiescing(q))
+		return;
+
 	spin_lock_irqsave(&q->requeue_lock, flags);
 	list_splice_init(&q->requeue_list, &rq_list);
 	spin_unlock_irqrestore(&q->requeue_lock, flags);
@@ -757,6 +760,8 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 	 */
 	flush_busy_ctxs(hctx, &rq_list);
 
+	rcu_read_lock();
+
 	/*
 	 * If we have previous entries on our dispatch list, grab them
 	 * and stuff them at the front for more fair dispatch.
@@ -836,8 +841,11 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 		 *
 		 * blk_mq_run_hw_queue() already checks the STOPPED bit
 		 **/
-		blk_mq_run_hw_queue(hctx, true);
+		if (!blk_queue_quiescing(q))
+			blk_mq_run_hw_queue(hctx, true);
 	}
+
+	rcu_read_unlock();
 }
 
 /*
@@ -1294,7 +1302,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		blk_mq_bio_to_request(rq, bio);
 
 		/*
-		 * We do limited pluging. If the bio can be merged, do that.
+		 * We do limited plugging. If the bio can be merged, do that.
 		 * Otherwise the existing request in the plug list will be
 		 * issued. So the plug list will have one request at most
 		 */
@@ -1314,9 +1322,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		blk_mq_put_ctx(data.ctx);
 		if (!old_rq)
 			goto done;
-		if (!blk_mq_direct_issue_request(old_rq, &cookie))
-			goto done;
-		blk_mq_insert_request(old_rq, false, true, true);
+
+		rcu_read_lock();
+		if (blk_queue_quiescing(q) ||
+		    blk_mq_direct_issue_request(old_rq, &cookie) != 0)
+			blk_mq_insert_request(old_rq, false, true, true);
+		rcu_read_unlock();
+
 		goto done;
 	}
 
diff --git a/block/blk.h b/block/blk.h
index 12f7366..0e934b5 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -71,7 +71,7 @@ void __blk_queue_free_tags(struct request_queue *q);
 bool __blk_end_bidi_request(struct request *rq, int error,
 			    unsigned int nr_bytes, unsigned int bidi_bytes);
 void blk_freeze_queue(struct request_queue *q);
-void blk_freeze_queue_start(struct request_queue *q);
+bool blk_freeze_queue_start(struct request_queue *q, bool kill_percpu_ref);
 void blk_freeze_queue_wait(struct request_queue *q);
 void blk_unfreeze_queue(struct request_queue *q);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f08dc65..06c9b21 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -505,6 +505,7 @@ struct request_queue {
 #define QUEUE_FLAG_FUA	       24	/* device supports FUA writes */
 #define QUEUE_FLAG_FLUSH_NQ    25	/* flush not queueuable */
 #define QUEUE_FLAG_DAX         26	/* device supports DAX */
+#define QUEUE_FLAG_QUIESCING   27	/* queue is being quiesced */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_STACKABLE)	|	\
@@ -595,6 +596,8 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 #define blk_queue_secure_erase(q) \
 	(test_bit(QUEUE_FLAG_SECERASE, &(q)->queue_flags))
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
+#define blk_queue_quiescing(q)	test_bit(QUEUE_FLAG_QUIESCING,	\
+					 &(q)->queue_flags)
 
 #define blk_noretry_request(rq) \
 	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
@@ -824,6 +827,8 @@ extern void __blk_run_queue(struct request_queue *q);
 extern void __blk_run_queue_uncond(struct request_queue *q);
 extern void blk_run_queue(struct request_queue *);
 extern void blk_run_queue_async(struct request_queue *q);
+extern void blk_quiesce_queue(struct request_queue *q);
+extern void blk_resume_queue(struct request_queue *q);
 extern int blk_rq_map_user(struct request_queue *, struct request *,
 			   struct rq_map_data *, void __user *, unsigned long,
 			   gfp_t);
-- 
2.10.0


^ permalink raw reply related	[flat|nested] 82+ messages in thread

* [PATCH 8/9] SRP transport: Port srp_wait_for_queuecommand() to scsi-mq
  2016-09-26 18:25 ` Bart Van Assche
@ 2016-09-26 18:28   ` Bart Van Assche
  -1 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:28 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Ensure that srp_wait_for_queuecommand() waits until ongoing
shost->hostt->queuecommand() calls have finished when scsi-mq is
enabled. For the !scsi-mq path, use blk_quiesce_queue() and
blk_resume_queue() instead of busy-waiting.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: James Bottomley <jejb@linux.vnet.ibm.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Doug Ledford <dledford@redhat.com>
---
 drivers/scsi/scsi_transport_srp.c | 26 +++++---------------------
 1 file changed, 5 insertions(+), 21 deletions(-)

diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
index e3cd3ec..841a3eb 100644
--- a/drivers/scsi/scsi_transport_srp.c
+++ b/drivers/scsi/scsi_transport_srp.c
@@ -24,7 +24,7 @@
 #include <linux/err.h>
 #include <linux/slab.h>
 #include <linux/string.h>
-#include <linux/delay.h>
+#include <linux/blk-mq.h>
 
 #include <scsi/scsi.h>
 #include <scsi/scsi_cmnd.h>
@@ -402,34 +402,18 @@ static void srp_reconnect_work(struct work_struct *work)
 	}
 }
 
-/**
- * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
- * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
- *
- * To do: add support for scsi-mq in this function.
- */
-static int scsi_request_fn_active(struct Scsi_Host *shost)
+/* Wait until ongoing shost->hostt->queuecommand() calls have finished. */
+static void srp_wait_for_queuecommand(struct Scsi_Host *shost)
 {
 	struct scsi_device *sdev;
 	struct request_queue *q;
-	int request_fn_active = 0;
 
 	shost_for_each_device(sdev, shost) {
 		q = sdev->request_queue;
 
-		spin_lock_irq(q->queue_lock);
-		request_fn_active += q->request_fn_active;
-		spin_unlock_irq(q->queue_lock);
+		blk_quiesce_queue(q);
+		blk_resume_queue(q);
 	}
-
-	return request_fn_active;
-}
-
-/* Wait until ongoing shost->hostt->queuecommand() calls have finished. */
-static void srp_wait_for_queuecommand(struct Scsi_Host *shost)
-{
-	while (scsi_request_fn_active(shost))
-		msleep(20);
 }
 
 static void __rport_fail_io_fast(struct srp_rport *rport)
-- 
2.10.0



* [PATCH 9/9] [RFC] nvme: Fix a race condition
  2016-09-26 18:25 ` Bart Van Assche
@ 2016-09-26 18:28   ` Bart Van Assche
  -1 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:28 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

Avoid that nvme_queue_rq() is still running when nvme_stop_queues()
returns. Untested.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 057f1fa..6e2bf6a 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -201,13 +201,9 @@ fail:
 
 void nvme_requeue_req(struct request *req)
 {
-	unsigned long flags;
-
 	blk_mq_requeue_request(req);
-	spin_lock_irqsave(req->q->queue_lock, flags);
-	if (!blk_mq_queue_stopped(req->q))
-		blk_mq_kick_requeue_list(req->q);
-	spin_unlock_irqrestore(req->q->queue_lock, flags);
+	WARN_ON_ONCE(blk_mq_queue_stopped(req->q));
+	blk_mq_kick_requeue_list(req->q);
 }
 EXPORT_SYMBOL_GPL(nvme_requeue_req);
 
@@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
 void nvme_stop_queues(struct nvme_ctrl *ctrl)
 {
 	struct nvme_ns *ns;
+	struct request_queue *q;
 
 	mutex_lock(&ctrl->namespaces_mutex);
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
-		blk_mq_cancel_requeue_work(ns->queue);
-		blk_mq_stop_hw_queues(ns->queue);
+		q = ns->queue;
+		blk_quiesce_queue(q);
+		blk_mq_cancel_requeue_work(q);
+		blk_mq_stop_hw_queues(q);
+		blk_resume_queue(q);
 	}
 	mutex_unlock(&ctrl->namespaces_mutex);
 }
-- 
2.10.0



* Re: [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue()
@ 2016-09-26 18:33   ` Mike Snitzer
  0 siblings, 0 replies; 82+ messages in thread
From: Mike Snitzer @ 2016-09-26 18:33 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Doug Ledford, Keith Busch, linux-block,
	linux-scsi, linux-rdma, linux-nvme

On Mon, Sep 26 2016 at  2:25pm -0400,
Bart Van Assche <bart.vanassche@sandisk.com> wrote:

> Hello Jens,
> 
> Multiple block drivers need the functionality to stop a request
> queue and to wait until all ongoing request_fn() / queue_rq() calls
> have finished without waiting until all outstanding requests have
> finished. Hence this patch series that introduces the
> blk_quiesce_queue() and blk_resume_queue() functions. The dm-mq, SRP
> and nvme patches in this patch series are three examples of where
> these functions are useful. These patches apply on top of the
> September 21 version of your for-4.9/block branch. The individual
> patches in this series are:
> 
> 0001-blk-mq-Introduce-blk_mq_queue_stopped.patch
> 0002-dm-Fix-a-race-condition-related-to-stopping-and-star.patch
> 0003-RFC-nvme-Use-BLK_MQ_S_STOPPED-instead-of-QUEUE_FLAG_.patch
> 0004-block-Move-blk_freeze_queue-and-blk_unfreeze_queue-c.patch
> 0005-block-Extend-blk_freeze_queue_start-to-the-non-blk-m.patch
> 0006-block-Rename-mq_freeze_wq-and-mq_freeze_depth.patch
> 0007-blk-mq-Introduce-blk_quiesce_queue-and-blk_resume_qu.patch
> 0008-SRP-transport-Port-srp_wait_for_queuecommand-to-scsi.patch
> 0009-RFC-nvme-Fix-a-race-condition.patch

Hi Bart,

How much testing has this series seen?  Did you run it against the
mptest testsuite? https://github.com/snitm/mptest

I did notice patch 2 should come after patch 7 (not sure if other
patches are out of order).

Mike


* Re: [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue()
@ 2016-09-26 18:46     ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 18:46 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Doug Ledford, Keith Busch, linux-block,
	linux-scsi, linux-rdma, linux-nvme

On 09/26/2016 11:33 AM, Mike Snitzer wrote:
> On Mon, Sep 26 2016 at  2:25pm -0400,
> Bart Van Assche <bart.vanassche@sandisk.com> wrote:
>
>> Hello Jens,
>>
>> Multiple block drivers need the functionality to stop a request
>> queue and to wait until all ongoing request_fn() / queue_rq() calls
>> have finished without waiting until all outstanding requests have
>> finished. Hence this patch series that introduces the
>> blk_quiesce_queue() and blk_resume_queue() functions. The dm-mq, SRP
>> and nvme patches in this patch series are three examples of where
>> these functions are useful. These patches apply on top of the
>> September 21 version of your for-4.9/block branch. The individual
>> patches in this series are:
>>
>> 0001-blk-mq-Introduce-blk_mq_queue_stopped.patch
>> 0002-dm-Fix-a-race-condition-related-to-stopping-and-star.patch
>> 0003-RFC-nvme-Use-BLK_MQ_S_STOPPED-instead-of-QUEUE_FLAG_.patch
>> 0004-block-Move-blk_freeze_queue-and-blk_unfreeze_queue-c.patch
>> 0005-block-Extend-blk_freeze_queue_start-to-the-non-blk-m.patch
>> 0006-block-Rename-mq_freeze_wq-and-mq_freeze_depth.patch
>> 0007-blk-mq-Introduce-blk_quiesce_queue-and-blk_resume_qu.patch
>> 0008-SRP-transport-Port-srp_wait_for_queuecommand-to-scsi.patch
>> 0009-RFC-nvme-Fix-a-race-condition.patch
>
> Hi Bart,
>
> How much testing has this series seen?  Did you run it against the
> mptest testsuite? https://github.com/snitm/mptest
>
> I did notice patch 2 should come after patch 7 (not sure if other
> patches are out of order).

Hello Mike,

Regarding testing: I have primarily used my srp-test regression test 
suite to test this patch series because that test suite uncovered a 
dm-mq race that was not discovered by mptest. I'm currently running 
xfstests (to verify an ib_srp change that is not in this patch series) 
and will run mptest next.

You are right that patch 2 should come after patch 7. The order of the 
other patches in this series should be fine.

Bart.


* Re: [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue()
@ 2016-09-26 22:26     ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-26 22:26 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Doug Ledford, Keith Busch, linux-block,
	linux-scsi, linux-rdma, linux-nvme

On 09/26/2016 11:33 AM, Mike Snitzer wrote:
> How much testing has this series seen?  Did you run it against the
> mptest testsuite? https://github.com/snitm/mptest

Hello Mike,

The output of mptest with MULTIPATH_BACKEND_MODULE="scsidebug":
# ./runtest
[ ... ]
SUCCESS
** summary **
PASSED:  test_00_no_failure test_01_sdev_offline test_02_sdev_delete 
test_03_dm_failpath test_04_dm_switchpg
FAILED:

I think this is the output we were hoping to see :-)

Bart.


* Re: [PATCH 1/9] blk-mq: Introduce blk_mq_queue_stopped()
  2016-09-26 18:26   ` Bart Van Assche
@ 2016-09-27  6:20     ` Hannes Reinecke
  -1 siblings, 0 replies; 82+ messages in thread
From: Hannes Reinecke @ 2016-09-27  6:20 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

On 09/26/2016 08:26 PM, Bart Van Assche wrote:
> The function blk_queue_stopped() allows testing whether or not a
> traditional request queue has been stopped. Introduce a helper
> function that lets block drivers easily query whether or not
> one or more hardware contexts of a blk-mq queue have been stopped.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Jens Axboe <axboe@fb.com>
> Cc: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-mq.c         | 20 ++++++++++++++++++++
>  include/linux/blk-mq.h |  1 +
>  2 files changed, 21 insertions(+)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [PATCH 2/9] dm: Fix a race condition related to stopping and starting queues
@ 2016-09-27  6:21     ` Hannes Reinecke
  0 siblings, 0 replies; 82+ messages in thread
From: Hannes Reinecke @ 2016-09-27  6:21 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

On 09/26/2016 08:26 PM, Bart Van Assche wrote:
> Ensure that all ongoing dm_mq_queue_rq() and dm_mq_requeue_request()
> calls have stopped before setting the "queue stopped" flag. This
> allows removing the "queue stopped" test from dm_mq_queue_rq() and
> dm_mq_requeue_request(). This patch fixes a race condition because
> dm_mq_queue_rq() is called without holding the queue lock and hence
> BLK_MQ_S_STOPPED can be set at any time while dm_mq_queue_rq() is
> in progress.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Mike Snitzer <snitzer@redhat.com>
> ---
>  drivers/md/dm-rq.c | 14 +++-----------
>  1 file changed, 3 insertions(+), 11 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [PATCH 4/9] block: Move blk_freeze_queue() and blk_unfreeze_queue() code
@ 2016-09-27  6:26     ` Hannes Reinecke
  0 siblings, 0 replies; 82+ messages in thread
From: Hannes Reinecke @ 2016-09-27  6:26 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: Christoph Hellwig, James Bottomley, Martin K. Petersen,
	Mike Snitzer, Doug Ledford, Keith Busch, linux-block, linux-scsi,
	linux-rdma, linux-nvme

On 09/26/2016 08:27 PM, Bart Van Assche wrote:
> Move the blk_freeze_queue() and blk_unfreeze_queue() implementations
> from block/blk-mq.c to block/blk-core.c. Drop "_mq" from the name of
> the functions that have been moved.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> ---
>  block/blk-core.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  block/blk-mq.c   | 41 +++--------------------------------------
>  block/blk.h      |  3 +++
>  3 files changed, 51 insertions(+), 38 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index b75d688..8cc8006 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -682,6 +682,51 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
>  	wake_up_all(&q->mq_freeze_wq);
>  }
>  
> +void blk_freeze_queue_start(struct request_queue *q)
> +{
> +	int freeze_depth;
> +
> +	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
> +	if (freeze_depth == 1) {
> +		percpu_ref_kill(&q->q_usage_counter);
> +		blk_mq_run_hw_queues(q, false);
> +	}
> +}
> +
As you dropped the 'mq_' prefix, maybe you should rename the counter to
'freeze_depth', too?
And are you sure this works in the !mq case, too?

> +void blk_freeze_queue_wait(struct request_queue *q)
> +{
> +	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
> +}
> +
Same here, rename ->mq_freeze_wq to ->freeze_wq.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 82+ messages in thread
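The blk_freeze_queue_start() function quoted above is easy to misread: the percpu_ref is killed only on the 0 → 1 transition of the freeze depth, so nested freezes merely bump the counter and only the outermost unfreeze revives it. A user-space sketch of that depth accounting (hypothetical names, not the kernel implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the recursive freeze counter: only the 0 -> 1 transition
 * kills the usage counter, and only the 1 -> 0 transition revives it.
 * Names are illustrative only. */
struct frz_queue {
    atomic_int freeze_depth;
    bool usage_live;               /* stands in for q_usage_counter */
};

void frz_freeze_start(struct frz_queue *q)
{
    /* atomic_fetch_add() returns the old value, hence the "+ 1". */
    if (atomic_fetch_add(&q->freeze_depth, 1) + 1 == 1)
        q->usage_live = false;     /* percpu_ref_kill() analogue */
}

void frz_unfreeze(struct frz_queue *q)
{
    if (atomic_fetch_sub(&q->freeze_depth, 1) - 1 == 0)
        q->usage_live = true;      /* percpu_ref_reinit() analogue */
}
```

This nesting is what lets several subsystems freeze the same queue concurrently without stepping on each other.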

* Re: [PATCH 1/9] blk-mq: Introduce blk_mq_queue_stopped()
  2016-09-26 18:26   ` Bart Van Assche
  (?)
@ 2016-09-27  7:38     ` Johannes Thumshirn
  -1 siblings, 0 replies; 82+ messages in thread
From: Johannes Thumshirn @ 2016-09-27  7:38 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On Mon, Sep 26, 2016 at 11:26:26AM -0700, Bart Van Assche wrote:
> The function blk_queue_stopped() makes it possible to test whether a
> traditional request queue has been stopped. Introduce a helper
> function that allows block drivers to easily query whether one or
> more hardware contexts of a blk-mq queue have been stopped.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Jens Axboe <axboe@fb.com>
> Cc: Christoph Hellwig <hch@lst.de>
> ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 2/9] dm: Fix a race condition related to stopping and starting queues
  2016-09-26 18:26   ` Bart Van Assche
  (?)
@ 2016-09-27  7:47     ` Johannes Thumshirn
  -1 siblings, 0 replies; 82+ messages in thread
From: Johannes Thumshirn @ 2016-09-27  7:47 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On Mon, Sep 26, 2016 at 11:26:50AM -0700, Bart Van Assche wrote:
> Ensure that all ongoing dm_mq_queue_rq() and dm_mq_requeue_request()
> calls have stopped before setting the "queue stopped" flag. This
> makes it possible to remove the "queue stopped" test from dm_mq_queue_rq() and
> dm_mq_requeue_request(). This patch fixes a race condition because
> dm_mq_queue_rq() is called without holding the queue lock and hence
> BLK_MQ_S_STOPPED can be set at any time while dm_mq_queue_rq() is
> in progress.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Mike Snitzer <snitzer@redhat.com>
> ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 5/9] block: Extend blk_freeze_queue_start() to the non-blk-mq path
  2016-09-26 18:27   ` Bart Van Assche
  (?)
@ 2016-09-27  7:50     ` Johannes Thumshirn
  -1 siblings, 0 replies; 82+ messages in thread
From: Johannes Thumshirn @ 2016-09-27  7:50 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On Mon, Sep 26, 2016 at 11:27:49AM -0700, Bart Van Assche wrote:
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 6/9] block: Rename mq_freeze_wq and mq_freeze_depth
  2016-09-26 18:28   ` Bart Van Assche
  (?)
@ 2016-09-27  7:51     ` Johannes Thumshirn
  -1 siblings, 0 replies; 82+ messages in thread
From: Johannes Thumshirn @ 2016-09-27  7:51 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On Mon, Sep 26, 2016 at 11:28:08AM -0700, Bart Van Assche wrote:
> Since these two structure members are now used in blk-mq and !blk-mq
> paths, remove the "mq_" prefix. This patch does not change any
> functionality.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 4/9] block: Move blk_freeze_queue() and blk_unfreeze_queue() code
@ 2016-09-27  7:52       ` Johannes Thumshirn
  0 siblings, 0 replies; 82+ messages in thread
From: Johannes Thumshirn @ 2016-09-27  7:52 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Bart Van Assche, Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On Tue, Sep 27, 2016 at 08:26:19AM +0200, Hannes Reinecke wrote:
> On 09/26/2016 08:27 PM, Bart Van Assche wrote:
> > Move the blk_freeze_queue() and blk_unfreeze_queue() implementations
> > from block/blk-mq.c to block/blk-core.c. Drop "_mq" from the name of
> > the functions that have been moved.
> > 
> > Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> > ---
> >  block/blk-core.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
> >  block/blk-mq.c   | 41 +++--------------------------------------
> >  block/blk.h      |  3 +++
> >  3 files changed, 51 insertions(+), 38 deletions(-)
> > 
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index b75d688..8cc8006 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -682,6 +682,51 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
> >  	wake_up_all(&q->mq_freeze_wq);
> >  }
> >  
> > +void blk_freeze_queue_start(struct request_queue *q)
> > +{
> > +	int freeze_depth;
> > +
> > +	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
> > +	if (freeze_depth == 1) {
> > +		percpu_ref_kill(&q->q_usage_counter);
> > +		blk_mq_run_hw_queues(q, false);
> > +	}
> > +}
> > +
> As you dropped the 'mq_' prefix, maybe you should rename the counter to
> 'freeze_depth', too?

See PATCH 6/9

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 5/9] block: Extend blk_freeze_queue_start() to the non-blk-mq path
  2016-09-26 18:27   ` Bart Van Assche
@ 2016-09-27 13:22     ` Ming Lei
  -1 siblings, 0 replies; 82+ messages in thread
From: Ming Lei @ 2016-09-27 13:22 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On Tue, Sep 27, 2016 at 2:27 AM, Bart Van Assche
<bart.vanassche@sandisk.com> wrote:
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> ---
>  block/blk-core.c | 15 ++++++---------
>  1 file changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 8cc8006..5ecc7ab 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -689,7 +689,10 @@ void blk_freeze_queue_start(struct request_queue *q)
>         freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
>         if (freeze_depth == 1) {
>                 percpu_ref_kill(&q->q_usage_counter);
> -               blk_mq_run_hw_queues(q, false);
> +               if (q->mq_ops)
> +                       blk_mq_run_hw_queues(q, false);
> +               else if (q->request_fn)
> +                       blk_run_queue(q);

Just wondering whether you have any non-blk-mq drivers that need this
change, since we only hold .q_usage_counter for sync bios.

>         }
>  }
>
> @@ -700,17 +703,11 @@ void blk_freeze_queue_wait(struct request_queue *q)
>
>  /*
>   * Guarantee no request is in use, so we can change any data structure of
> - * the queue afterward.
> + * the queue afterward. Increases q->mq_freeze_depth and waits until
> + * q->q_usage_counter drops to zero.
>   */
>  void blk_freeze_queue(struct request_queue *q)
>  {
> -       /*
> -        * In the !blk_mq case we are only calling this to kill the
> -        * q_usage_counter, otherwise this increases the freeze depth
> -        * and waits for it to return to zero.  For this reason there is
> -        * no blk_unfreeze_queue(), and blk_freeze_queue() is not
> -        * exported to drivers as the only user for unfreeze is blk_mq.
> -        */
>         blk_freeze_queue_start(q);
>         blk_freeze_queue_wait(q);
>  }
> --
> 2.10.0
>



-- 
Ming Lei

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 5/9] block: Extend blk_freeze_queue_start() to the non-blk-mq path
@ 2016-09-27 14:42       ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-27 14:42 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On 09/27/16 06:22, Ming Lei wrote:
> On Tue, Sep 27, 2016 at 2:27 AM, Bart Van Assche
> <bart.vanassche@sandisk.com> wrote:
>> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
>> ---
>>  block/blk-core.c | 15 ++++++---------
>>  1 file changed, 6 insertions(+), 9 deletions(-)
>>
>> diff --git a/block/blk-core.c b/block/blk-core.c
>> index 8cc8006..5ecc7ab 100644
>> --- a/block/blk-core.c
>> +++ b/block/blk-core.c
>> @@ -689,7 +689,10 @@ void blk_freeze_queue_start(struct request_queue *q)
>>         freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
>>         if (freeze_depth == 1) {
>>                 percpu_ref_kill(&q->q_usage_counter);
>> -               blk_mq_run_hw_queues(q, false);
>> +               if (q->mq_ops)
>> +                       blk_mq_run_hw_queues(q, false);
>> +               else if (q->request_fn)
>> +                       blk_run_queue(q);
>
> Just wondering if you have any non-blk-mq drivers which need this change,
> because we only hold .q_usage_counter for sync bios.

Hello Ming Lei,

Patch 8/9 calls blk_quiesce_queue() and blk_resume_queue() from a code 
path that is used in both blk-mq and non-blk-mq mode. Although it 
wouldn't be hard to modify that patch such that it only uses these two 
functions in blk-mq mode, that wouldn't be very elegant.

Jens, regarding non-blk-mq mode and q_usage_counter: do you prefer that 
I rework patch 8/9 such that blk_quiesce_queue() and blk_resume_queue() 
are only used in blk-mq mode or are you OK with adding a 
blk_queue_enter() call in get_request() and a blk_queue_exit() call to 
__blk_put_request()?
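To make the proposal concrete, here is a minimal user-space sketch of the enter/exit pairing being suggested. All names are invented stand-ins; the real get_request() and __blk_put_request() of course do far more than this:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: every allocated request holds a usage reference, so a
 * freezer that waits for usage == 0 also waits for in-flight
 * requests.  Invented names, illustration only. */
struct mini_queue {
	int frozen;	/* set by the freeze path */
	int usage;	/* models q->q_usage_counter */
};

struct mini_request {
	struct mini_queue *q;
};

/* blk_queue_enter() analogue: refuse new requests while frozen. */
int mini_queue_enter(struct mini_queue *q)
{
	if (q->frozen)
		return -1;
	q->usage++;
	return 0;
}

void mini_queue_exit(struct mini_queue *q)
{
	q->usage--;
}

/* get_request() analogue: take the reference at allocation time. */
struct mini_request *mini_get_request(struct mini_queue *q,
				      struct mini_request *slot)
{
	if (mini_queue_enter(q))
		return NULL;
	slot->q = q;
	return slot;
}

/* __blk_put_request() analogue: drop the reference on completion. */
void mini_put_request(struct mini_request *rq)
{
	mini_queue_exit(rq->q);
}
```

The point of the pairing is simply that the counter covers the whole lifetime of a legacy request, not just the bio submission path.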

Bart.


* Re: [PATCH 5/9] block: Extend blk_freeze_queue_start() to the non-blk-mq path
@ 2016-09-27 15:55         ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-27 15:55 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme

On 09/27/2016 07:42 AM, Bart Van Assche wrote:
> Jens, regarding non-blk-mq mode and q_usage_counter: do you prefer that
> I rework patch 8/9 such that blk_quiesce_queue() and blk_resume_queue()
> are only used in blk-mq mode or are you OK with adding a
> blk_queue_enter() call in get_request() and a blk_queue_exit() call to
> __blk_put_request()?

(replying to my own e-mail)

Although it is easy to make q_usage_counter count non-blk-mq requests, 
extending the blk_quiesce_queue() waiting mechanism to the non-blk-mq 
path is non-trivial. To limit the number of changes in this patch series 
I will drop the non-blk-mq changes from this patch series.

Bart.


* RE: [PATCH 9/9] [RFC] nvme: Fix a race condition
  2016-09-26 18:28   ` Bart Van Assche
@ 2016-09-27 16:31     ` Steve Wise
  -1 siblings, 0 replies; 82+ messages in thread
From: Steve Wise @ 2016-09-27 16:31 UTC (permalink / raw)
  To: 'Bart Van Assche', 'Jens Axboe'
  Cc: linux-block, 'James Bottomley',
	'Martin K. Petersen', 'Mike Snitzer',
	linux-rdma, linux-nvme, 'Keith Busch',
	'Doug Ledford', linux-scsi, 'Christoph Hellwig'

> @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
>  void nvme_stop_queues(struct nvme_ctrl *ctrl)
>  {
>  	struct nvme_ns *ns;
> +	struct request_queue *q;
> 
>  	mutex_lock(&ctrl->namespaces_mutex);
>  	list_for_each_entry(ns, &ctrl->namespaces, list) {
> -		blk_mq_cancel_requeue_work(ns->queue);
> -		blk_mq_stop_hw_queues(ns->queue);
> +		q = ns->queue;
> +		blk_quiesce_queue(q);
> +		blk_mq_cancel_requeue_work(q);
> +		blk_mq_stop_hw_queues(q);
> +		blk_resume_queue(q);
>  	}
>  	mutex_unlock(&ctrl->namespaces_mutex);

Hey Bart, should nvme_stop_queues() really be resuming the blk queue?





* Re: [PATCH 9/9] [RFC] nvme: Fix a race condition
  2016-09-27 16:31     ` Steve Wise
@ 2016-09-27 16:43       ` Bart Van Assche
  -1 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-27 16:43 UTC (permalink / raw)
  To: Steve Wise, 'Jens Axboe'
  Cc: linux-block, 'James Bottomley',
	'Martin K. Petersen', 'Mike Snitzer',
	linux-rdma, linux-nvme, 'Keith Busch',
	'Doug Ledford', linux-scsi, 'Christoph Hellwig'

On 09/27/2016 09:31 AM, Steve Wise wrote:
>> @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
>>  void nvme_stop_queues(struct nvme_ctrl *ctrl)
>>  {
>>  	struct nvme_ns *ns;
>> +	struct request_queue *q;
>>
>>  	mutex_lock(&ctrl->namespaces_mutex);
>>  	list_for_each_entry(ns, &ctrl->namespaces, list) {
>> -		blk_mq_cancel_requeue_work(ns->queue);
>> -		blk_mq_stop_hw_queues(ns->queue);
>> +		q = ns->queue;
>> +		blk_quiesce_queue(q);
>> +		blk_mq_cancel_requeue_work(q);
>> +		blk_mq_stop_hw_queues(q);
>> +		blk_resume_queue(q);
>>  	}
>>  	mutex_unlock(&ctrl->namespaces_mutex);
>
> Hey Bart, should nvme_stop_queues() really be resuming the blk queue?

Hello Steve,

Would you perhaps prefer that blk_resume_queue(q) is called from 
nvme_start_queues()? I think that would make the NVMe code harder to 
review. The above code won't cause any unexpected side effects if an 
NVMe namespace is removed after nvme_stop_queues() has been called and 
before nvme_start_queues() is called. Moving the blk_resume_queue(q) 
call into nvme_start_queues() will only work as expected if no 
namespaces are added nor removed between the nvme_stop_queues() and 
nvme_start_queues() calls. I'm not familiar enough with the NVMe code to 
know whether or not this change is safe ...

Bart.


* Re: [PATCH 9/9] [RFC] nvme: Fix a race condition
  2016-09-27 16:43       ` Bart Van Assche
@ 2016-09-27 16:56         ` James Bottomley
  -1 siblings, 0 replies; 82+ messages in thread
From: James Bottomley @ 2016-09-27 16:56 UTC (permalink / raw)
  To: Bart Van Assche, Steve Wise, 'Jens Axboe'
  Cc: linux-block, 'Martin K. Petersen', 'Mike Snitzer',
	linux-rdma, linux-nvme, 'Keith Busch',
	'Doug Ledford', linux-scsi, 'Christoph Hellwig'

On Tue, 2016-09-27 at 09:43 -0700, Bart Van Assche wrote:
> On 09/27/2016 09:31 AM, Steve Wise wrote:
> > > @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
> > >  void nvme_stop_queues(struct nvme_ctrl *ctrl)
> > >  {
> > >  	struct nvme_ns *ns;
> > > +	struct request_queue *q;
> > > 
> > >  	mutex_lock(&ctrl->namespaces_mutex);
> > >  	list_for_each_entry(ns, &ctrl->namespaces, list) {
> > > -		blk_mq_cancel_requeue_work(ns->queue);
> > > -		blk_mq_stop_hw_queues(ns->queue);
> > > +		q = ns->queue;
> > > +		blk_quiesce_queue(q);
> > > +		blk_mq_cancel_requeue_work(q);
> > > +		blk_mq_stop_hw_queues(q);
> > > +		blk_resume_queue(q);
> > >  	}
> > >  	mutex_unlock(&ctrl->namespaces_mutex);
> > 
> > Hey Bart, should nvme_stop_queues() really be resuming the blk
> > queue?
> 
> Hello Steve,
> 
> Would you perhaps prefer that blk_resume_queue(q) is called from 
> nvme_start_queues()? I think that would make the NVMe code harder to 
> review. The above code won't cause any unexpected side effects if an 
> NVMe namespace is removed after nvme_stop_queues() has been called 
> and before nvme_start_queues() is called. Moving the 
> blk_resume_queue(q) call into nvme_start_queues() will only work as 
> expected if no namespaces are added nor removed between the 
> nvme_stop_queues() and nvme_start_queues() calls. I'm not familiar 
> enough with the NVMe code to know whether or not this change is safe
> ...

It's something that looks obviously wrong, so explain why you need to
do it, preferably in a comment above the function.

James




* RE: [PATCH 9/9] [RFC] nvme: Fix a race condition
  2016-09-27 16:43       ` Bart Van Assche
@ 2016-09-27 16:56         ` Steve Wise
  -1 siblings, 0 replies; 82+ messages in thread
From: Steve Wise @ 2016-09-27 16:56 UTC (permalink / raw)
  To: 'Bart Van Assche', 'Jens Axboe'
  Cc: linux-block, 'James Bottomley',
	'Martin K. Petersen', 'Mike Snitzer',
	linux-rdma, linux-nvme, 'Keith Busch',
	'Doug Ledford', linux-scsi, 'Christoph Hellwig'

> On 09/27/2016 09:31 AM, Steve Wise wrote:
> >> @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
> >>  void nvme_stop_queues(struct nvme_ctrl *ctrl)
> >>  {
> >>  	struct nvme_ns *ns;
> >> +	struct request_queue *q;
> >>
> >>  	mutex_lock(&ctrl->namespaces_mutex);
> >>  	list_for_each_entry(ns, &ctrl->namespaces, list) {
> >> -		blk_mq_cancel_requeue_work(ns->queue);
> >> -		blk_mq_stop_hw_queues(ns->queue);
> >> +		q = ns->queue;
> >> +		blk_quiesce_queue(q);
> >> +		blk_mq_cancel_requeue_work(q);
> >> +		blk_mq_stop_hw_queues(q);
> >> +		blk_resume_queue(q);
> >>  	}
> >>  	mutex_unlock(&ctrl->namespaces_mutex);
> >
> > Hey Bart, should nvme_stop_queues() really be resuming the blk queue?
> 
> Hello Steve,
> 
> Would you perhaps prefer that blk_resume_queue(q) is called from
> nvme_start_queues()? I think that would make the NVMe code harder to
> review. 

I'm still learning the blk code (and nvme code :)), but I would think
blk_resume_queue() would cause requests to start being submitted on the NVMe
queues, which I believe shouldn't happen while they are stopped. I'm
currently debugging a problem where requests are submitted to the nvme-rdma
driver while it has supposedly stopped all of the nvme and blk-mq queues. I
tried your series at Christoph's request to see if it resolved my problem,
but it didn't.

> The above code won't cause any unexpected side effects if an
> NVMe namespace is removed after nvme_stop_queues() has been called and
> before nvme_start_queues() is called. Moving the blk_resume_queue(q)
> call into nvme_start_queues() will only work as expected if no
> namespaces are added nor removed between the nvme_stop_queues() and
> nvme_start_queues() calls. I'm not familiar enough with the NVMe code to
> know whether or not this change is safe ...
> 

I'll have to look and see if new namespaces can be added/deleted while an
nvme controller is in the RECONNECTING state. In the meantime, I'm going to
move the blk_resume_queue() call to nvme_start_queues() and see if it helps
my problem.

Christoph:  Thoughts?

Steve.



* Re: [PATCH 9/9] [RFC] nvme: Fix a race condition
@ 2016-09-27 17:09           ` Bart Van Assche
  0 siblings, 0 replies; 82+ messages in thread
From: Bart Van Assche @ 2016-09-27 17:09 UTC (permalink / raw)
  To: James Bottomley, Steve Wise, 'Jens Axboe'
  Cc: linux-block, 'Martin K. Petersen', 'Mike Snitzer',
	linux-rdma, linux-nvme, 'Keith Busch',
	'Doug Ledford', linux-scsi, 'Christoph Hellwig'



On 09/27/2016 09:56 AM, James Bottomley wrote:
> On Tue, 2016-09-27 at 09:43 -0700, Bart Van Assche wrote:
>> On 09/27/2016 09:31 AM, Steve Wise wrote:
>>>> @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
>>>>  void nvme_stop_queues(struct nvme_ctrl *ctrl)
>>>>  {
>>>>  	struct nvme_ns *ns;
>>>> +	struct request_queue *q;
>>>>
>>>>  	mutex_lock(&ctrl->namespaces_mutex);
>>>>  	list_for_each_entry(ns, &ctrl->namespaces, list) {
>>>> -		blk_mq_cancel_requeue_work(ns->queue);
>>>> -		blk_mq_stop_hw_queues(ns->queue);
>>>> +		q = ns->queue;
>>>> +		blk_quiesce_queue(q);
>>>> +		blk_mq_cancel_requeue_work(q);
>>>> +		blk_mq_stop_hw_queues(q);
>>>> +		blk_resume_queue(q);
>>>>  	}
>>>>  	mutex_unlock(&ctrl->namespaces_mutex);
>>>
>>> Hey Bart, should nvme_stop_queues() really be resuming the blk
>>> queue?
>>
>> Hello Steve,
>>
>> Would you perhaps prefer that blk_resume_queue(q) is called from
>> nvme_start_queues()? I think that would make the NVMe code harder to
>> review. The above code won't cause any unexpected side effects if an
>> NVMe namespace is removed after nvme_stop_queues() has been called
>> and before nvme_start_queues() is called. Moving the
>> blk_resume_queue(q) call into nvme_start_queues() will only work as
>> expected if no namespaces are added nor removed between the
>> nvme_stop_queues() and nvme_start_queues() calls. I'm not familiar
>> enough with the NVMe code to know whether or not this change is safe
>> ...
>
> It's something that looks obviously wrong, so explain why you need to
> do it, preferably in a comment above the function.

Hello James and Steve,

I will add a comment.

Please note that the above patch does not change the behavior of 
nvme_stop_queues() except that it causes nvme_stop_queues() to wait 
until any ongoing nvme_queue_rq() calls have finished. 
blk_resume_queue() does not affect the value of the BLK_MQ_S_STOPPED bit 
that has been set by blk_mq_stop_hw_queues(). All it does is resume 
pending blk_queue_enter() calls and ensure that future 
blk_queue_enter() calls do not block. Even after blk_resume_queue() has 
been called, queue_rq() won't be invoked for a newly queued request 
because the BLK_MQ_S_STOPPED bit is still set. The patch "dm: Fix a race 
condition related to stopping and starting queues" makes a similar 
change in the dm driver, and that change has been tested extensively.
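In other words there are two independent gates here, which can be modeled with a small user-space sketch (invented names; only the two pieces of state under discussion are represented):

```c
#include <assert.h>

/* Toy model of the two gates: the freeze depth gates request
 * submission (blk_queue_enter()), while the STOPPED bit gates
 * dispatch to queue_rq().  Invented names, illustration only. */
struct gate_queue {
	int freeze_depth;	/* models q->mq_freeze_depth */
	int stopped;		/* models the BLK_MQ_S_STOPPED hctx bit */
};

/* blk_quiesce_queue() analogue: block new submitters. */
void gate_quiesce(struct gate_queue *q)
{
	q->freeze_depth++;
}

/* blk_mq_stop_hw_queues() analogue: block dispatch. */
void gate_stop_hw(struct gate_queue *q)
{
	q->stopped = 1;
}

/* blk_resume_queue() analogue: note that it does NOT touch
 * q->stopped, only the submission gate. */
void gate_resume(struct gate_queue *q)
{
	q->freeze_depth--;
}

/* blk_queue_enter() succeeds again after resume... */
int gate_can_enter(const struct gate_queue *q)
{
	return q->freeze_depth == 0;
}

/* ...but queue_rq() still isn't called while the queue is stopped. */
int gate_can_dispatch(const struct gate_queue *q)
{
	return !q->stopped;
}
```

So after the quiesce/stop/resume sequence in the patch, submission is unblocked but dispatch stays off until nvme_start_queues() clears the stopped state.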

Bart.


* RE: [PATCH 9/9] [RFC] nvme: Fix a race condition
@ 2016-09-28 14:23             ` Steve Wise
  0 siblings, 0 replies; 82+ messages in thread
From: Steve Wise @ 2016-09-28 14:23 UTC (permalink / raw)
  To: 'Bart Van Assche', 'James Bottomley',
	'Jens Axboe'
  Cc: linux-block, 'Martin K. Petersen', 'Mike Snitzer',
	linux-rdma, linux-nvme, 'Keith Busch',
	'Doug Ledford', linux-scsi, 'Christoph Hellwig'

> 
> Hello James and Steve,
> 
> I will add a comment.
> 
> Please note that the above patch does not change the behavior of
> nvme_stop_queues() except that it causes nvme_stop_queues() to wait
> until any ongoing nvme_queue_rq() calls have finished.
> blk_resume_queue() does not affect the value of the BLK_MQ_S_STOPPED bit
> that has been set by blk_mq_stop_hw_queues(). All it does is resume
> pending blk_queue_enter() calls and ensure that future
> blk_queue_enter() calls do not block. Even after blk_resume_queue() has
> been called, queue_rq() won't be invoked for newly queued requests
> because the BLK_MQ_S_STOPPED bit is still set. Patch "dm: Fix a race
> condition related to stopping and starting queues" makes a similar
> change in the dm driver, and that change has been tested extensively.
> 

Thanks for the detailed explanation! I think your code, then, is correct as-is. And this series doesn't fix the issue I'm hitting, so I'll keep digging. :)

Steve.  


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue()
  2016-09-26 18:25 ` Bart Van Assche
@ 2016-10-11 16:27   ` Laurence Oberman
  -1 siblings, 0 replies; 82+ messages in thread
From: Laurence Oberman @ 2016-10-11 16:27 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, James Bottomley,
	Martin K. Petersen, Mike Snitzer, Doug Ledford, Keith Busch,
	linux-block, linux-scsi, linux-rdma, linux-nvme



----- Original Message -----
> From: "Bart Van Assche" <bart.vanassche@sandisk.com>
> To: "Jens Axboe" <axboe@fb.com>
> Cc: "Christoph Hellwig" <hch@lst.de>, "James Bottomley" <jejb@linux.vnet.ibm.com>, "Martin K. Petersen"
> <martin.petersen@oracle.com>, "Mike Snitzer" <snitzer@redhat.com>, "Doug Ledford" <dledford@redhat.com>, "Keith
> Busch" <keith.busch@intel.com>, linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org,
> linux-nvme@lists.infradead.org
> Sent: Monday, September 26, 2016 2:25:54 PM
> Subject: [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue()
> 
> Hello Jens,
> 
> Multiple block drivers need the functionality to stop a request queue
> and to wait until all ongoing request_fn() / queue_rq() calls have
> finished without waiting until all outstanding requests have finished.
> Hence this patch series that introduces the blk_quiesce_queue() and
> blk_resume_queue() functions. The dm-mq, SRP and nvme patches in this
> patch series are three examples of where these functions are useful.
> These patches apply on top of the September 21 version of your
> for-4.9/block branch. The individual patches in this series are:
> 
> 0001-blk-mq-Introduce-blk_mq_queue_stopped.patch
> 0002-dm-Fix-a-race-condition-related-to-stopping-and-star.patch
> 0003-RFC-nvme-Use-BLK_MQ_S_STOPPED-instead-of-QUEUE_FLAG_.patch
> 0004-block-Move-blk_freeze_queue-and-blk_unfreeze_queue-c.patch
> 0005-block-Extend-blk_freeze_queue_start-to-the-non-blk-m.patch
> 0006-block-Rename-mq_freeze_wq-and-mq_freeze_depth.patch
> 0007-blk-mq-Introduce-blk_quiesce_queue-and-blk_resume_qu.patch
> 0008-SRP-transport-Port-srp_wait_for_queuecommand-to-scsi.patch
> 0009-RFC-nvme-Fix-a-race-condition.patch
> 
> Thanks,
> 
> Bart.
> 
Hello

I took Bart's latest patches from his tree and ran all the SRP/RDMA tests and as many of the nvme tests as I could.

Everything is passing my tests, including SRP port resets etc.
The nvme tests were all run on a small Intel NVMe card.

Tested-by: Laurence Oberman <loberman@redhat.com>

^ permalink raw reply	[flat|nested] 82+ messages in thread

end of thread, other threads:[~2016-10-11 16:29 UTC | newest]

Thread overview: 82+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-26 18:25 [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue() Bart Van Assche
2016-09-26 18:26 ` [PATCH 1/9] blk-mq: Introduce blk_mq_queue_stopped() Bart Van Assche
2016-09-27  6:20   ` Hannes Reinecke
2016-09-27  7:38   ` Johannes Thumshirn
2016-09-26 18:26 ` [PATCH 2/9] dm: Fix a race condition related to stopping and starting queues Bart Van Assche
2016-09-27  6:21   ` Hannes Reinecke
2016-09-27  7:47   ` Johannes Thumshirn
2016-09-26 18:27 ` [PATCH 3/9] [RFC] nvme: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code Bart Van Assche
2016-09-26 18:27 ` [PATCH 4/9] block: Move blk_freeze_queue() and blk_unfreeze_queue() code Bart Van Assche
2016-09-27  6:26   ` Hannes Reinecke
2016-09-27  7:52     ` Johannes Thumshirn
2016-09-26 18:27 ` [PATCH 5/9] block: Extend blk_freeze_queue_start() to the non-blk-mq path Bart Van Assche
2016-09-27  7:50   ` Johannes Thumshirn
2016-09-27 13:22   ` Ming Lei
2016-09-27 14:42     ` Bart Van Assche
2016-09-27 15:55       ` Bart Van Assche
2016-09-26 18:28 ` [PATCH 6/9] block: Rename mq_freeze_wq and mq_freeze_depth Bart Van Assche
2016-09-27  7:51   ` Johannes Thumshirn
2016-09-26 18:28 ` [PATCH 7/9] blk-mq: Introduce blk_quiesce_queue() and blk_resume_queue() Bart Van Assche
2016-09-26 18:28 ` [PATCH 8/9] SRP transport: Port srp_wait_for_queuecommand() to scsi-mq Bart Van Assche
2016-09-26 18:28 ` [PATCH 9/9] [RFC] nvme: Fix a race condition Bart Van Assche
2016-09-27 16:31   ` Steve Wise
2016-09-27 16:43     ` Bart Van Assche
2016-09-27 16:56       ` James Bottomley
2016-09-27 17:09         ` Bart Van Assche
2016-09-28 14:23           ` Steve Wise
2016-09-27 16:56       ` Steve Wise
2016-09-26 18:33 ` [PATCH 0/9] Introduce blk_quiesce_queue() and blk_resume_queue() Mike Snitzer
2016-09-26 18:46   ` Bart Van Assche
2016-09-26 22:26   ` Bart Van Assche
2016-10-11 16:27 ` Laurence Oberman
