* [PATCH 0/8] correct quiescing in several block drivers
@ 2017-07-04  7:55 ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme; +Cc: Christoph Hellwig, Keith Busch

Before we either iterate on tags or clean up the request queue,
we must guarantee that the hw queues are stopped and that no
.queue_rq invocation is still in flight. That's what
blk_mq_quiesce_queue is for, so use it where appropriate.
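
For reference, the pattern this series applies looks roughly like
the following (an illustrative sketch only; "q", "tag_set",
"cancel_fn" and "data" stand in for each driver's queue, tag set
and cancel callback):

	blk_mq_quiesce_queue(q);	/* waits out in-flight .queue_rq */
	blk_mq_tagset_busy_iter(tag_set, cancel_fn, data);
	blk_mq_unquiesce_queue(q);	/* let submissions resume */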

Sagi Grimberg (8):
  nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw
    queues
  nvme-fc: quiesce/unquiesce admin_q instead of start/stop its hw queues
  nvme-loop: quiesce/unquiesce admin_q instead of start/stop its hw
    queues
  nvme-pci: quiesce/unquiesce admin_q instead of start/stop its hw
    queues
  nbd: quiesce request queues to make sure no submissions are inflight
  mtip32xx: quiesce request queues to make sure no submissions are
    inflight
  virtio_blk: quiesce/unquiesce live IO when entering PM states
  xen-blockfront: quiesce IO before device removal

 drivers/block/mtip32xx/mtip32xx.c |  8 ++++----
 drivers/block/nbd.c               |  4 ++--
 drivers/block/virtio_blk.c        |  4 ++--
 drivers/block/xen-blkfront.c      |  8 ++++----
 drivers/nvme/host/fc.c            |  8 +++++---
 drivers/nvme/host/pci.c           | 10 ++++++----
 drivers/nvme/host/rdma.c          |  7 ++++---
 drivers/nvme/target/loop.c        |  2 +-
 8 files changed, 28 insertions(+), 23 deletions(-)

-- 
2.7.4

* [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme; +Cc: Christoph Hellwig, Keith Busch

Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
quiescing/unquiescing respects the submission path RCU grace
period. Also make sure to kick the requeue list when appropriate.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/rdma.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index cfb22531fc16..cec2c89cc8da 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -778,7 +778,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 
 	if (ctrl->ctrl.queue_count > 1)
 		nvme_stop_queues(&ctrl->ctrl);
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 
 	/* We must take care of fastfail/requeue all our inflight requests */
 	if (ctrl->ctrl.queue_count > 1)
@@ -791,7 +791,8 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 	 * queues are not a live anymore, so restart the queues to fail fast
 	 * new IO
 	 */
-	blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
+	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+	blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
 	nvme_start_queues(&ctrl->ctrl);
 
 	nvme_rdma_reconnect_or_remove(ctrl);
@@ -1636,7 +1637,7 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl)
 	if (test_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags))
 		nvme_shutdown_ctrl(&ctrl->ctrl);
 
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
 				nvme_cancel_request, &ctrl->ctrl);
 	nvme_rdma_destroy_admin_queue(ctrl);
-- 
2.7.4

* [PATCH 2/8] nvme-fc: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme; +Cc: Christoph Hellwig, Keith Busch

Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
quiescing/unquiescing respects the submission path RCU grace
period. Also make sure to kick the requeue list when appropriate.

Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/fc.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 50cc3b2b0e11..d63263f0b530 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2320,8 +2320,10 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 	if (ret)
 		goto out_delete_hw_queue;
 
-	if (ctrl->ctrl.state != NVME_CTRL_NEW)
-		blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
+	if (ctrl->ctrl.state != NVME_CTRL_NEW) {
+		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+		blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
+	}
 
 	ret = nvmf_connect_admin_queue(&ctrl->ctrl);
 	if (ret)
@@ -2475,7 +2477,7 @@ nvme_fc_delete_association(struct nvme_fc_ctrl *ctrl)
 	 * use blk_mq_tagset_busy_itr() and the transport routine to
 	 * terminate the exchanges.
 	 */
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
 				nvme_fc_terminate_exchange, &ctrl->ctrl);
 
-- 
2.7.4

* [PATCH 3/8] nvme-loop: quiesce admin_q instead of stopping hw queues
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme; +Cc: Christoph Hellwig, Keith Busch

Before we iterate over the tags, we need to make sure that
no requests are still being queued, so use the quiesce
helper for that.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/loop.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 3d51341e62ee..891002ee005f 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -434,7 +434,7 @@ static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
 	if (ctrl->ctrl.state == NVME_CTRL_LIVE)
 		nvme_shutdown_ctrl(&ctrl->ctrl);
 
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
 				nvme_cancel_request, &ctrl->ctrl);
 	nvme_loop_destroy_admin_queue(ctrl);
-- 
2.7.4

* [PATCH 4/8] nvme-pci: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme; +Cc: Christoph Hellwig, Keith Busch

Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
quiescing/unquiescing respects the submission path RCU grace period.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/pci.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index eb729ff70e7d..df7c8a355075 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1125,7 +1125,7 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
 	spin_unlock_irq(&nvmeq->q_lock);
 
 	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
-		blk_mq_stop_hw_queues(nvmeq->dev->ctrl.admin_q);
+		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
 
 	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
 
@@ -1315,7 +1315,7 @@ static void nvme_dev_remove_admin(struct nvme_dev *dev)
 		 * user requests may be waiting on a stopped queue. Start the
 		 * queue to flush these to completion.
 		 */
-		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
+		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
 		blk_cleanup_queue(dev->ctrl.admin_q);
 		blk_mq_free_tag_set(&dev->admin_tagset);
 	}
@@ -1351,8 +1351,10 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
 			dev->ctrl.admin_q = NULL;
 			return -ENODEV;
 		}
-	} else
-		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
+	} else {
+		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
+		blk_mq_kick_requeue_list(dev->ctrl.admin_q);
+	}
 
 	return 0;
 }
-- 
2.7.4

* [PATCH 5/8] nbd: quiesce request queues to make sure no submissions are inflight
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme
  Cc: Christoph Hellwig, Keith Busch, Josef Bacik

We must make sure that no requests are being queued before we iterate
over the tags. Quiesce/unquiesce the request queue instead of
starting/stopping the hw queues.

Cc: Josef Bacik <jbacik@fb.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/block/nbd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 977ec960dd2f..dea7d85134ee 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -661,9 +661,9 @@ static void nbd_clear_req(struct request *req, void *data, bool reserved)
 
 static void nbd_clear_que(struct nbd_device *nbd)
 {
-	blk_mq_stop_hw_queues(nbd->disk->queue);
+	blk_mq_quiesce_queue(nbd->disk->queue);
 	blk_mq_tagset_busy_iter(&nbd->tag_set, nbd_clear_req, NULL);
-	blk_mq_start_hw_queues(nbd->disk->queue);
+	blk_mq_unquiesce_queue(nbd->disk->queue);
 	dev_dbg(disk_to_dev(nbd->disk), "queue cleared\n");
 }
 
-- 
2.7.4

* [PATCH 6/8] mtip32xx: quiesce request queues to make sure no submissions are inflight
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme; +Cc: Christoph Hellwig, Keith Busch

Unlike blk_mq_stop_hw_queues, blk_mq_quiesce_queue respects the
submission path RCU grace period. Quiesce the queue before iterating
over live tags or performing device I/O quiescing.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/block/mtip32xx/mtip32xx.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 61b046f256ca..69a62aeace2a 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -950,7 +950,7 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
 	unsigned long to;
 	bool active = true;
 
-	blk_mq_stop_hw_queues(port->dd->queue);
+	blk_mq_quiesce_queue(port->dd->queue);
 
 	to = jiffies + msecs_to_jiffies(timeout);
 	do {
@@ -970,10 +970,10 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
 			break;
 	} while (time_before(jiffies, to));
 
-	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
+	blk_mq_unquiesce_queue(port->dd->queue);
 	return active ? -EBUSY : 0;
 err_fault:
-	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
+	blk_mq_unquiesce_queue(port->dd->queue);
 	return -EFAULT;
 }
 
@@ -3995,7 +3995,7 @@ static int mtip_block_remove(struct driver_data *dd)
 						dd->disk->disk_name);
 
 	blk_freeze_queue_start(dd->queue);
-	blk_mq_stop_hw_queues(dd->queue);
+	blk_mq_quiesce_queue(dd->queue);
 	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
 
 	/*
-- 
2.7.4

* [PATCH 7/8] virtio_blk: quiesce/unquiesce live IO when entering PM states
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme
  Cc: Christoph Hellwig, Keith Busch, Michael S . Tsirkin, Jason Wang

We must make sure that no requests are being queued before we
delete the vqs. Quiesce/unquiesce the request queue instead of
starting/stopping the hw queues.

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/block/virtio_blk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 0297ad7c1452..4e02aa5fdac0 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -840,7 +840,7 @@ static int virtblk_freeze(struct virtio_device *vdev)
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
 
-	blk_mq_stop_hw_queues(vblk->disk->queue);
+	blk_mq_quiesce_queue(vblk->disk->queue);
 
 	vdev->config->del_vqs(vdev);
 	return 0;
@@ -857,7 +857,7 @@ static int virtblk_restore(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
-	blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
+	blk_mq_unquiesce_queue(vblk->disk->queue);
 	return 0;
 }
 #endif
-- 
2.7.4

* [PATCH 8/8] xen-blockfront: quiesce IO before device removal
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  7:55   ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  7:55 UTC
  To: Jens Axboe, linux-block, linux-nvme
  Cc: Christoph Hellwig, Keith Busch, Konrad Rzeszutek Wilk,
	Roger Pau Monné,
	Boris Ostrovsky, Juergen Gross

Before calling blk_cleanup_queue, one must make sure that no
request is being queued. To guarantee that, we need to use
blk_mq_quiesce_queue, as it respects the submission path RCU
grace period.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/block/xen-blkfront.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c852ed3c01d5..5272ca8fb0dc 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1187,7 +1187,7 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
 		return;
 
 	/* No more blkif_request(). */
-	blk_mq_stop_hw_queues(info->rq);
+	blk_mq_quiesce_queue(info->rq);
 
 	for (i = 0; i < info->nr_rings; i++) {
 		struct blkfront_ring_info *rinfo = &info->rinfo[i];
@@ -1217,7 +1217,7 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
 static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
 {
 	if (!RING_FULL(&rinfo->ring))
-		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
+		blk_mq_unquiesce_queue(rinfo->dev_info->rq);
 }
 
 static void kick_pending_request_queues(struct blkfront_ring_info *rinfo)
@@ -1346,7 +1346,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
 	/* No more blkif_request(). */
 	if (info->rq)
-		blk_mq_stop_hw_queues(info->rq);
+		blk_mq_quiesce_queue(info->rq);
 
 	for (i = 0; i < info->nr_rings; i++)
 		blkif_free_ring(&info->rinfo[i]);
@@ -2032,7 +2032,7 @@ static int blkif_recover(struct blkfront_info *info)
 		BUG_ON(req->nr_phys_segments > segs);
 		blk_mq_requeue_request(req, false);
 	}
-	blk_mq_start_stopped_hw_queues(info->rq, true);
+	blk_mq_unquiesce_queue(info->rq);
 	blk_mq_kick_requeue_list(info->rq);
 
 	while ((bio = bio_list_pop(&info->bio_list)) != NULL) {
-- 
2.7.4

* Re: [PATCH 0/8] correct quiescing in several block drivers
  2017-07-04  7:55 ` Sagi Grimberg
@ 2017-07-04  8:12   ` Ming Lei
From: Ming Lei @ 2017-07-04  8:12 UTC
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

On Tue, Jul 04, 2017 at 10:55:04AM +0300, Sagi Grimberg wrote:
> Before we either iterate on tags or clean up the request queue,
> we must guarantee that the hw queues are stopped and that no
> .queue_rq invocation is still in flight. That's what
> blk_mq_quiesce_queue is for, so use it where appropriate.

Queue freezing is used in the cleanup path, and quiescing is not
required there.

Quiescing is required when canceling requests via
blk_mq_tagset_busy_iter(), to avoid a double release.

I think we should make that clear in the commit log.
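
Roughly, the two patterns look like this (a sketch only; "q",
"tag_set" and "ctrl" stand in for the driver's objects):

	/* cleanup path: freezing waits for requests to complete */
	blk_freeze_queue_start(q);
	blk_mq_freeze_queue_wait(q);

	/* cancel path: quiescing only waits out running .queue_rq calls */
	blk_mq_quiesce_queue(q);
	blk_mq_tagset_busy_iter(tag_set, nvme_cancel_request, ctrl);
	blk_mq_unquiesce_queue(q);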

> 
> Sagi Grimberg (8):
>   nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw
>     queues
>   nvme-fc: quiesce/unquiesce admin_q instead of start/stop its hw queues
>   nvme-loop: quiesce/unquiesce admin_q instead of start/stop its hw
>     queues
>   nvme-pci: quiesce/unquiesce admin_q instead of start/stop its hw
>     queues
>   nbd: quiesce request queues to make sure no submissions are inflight
>   mtip32xx: quiesce request queues to make sure no submissions are
>     inflight
>   virtio_blk: quiesce/unquiesce live IO when entering PM states
>   xen-blockfront: quiesce IO before device removal
> 
>  drivers/block/mtip32xx/mtip32xx.c |  8 ++++----
>  drivers/block/nbd.c               |  4 ++--
>  drivers/block/virtio_blk.c        |  4 ++--
>  drivers/block/xen-blkfront.c      |  8 ++++----
>  drivers/nvme/host/fc.c            |  8 +++++---
>  drivers/nvme/host/pci.c           | 10 ++++++----
>  drivers/nvme/host/rdma.c          |  7 ++++---
>  drivers/nvme/target/loop.c        |  2 +-
>  8 files changed, 28 insertions(+), 23 deletions(-)
> 
> -- 
> 2.7.4
> 

-- 
Ming

* Re: [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04  8:15     ` Ming Lei
From: Ming Lei @ 2017-07-04  8:15 UTC
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

On Tue, Jul 04, 2017 at 10:55:05AM +0300, Sagi Grimberg wrote:
> Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
> quiescing/unquiescing respects the submission path RCU grace
> period. Also make sure to kick the requeue list when appropriate.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/rdma.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index cfb22531fc16..cec2c89cc8da 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -778,7 +778,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>  
>  	if (ctrl->ctrl.queue_count > 1)
>  		nvme_stop_queues(&ctrl->ctrl);
> -	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
> +	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
>  
>  	/* We must take care of fastfail/requeue all our inflight requests */
>  	if (ctrl->ctrl.queue_count > 1)
> @@ -791,7 +791,8 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>  	 * queues are not a live anymore, so restart the queues to fail fast
>  	 * new IO
>  	 */
> -	blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
> +	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
> +	blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);

Now the queue won't be stopped via blk_mq_quiesce_queue(), so why do
you add blk_mq_kick_requeue_list() here?


Thanks,
Ming

* Re: [PATCH 2/8] nvme-fc: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04  8:18     ` Ming Lei
From: Ming Lei @ 2017-07-04  8:18 UTC
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

On Tue, Jul 04, 2017 at 10:55:06AM +0300, Sagi Grimberg wrote:
> Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
> quiescing/unquiescing respects the submission path RCU grace
> period. Also make sure to kick the requeue list when appropriate.
> 
> Reviewed-by: James Smart <james.smart@broadcom.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/fc.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 50cc3b2b0e11..d63263f0b530 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2320,8 +2320,10 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
>  	if (ret)
>  		goto out_delete_hw_queue;
>  
> -	if (ctrl->ctrl.state != NVME_CTRL_NEW)
> -		blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
> +	if (ctrl->ctrl.state != NVME_CTRL_NEW) {
> +		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
> +		blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);

Same question as in patch 1 about blk_mq_kick_requeue_list().

-- 
Ming

* Re: [PATCH 3/8] nvme-loop: quiesce admin_q instead of stopping hw queues
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04  8:23     ` Ming Lei
From: Ming Lei @ 2017-07-04  8:23 UTC
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

On Tue, Jul 04, 2017 at 10:55:07AM +0300, Sagi Grimberg wrote:
> Before we iterate over the tags, we need to make sure that
> no requests are still being queued, so use the quiesce
> helper for that.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/target/loop.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
> index 3d51341e62ee..891002ee005f 100644
> --- a/drivers/nvme/target/loop.c
> +++ b/drivers/nvme/target/loop.c
> @@ -434,7 +434,7 @@ static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
>  	if (ctrl->ctrl.state == NVME_CTRL_LIVE)
>  		nvme_shutdown_ctrl(&ctrl->ctrl);
>  
> -	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
> +	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
>  	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
>  				nvme_cancel_request, &ctrl->ctrl);
>  	nvme_loop_destroy_admin_queue(ctrl);

Is the queue killed before calling nvme_loop_destroy_admin_queue()? If
not, simply quiescing and never unquiescing will hang in
blk_cleanup_queue(). So I suggest putting blk_mq_unquiesce_queue()
after blk_mq_tagset_busy_iter().
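
I.e., roughly (a sketch mirroring the quoted hunk):

	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
				nvme_cancel_request, &ctrl->ctrl);
	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
	nvme_loop_destroy_admin_queue(ctrl);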

-- 
Ming

* Re: [PATCH 4/8] nvme-pci: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04  8:26     ` Ming Lei
From: Ming Lei @ 2017-07-04  8:26 UTC
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

On Tue, Jul 04, 2017 at 10:55:08AM +0300, Sagi Grimberg wrote:
> Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
> quiescing/unquiescing respects the submission path RCU grace period.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/pci.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index eb729ff70e7d..df7c8a355075 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1125,7 +1125,7 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
>  	spin_unlock_irq(&nvmeq->q_lock);
>  
>  	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
> -		blk_mq_stop_hw_queues(nvmeq->dev->ctrl.admin_q);
> +		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
>  
>  	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
>  
> @@ -1315,7 +1315,7 @@ static void nvme_dev_remove_admin(struct nvme_dev *dev)
>  		 * user requests may be waiting on a stopped queue. Start the
>  		 * queue to flush these to completion.
>  		 */
> -		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
> +		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
>  		blk_cleanup_queue(dev->ctrl.admin_q);
>  		blk_mq_free_tag_set(&dev->admin_tagset);
>  	}
> @@ -1351,8 +1351,10 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
>  			dev->ctrl.admin_q = NULL;
>  			return -ENODEV;
>  		}
> -	} else
> -		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
> +	} else {
> +		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
> +		blk_mq_kick_requeue_list(dev->ctrl.admin_q);

Again, blk_mq_kick_requeue_list() may not be required.

-- 
Ming

* Re: [PATCH 5/8] nbd: quiesce request queues to make sure no submissions are inflight
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04  8:28     ` Ming Lei
From: Ming Lei @ 2017-07-04  8:28 UTC
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Josef Bacik

On Tue, Jul 04, 2017 at 10:55:09AM +0300, Sagi Grimberg wrote:
> We must make sure that no requests are being queued before we iterate
> over the tags. quiesce/unquiesce the request queue istead of start/stop
> hw queues.
> 
> Cc: Josef Bacik <jbacik@fb.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/block/nbd.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> index 977ec960dd2f..dea7d85134ee 100644
> --- a/drivers/block/nbd.c
> +++ b/drivers/block/nbd.c
> @@ -661,9 +661,9 @@ static void nbd_clear_req(struct request *req, void *data, bool reserved)
>  
>  static void nbd_clear_que(struct nbd_device *nbd)
>  {
> -	blk_mq_stop_hw_queues(nbd->disk->queue);
> +	blk_mq_quiesce_queue(nbd->disk->queue);
>  	blk_mq_tagset_busy_iter(&nbd->tag_set, nbd_clear_req, NULL);
> -	blk_mq_start_hw_queues(nbd->disk->queue);
> +	blk_mq_unquiesce_queue(nbd->disk->queue);
>  	dev_dbg(disk_to_dev(nbd->disk), "queue cleared\n");

Reviewed-by: Ming Lei <ming.lei@redhat.com>

-- 
Ming

* Re: [PATCH 7/8] virtio_blk: quiesce/unquiesce live IO when entering PM states
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04  8:41     ` Ming Lei
From: Ming Lei @ 2017-07-04  8:41 UTC
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Michael S . Tsirkin, Jason Wang

On Tue, Jul 04, 2017 at 10:55:11AM +0300, Sagi Grimberg wrote:
> We must make sure that no requests are being queued before we iterate
> delete vqs. quiesce/unquiesce the request queue istead of start/stop
> hw queues.
> 
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/block/virtio_blk.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 0297ad7c1452..4e02aa5fdac0 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -840,7 +840,7 @@ static int virtblk_freeze(struct virtio_device *vdev)
>  	/* Make sure no work handler is accessing the device. */
>  	flush_work(&vblk->config_work);
>  
> -	blk_mq_stop_hw_queues(vblk->disk->queue);
> +	blk_mq_quiesce_queue(vblk->disk->queue);
>  
>  	vdev->config->del_vqs(vdev);
>  	return 0;
> @@ -857,7 +857,7 @@ static int virtblk_restore(struct virtio_device *vdev)
>  
>  	virtio_device_ready(vdev);
>  
> -	blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
> +	blk_mq_unquiesce_queue(vblk->disk->queue);
>  	return 0;
>  }
>  #endif
> -- 
> 2.7.4
> 

Looks fine, 

Reviewed-by: Ming Lei <ming.lei@redhat.com>
	

-- 
Ming

* Re: [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  8:15     ` Ming Lei
@ 2017-07-04  8:59       ` Sagi Grimberg
From: Sagi Grimberg @ 2017-07-04  8:59 UTC
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch


>> @@ -791,7 +791,8 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>>   	 * queues are not a live anymore, so restart the queues to fail fast
>>   	 * new IO
>>   	 */
>> -	blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
>> +	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>> +	blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
> 
> Now the queue won't be stopped via blk_mq_quiesce_queue(), so why do
> you add blk_mq_kick_requeue_list() here?

I think you're right.

We now quiesce the queue and fast-fail inflight I/O. In
nvme_complete_rq we call blk_mq_requeue_request with
!blk_mq_queue_stopped(req->q), which is now true.

So the requeue_work is triggered and requeues the request,
and when we unquiesce we simply run the hw queues again.

If we were to call it with !blk_queue_quiesced(req->q),
I think the kick would be needed though...

* Re: [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  8:59       ` Sagi Grimberg
@ 2017-07-04  9:07         ` Sagi Grimberg
  -1 siblings, 0 replies; 57+ messages in thread
From: Sagi Grimberg @ 2017-07-04  9:07 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch


>>> @@ -791,7 +791,8 @@ static void nvme_rdma_error_recovery_work(struct 
>>> work_struct *work)
>>>        * queues are not a live anymore, so restart the queues to fail 
>>> fast
>>>        * new IO
>>>        */
>>> -    blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
>>> +    blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>>> +    blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
>>
>> Now the queue won't be stopped via blk_mq_quiesce_queue(), so why do
>> you add blk_mq_kick_requeue_list() here?
> 
> I think you're right.
> 
> We now quiesce the queue and fast fail inflight io, in
> nvme_complete_rq we call blk_mq_requeue_request with
> !blk_mq_queue_stopped(req->q) which is now true.
> 
> So the requeue_work is triggered and requeue the request,
> and when we unquiesce we simply run the hw queues again.
> 
> If we were to call it with !blk_queue_quiesced(req->q)
> I think it would be needed though...

If you look at nvme_start_queues, it also kicks the requeue
work. I think that the proper fix for this is to _keep_ the
requeue kick and in nvme_complete_rq call:

blk_mq_requeue_request(req, !blk_queue_quiesced(req->q));
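
i.e., the one-line change would be roughly (untested sketch against the
requeue helper that nvme_complete_rq uses):

-	blk_mq_requeue_request(req, !blk_mq_queue_stopped(req->q));
+	blk_mq_requeue_request(req, !blk_queue_quiesced(req->q));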

Thoughts?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/8] nvme-loop: quiesce admin_q instead of stopping hw queues
  2017-07-04  8:23     ` Ming Lei
@ 2017-07-04  9:24       ` Sagi Grimberg
  -1 siblings, 0 replies; 57+ messages in thread
From: Sagi Grimberg @ 2017-07-04  9:24 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

>> diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
>> index 3d51341e62ee..891002ee005f 100644
>> --- a/drivers/nvme/target/loop.c
>> +++ b/drivers/nvme/target/loop.c
>> @@ -434,7 +434,7 @@ static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
>>   	if (ctrl->ctrl.state == NVME_CTRL_LIVE)
>>   		nvme_shutdown_ctrl(&ctrl->ctrl);
>>   
>> -	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
>> +	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
>>   	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
>>   				nvme_cancel_request, &ctrl->ctrl);
>>   	nvme_loop_destroy_admin_queue(ctrl);
> 
> Is the queue killed before calling nvme_loop_destroy_admin_queue()?

No, it's not.

> If not, simply quiescing and not unquiescing will hang in
> blk_cleanup_queue().

Why should it? blk_cleanup_queue does not hang, even under IO load...

> So I suggest putting blk_mq_unquiesce_queue() after blk_mq_tagset_busy_iter().

I can do that, but I don't understand why...

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/8] nvme-loop: quiesce admin_q instead of stopping hw queues
  2017-07-04  9:24       ` Sagi Grimberg
  (?)
@ 2017-07-04 10:38       ` Ming Lei
  -1 siblings, 0 replies; 57+ messages in thread
From: Ming Lei @ 2017-07-04 10:38 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: linux-nvme, Ming Lei, Keith Busch, Jens Axboe, Christoph Hellwig,
	linux-block

On Jul 4, 2017, at 5:25 PM, Sagi Grimberg <sagi@grimberg.me> wrote:

>>> diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
>>> index 3d51341e62ee..891002ee005f 100644
>>> --- a/drivers/nvme/target/loop.c
>>> +++ b/drivers/nvme/target/loop.c
>>> @@ -434,7 +434,7 @@ static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
>>>  	if (ctrl->ctrl.state == NVME_CTRL_LIVE)
>>>  		nvme_shutdown_ctrl(&ctrl->ctrl);
>>>  
>>> -	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
>>> +	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
>>>  	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
>>>  				nvme_cancel_request, &ctrl->ctrl);
>>>  	nvme_loop_destroy_admin_queue(ctrl);
>>
>> Is the queue killed before calling nvme_loop_destroy_admin_queue()?
>
> No, it's not.
>
>> If not, simply quiescing and not unquiescing will hang in
>> blk_cleanup_queue().
>
> Why should it? blk_cleanup_queue does not hang, even under IO load...

After the queue is quiesced, new requests can still come in and be put
into the sw queue or scheduler queue, but they can't be dispatched or
completed at all, so blk_cleanup_queue() never returns.
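
So I'd keep the sequence like this (untested sketch, using the names
from the patch):

	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
				nvme_cancel_request, &ctrl->ctrl);
	/* let blk_cleanup_queue() make forward progress again */
	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
	nvme_loop_destroy_admin_queue(ctrl);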

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04  9:07         ` Sagi Grimberg
@ 2017-07-04 12:41           ` Ming Lei
  -1 siblings, 0 replies; 57+ messages in thread
From: Ming Lei @ 2017-07-04 12:41 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, Keith Busch, Christoph Hellwig, linux-nvme

On Tue, Jul 04, 2017 at 12:07:38PM +0300, Sagi Grimberg wrote:
> 
> > > > @@ -791,7 +791,8 @@ static void
> > > > nvme_rdma_error_recovery_work(struct work_struct *work)
> > > >        * queues are not a live anymore, so restart the queues to
> > > > fail fast
> > > >        * new IO
> > > >        */
> > > > -    blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
> > > > +    blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
> > > > +    blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
> > > 
> > > Now the queue won't be stopped via blk_mq_quiesce_queue(), so why do
> > > you add blk_mq_kick_requeue_list() here?
> > 
> > I think you're right.
> > 
> > We now quiesce the queue and fast fail inflight io, in
> > nvme_complete_rq we call blk_mq_requeue_request with
> > !blk_mq_queue_stopped(req->q) which is now true.
> > 
> > So the requeue_work is triggered and requeue the request,
> > and when we unquiesce we simply run the hw queues again.
> > 
> > If we were to call it with !blk_queue_quiesced(req->q)
> > I think it would be needed though...
> 
> If you look at nvme_start_queues, it also kicks the requeue
> work. I think that the proper fix for this is _keep_ the

Then the kick can be removed from nvme_start_queues()

> requeue kick and in nvme_complete_rq call:
> 
> blk_mq_requeue_request(req, !blk_queue_quiesced(req->q));
> 
> Thoughts?

I think we can always kick the requeue work, even when the queue
is stopped. It is OK to put the requeued req into the sw queue/scheduler
queue while the queue is stopped.
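
I.e. (untested sketch) the requeue call could become unconditional:

	blk_mq_requeue_request(req, true);

and then nvme_start_queues() would no longer need its own
blk_mq_kick_requeue_list() call.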

-- 
Ming

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-04 12:41           ` Ming Lei
@ 2017-07-04 15:35             ` Sagi Grimberg
  -1 siblings, 0 replies; 57+ messages in thread
From: Sagi Grimberg @ 2017-07-04 15:35 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, Keith Busch, Christoph Hellwig, linux-nvme


> Then the kick can be removed from nvme_start_queues()
> 
>> requeue kick and in nvme_complete_rq call:
>>
>> blk_mq_requeue_request(req, !blk_queue_quiesced(req->q));
>>
>> Thoughts?
> 
> I think we can always to kick the requeue work even when queue
> is stopped. It is OK to put the requeue req into sw queue/scheduler
> queue when queue is stopped.
> 

Agreed.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 7/8] virtio_blk: quiesce/unquiesce live IO when entering PM states
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04 21:39     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2017-07-04 21:39 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Jason Wang

On Tue, Jul 04, 2017 at 10:55:11AM +0300, Sagi Grimberg wrote:
> We must make sure that no requests are being queued before we
> delete the vqs. Quiesce/unquiesce the request queue instead of
> starting/stopping the hw queues.
> 
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>


Acked-by: Michael S. Tsirkin <mst@redhat.com>

But please remember to Cc virtio mailing list on virtio patches.


> ---
>  drivers/block/virtio_blk.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 0297ad7c1452..4e02aa5fdac0 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -840,7 +840,7 @@ static int virtblk_freeze(struct virtio_device *vdev)
>  	/* Make sure no work handler is accessing the device. */
>  	flush_work(&vblk->config_work);
>  
> -	blk_mq_stop_hw_queues(vblk->disk->queue);
> +	blk_mq_quiesce_queue(vblk->disk->queue);
>  
>  	vdev->config->del_vqs(vdev);
>  	return 0;
> @@ -857,7 +857,7 @@ static int virtblk_restore(struct virtio_device *vdev)
>  
>  	virtio_device_ready(vdev);
>  
> -	blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
> +	blk_mq_unquiesce_queue(vblk->disk->queue);
>  	return 0;
>  }
>  #endif
> -- 
> 2.7.4

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 8/8] xen-blockfront: quiesce IO before device removal
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04 22:19     ` Ming Lei
  -1 siblings, 0 replies; 57+ messages in thread
From: Ming Lei @ 2017-07-04 22:19 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Konrad Rzeszutek Wilk, Roger Pau Monné,
	Boris Ostrovsky, Juergen Gross

On Tue, Jul 04, 2017 at 10:55:12AM +0300, Sagi Grimberg wrote:
> Before calling blk_cleanup_queue one must make sure
> that no request is being queued. In order to guarantee that,
> we need to use blk_mq_quiesce_queue, as it respects the
> submission-path RCU grace period.
> 
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/block/xen-blkfront.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index c852ed3c01d5..5272ca8fb0dc 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -1187,7 +1187,7 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
>  		return;
>  
>  	/* No more blkif_request(). */
> -	blk_mq_stop_hw_queues(info->rq);
> +	blk_mq_quiesce_queue(info->rq);
>  
>  	for (i = 0; i < info->nr_rings; i++) {
>  		struct blkfront_ring_info *rinfo = &info->rinfo[i];
> @@ -1217,7 +1217,7 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
>  static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
>  {
>  	if (!RING_FULL(&rinfo->ring))
> -		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
> +		blk_mq_unquiesce_queue(rinfo->dev_info->rq);
>  }

Looks like the above change isn't needed, since this blk_mq_start_stopped_hw_queues()
should have been the counterpart of the blk_mq_stop_hw_queue() in
blkif_queue_rq().
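
(The path I mean is roughly this, quoting blkif_queue_rq() from memory,
so take it as a sketch:)

out_busy:
	blk_mq_stop_hw_queue(hctx);
	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
	return BLK_MQ_RQ_QUEUE_BUSY;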

-- 
Ming

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 6/8] mtip32xx: quiesce request queues to make sure no submissions are inflight
  2017-07-04  7:55   ` Sagi Grimberg
@ 2017-07-04 22:32     ` Ming Lei
  -1 siblings, 0 replies; 57+ messages in thread
From: Ming Lei @ 2017-07-04 22:32 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

On Tue, Jul 04, 2017 at 10:55:10AM +0300, Sagi Grimberg wrote:
> Unlike blk_mq_stop_hw_queues, blk_mq_quiesce_queue respects the
> submission-path RCU grace period. Quiesce the queue before iterating
> on live tags or performing device IO quiescing.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/block/mtip32xx/mtip32xx.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
> index 61b046f256ca..69a62aeace2a 100644
> --- a/drivers/block/mtip32xx/mtip32xx.c
> +++ b/drivers/block/mtip32xx/mtip32xx.c
> @@ -950,7 +950,7 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
>  	unsigned long to;
>  	bool active = true;
>  
> -	blk_mq_stop_hw_queues(port->dd->queue);
> +	blk_mq_quiesce_queue(port->dd->queue);
>  
>  	to = jiffies + msecs_to_jiffies(timeout);
>  	do {
> @@ -970,10 +970,10 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
>  			break;
>  	} while (time_before(jiffies, to));
>  
> -	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
> +	blk_mq_unquiesce_queue(port->dd->queue);
>  	return active ? -EBUSY : 0;
>  err_fault:
> -	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
> +	blk_mq_unquiesce_queue(port->dd->queue);
>  	return -EFAULT;
>  }

The above change looks correct.

>  
> @@ -3995,7 +3995,7 @@ static int mtip_block_remove(struct driver_data *dd)
>  						dd->disk->disk_name);
>  
>  	blk_freeze_queue_start(dd->queue);
> -	blk_mq_stop_hw_queues(dd->queue);
> +	blk_mq_quiesce_queue(dd->queue);
>  	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);

We still need to unquiesce the queue to avoid hanging blk_freeze_queue()
in blk_cleanup_queue(), since any new request queued while the queue is
quiesced can't be dispatched to the driver/device.
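
I.e. something like this in mtip_block_remove() (untested sketch):

	blk_freeze_queue_start(dd->queue);
	blk_mq_quiesce_queue(dd->queue);
	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
	/* otherwise blk_freeze_queue() in blk_cleanup_queue() hangs */
	blk_mq_unquiesce_queue(dd->queue);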

There are other blk_mq_tagset_busy_iter() calls in mtip_service_thread(),
which look like they need quiescing too. That case is even worse, because
neither mtip_abort_cmd() nor mtip_queue_cmd() checks whether the req is
started.
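
(A started check would be something like the following untested sketch,
shown for mtip_abort_cmd():)

static void mtip_abort_cmd(struct request *req, void *data, bool reserved)
{
	if (!blk_mq_request_started(req))
		return;
	/* ... existing abort handling ... */
}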


-- 
Ming

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 8/8] xen-blockfront: quiesce IO before device removal
  2017-07-04 22:19     ` Ming Lei
@ 2017-07-05  6:29       ` Sagi Grimberg
  -1 siblings, 0 replies; 57+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:29 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Konrad Rzeszutek Wilk, Roger Pau Monné,
	Boris Ostrovsky, Juergen Gross



> Looks the above change isn't needed since this blk_mq_start_stopped_hw_queues()
> should have been the counterpart of blk_mq_stop_hw_queue() in
> blkif_queue_rq().

Removed the patch altogether in v2 as I'm not convinced it is needed.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 6/8] mtip32xx: quiesce request queues to make sure no submissions are inflight
  2017-07-04 22:32     ` Ming Lei
@ 2017-07-05  6:34       ` Sagi Grimberg
  -1 siblings, 0 replies; 57+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:34 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig, Keith Busch

>> @@ -3995,7 +3995,7 @@ static int mtip_block_remove(struct driver_data *dd)
>>   						dd->disk->disk_name);
>>   
>>   	blk_freeze_queue_start(dd->queue);
>> -	blk_mq_stop_hw_queues(dd->queue);
>> +	blk_mq_quiesce_queue(dd->queue);
>>   	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
> 
> We still need to unquiesce queue for avoiding hanging blk_freeze_queue()
> in blk_cleanup_queue() since any new request queued during quiescing
> can't be dispatched to driver/device.

Yes, already added it in v2.

> There are other blk_mq_tagset_busy_iter() in mtip_service_thread(),
> which looks need quiesce too.

Wasn't sure about those two... I agree it looks like quiescing is
needed, will add in v2.

> This case is even worse, because both
> mtip_abort_cmd() and mtip_queue_cmd() do not check if the req is
> started.

It's easy enough to add.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 8/8] xen-blockfront: quiesce IO before device removal
  2017-07-05  6:29       ` Sagi Grimberg
@ 2017-07-05 22:56         ` Christoph Hellwig
  -1 siblings, 0 replies; 57+ messages in thread
From: Christoph Hellwig @ 2017-07-05 22:56 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Ming Lei, Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch, Konrad Rzeszutek Wilk, Roger Pau Monné,
	Boris Ostrovsky, Juergen Gross

On Wed, Jul 05, 2017 at 09:29:26AM +0300, Sagi Grimberg wrote:
>
>
>> Looks the above change isn't needed since this blk_mq_start_stopped_hw_queues()
>> should have been the counterpart of blk_mq_stop_hw_queue() in
>> blkif_queue_rq().
>
> Removed the patch altogether in v2 as I'm not convinced it is needed.

I've only started reading the series, but to me it seems like the
two places where xen-blkfront currently calls blk_mq_stop_hw_queues
should probably be converted, as they are in the same category as the other
calls converted in your series.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 8/8] xen-blockfront: quiesce IO before device removal
  2017-07-05 22:56         ` Christoph Hellwig
@ 2017-07-06  6:52           ` Sagi Grimberg
  -1 siblings, 0 replies; 57+ messages in thread
From: Sagi Grimberg @ 2017-07-06  6:52 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Ming Lei, Jens Axboe, linux-block, linux-nvme, Keith Busch,
	Konrad Rzeszutek Wilk, Roger Pau Monné,
	Boris Ostrovsky, Juergen Gross


>>> Looks the above change isn't needed since this blk_mq_start_stopped_hw_queues()
>>> should have been the counterpart of blk_mq_stop_hw_queue() in
>>> blkif_queue_rq().
>>
>> Removed the patch altogether in v2 as I'm not convinced it is needed.
> 
> I've only started reading the series, but to me it seems like the
> two places where xen-blkfront currently calls blk_mq_stop_hw_queues
> should probably converted, as they are the same category as the other
> calls converted in your series.

I'm not familiar enough with the code; I can restore a corrected version
of it if that makes sense to you guys...

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2017-07-06  6:52 UTC | newest]

Thread overview: 57+ messages
2017-07-04  7:55 [PATCH 0/8] correct quiescing in several block drivers Sagi Grimberg
2017-07-04  7:55 ` [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues Sagi Grimberg
2017-07-04  8:15   ` Ming Lei
2017-07-04  8:59     ` Sagi Grimberg
2017-07-04  9:07       ` Sagi Grimberg
2017-07-04 12:41         ` Ming Lei
2017-07-04 15:35           ` Sagi Grimberg
2017-07-04  7:55 ` [PATCH 2/8] nvme-fc: " Sagi Grimberg
2017-07-04  8:18   ` Ming Lei
2017-07-04  7:55 ` [PATCH 3/8] nvme-loop: quiesce admin_q instead of stopping " Sagi Grimberg
2017-07-04  8:23   ` Ming Lei
2017-07-04  9:24     ` Sagi Grimberg
2017-07-04 10:38       ` Ming Lei
2017-07-04  7:55 ` [PATCH 4/8] nvme-pci: quiesce/unquiesce admin_q instead of start/stop its " Sagi Grimberg
2017-07-04  8:26   ` Ming Lei
2017-07-04  7:55 ` [PATCH 5/8] nbd: quiesce request queues to make sure no submissions are inflight Sagi Grimberg
2017-07-04  8:28   ` Ming Lei
2017-07-04  7:55 ` [PATCH 6/8] mtip32xx: " Sagi Grimberg
2017-07-04 22:32   ` Ming Lei
2017-07-05  6:34     ` Sagi Grimberg
2017-07-04  7:55 ` [PATCH 7/8] virtio_blk: quiesce/unquiesce live IO when entering PM states Sagi Grimberg
2017-07-04  8:41   ` Ming Lei
2017-07-04 21:39   ` Michael S. Tsirkin
2017-07-04  7:55 ` [PATCH 8/8] xen-blockfront: quiesce IO before device removal Sagi Grimberg
2017-07-04 22:19   ` Ming Lei
2017-07-05  6:29     ` Sagi Grimberg
2017-07-05 22:56       ` Christoph Hellwig
2017-07-06  6:52         ` Sagi Grimberg
2017-07-04  8:12 ` [PATCH 0/8] correct quiescing in several block drivers Ming Lei
