From: Christoph Hellwig <hch@lst.de>
To: Sagi Grimberg <sagi@grimberg.me>, Keith Busch <kbusch@kernel.org>,
	Jens Axboe <axboe@kernel.dk>
Cc: Chao Leng <lengchao@huawei.com>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [PATCH 2/2] nvme-multipath: don't block on blk_queue_enter of the underlying device
Date: Mon, 22 Mar 2021 08:37:26 +0100
Message-ID: <20210322073726.788347-3-hch@lst.de>
In-Reply-To: <20210322073726.788347-1-hch@lst.de>

When we reset/teardown a controller, we must freeze and quiesce the
namespaces' request queues to make sure that we safely stop inflight I/O
submissions. Freeze is mandatory because if our hctx map changed between
reconnects, blk_mq_update_nr_hw_queues will immediately attempt to freeze
the queue, and if it still has pending submissions (that are still
quiesced) it will hang.
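
Roughly, the sequencing in question looks like this (a simplified sketch,
not the exact code; call sites and ordering differ slightly between
nvme-tcp and nvme-rdma, and nr_io_queues stands in for the per-transport
queue count):

  /* teardown on reset / error recovery (simplified) */
  nvme_start_freeze(ctrl);   /* new submitters block in blk_queue_enter() */
  nvme_stop_queues(ctrl);    /* quiesce: no new ->queue_rq() invocations */

  /* ... tear down and later re-establish the transport queues ... */

  /* after a successful reconnect (simplified) */
  nvme_start_queues(ctrl);   /* unquiesce */
  blk_mq_update_nr_hw_queues(ctrl->tagset, nr_io_queues);
                             /* freezes the queues itself; would hang on
                                still-quiesced pending submissions */
  nvme_unfreeze(ctrl);       /* blocked and new submitters can proceed */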

However, by freezing the namespaces' request queues, and only unfreezing
them when we successfully reconnect, inflight submissions that are running
concurrently can now block while holding the nshead srcu until either we
successfully reconnect or ctrl_loss_tmo expires (or the user explicitly
disconnects).

This caused a deadlock when a different controller (a different path on
the same subsystem) became live (i.e. optimized/non-optimized). This is
because nvme_mpath_set_live needs to synchronize the nshead srcu before
requeueing I/O, in order to make sure that current_path is visible to
future (re-)submissions. However, the srcu lock is held by a submission
blocked on a frozen request queue, and we have a deadlock.
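
Spelled out, the two sides of the hang look like this (reconstructed from
the description above, not an actual trace):

  I/O submitter, on the path whose queue is frozen:
    srcu_read_lock(&head->srcu)
    submit_bio_noacct()
      blk_queue_enter()   <-- blocks until the queue is unfrozen, i.e.
                              until reconnect or ctrl_loss_tmo

  nvme_mpath_set_live, when a different path on the same head goes live:
    synchronize_srcu(&head->srcu)   <-- waits for the blocked reader above
    requeue of pending I/O          <-- never reached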

In order to fix this, use the blk_mq_submit_bio_direct API to submit the
bio to the low-level driver. It does not block on the queue freeze but
instead allows nvme-multipath to pick another path or queue up the bio.
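
For reference, the semantics relied on here can be sketched as follows
(a hedged sketch only; the real helper is added in patch 1/2 of this
series and its implementation may differ):

  #include <linux/blkdev.h>
  #include <linux/blk-mq.h>

  /*
   * Sketch: try to enter the queue without sleeping and hand the bio
   * straight to blk-mq.  Returns false if the queue could not be
   * entered (e.g. it is frozen), so the caller can fail over to
   * another path or requeue the bio instead of blocking.
   */
  bool blk_mq_submit_bio_direct(struct bio *bio, blk_qc_t *ret)
  {
          struct request_queue *q = bio->bi_bdev->bd_disk->queue;

          if (blk_queue_enter(q, BLK_MQ_REQ_NOWAIT))
                  return false;   /* frozen or dying: do not wait */

          /* blk_mq_submit_bio() consumes the queue reference taken above */
          *ret = blk_mq_submit_bio(bio);
          return true;
  }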

Fixes: 9f98772ba307 ("nvme-rdma: fix controller reset hang during traffic")
Fixes: 2875b0aecabe ("nvme-tcp: fix controller reset hang during traffic")
Reported-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/multipath.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index a1d476e1ac020f..92adebfaf86fd1 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -309,6 +309,7 @@ blk_qc_t nvme_ns_head_submit_bio(struct bio *bio)
 	 */
 	blk_queue_split(&bio);
 
+retry:
 	srcu_idx = srcu_read_lock(&head->srcu);
 	ns = nvme_find_path(head);
 	if (likely(ns)) {
@@ -316,7 +317,12 @@ blk_qc_t nvme_ns_head_submit_bio(struct bio *bio)
 		bio->bi_opf |= REQ_NVME_MPATH;
 		trace_block_bio_remap(bio, disk_devt(ns->head->disk),
 				      bio->bi_iter.bi_sector);
-		ret = submit_bio_noacct(bio);
+
+		if (!blk_mq_submit_bio_direct(bio, &ret)) {
+			nvme_mpath_clear_current_path(ns);
+			srcu_read_unlock(&head->srcu, srcu_idx);
+			goto retry;
+		}
 	} else if (nvme_available_path(head)) {
 		dev_warn_ratelimited(dev, "no usable path - requeuing I/O\n");
 
-- 
2.30.1

