linux-nvme.lists.infradead.org archive mirror
* [PATCH 1/2] nvme/host/pci: Fix a race in controller removal
@ 2019-09-13  2:44 Balbir Singh
  2019-09-13  2:44 ` [PATCH 2/2] nvme/host/core: Allow overriding of wait_ready timeout Balbir Singh
  2019-09-13 15:01 ` [PATCH 1/2] nvme/host/pci: Fix a race in controller removal Keith Busch
  0 siblings, 2 replies; 9+ messages in thread
From: Balbir Singh @ 2019-09-13  2:44 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, Balbir Singh, hch, sagi

This race is hard to hit in general, now that we
have the shutdown_lock in both nvme_reset_work() and
nvme_dev_disable().

The real issue is that after doing all the setup work
in nvme_reset_work(), if we get another timeout
(nvme_timeout()), we proceed to disable the controller.
This causes the reset work to only partially progress
and then fail.

Depending on the progress made, we call into
nvme_remove_dead_ctrl(), which does another
nvme_dev_disable(), freezing the block-mq queues.

I've noticed a race with udevd: while trying to re-read
the partition table, udevd holds bd_mutex and ends up
waiting in blk_queue_enter(), since we froze the queues
in nvme_dev_disable(). Meanwhile, nvme_kill_queues()
calls revalidate_disk() and waits on bd_mutex, resulting
in a deadlock.

Give the hung tasks a chance by unfreezing the queues
after marking the queue dying, then call revalidate_disk()
to update the disk size.

NOTE: I've seen this race when the controller does not
respond to I/Os or abort requests, but responds to other
commands and even signals that it is ready after its
reset, yet still drops I/O. I've tested this by emulating
the behaviour in the driver.
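
This isn't part of the patch, but the wait cycle above is the classic
two-party deadlock, and the fix is an ordering change. A minimal
userspace sketch, using Python threading primitives as stand-ins
(bd_mutex as a Lock, the queue-freeze state that blk_queue_enter()
waits on as an Event; all names are illustrative, not the kernel's
actual types):

```python
import threading

# Stand-ins for the two resources involved in the deadlock.
bd_mutex = threading.Lock()          # stand-in for bdev->bd_mutex
queue_unfrozen = threading.Event()   # set == queue unfrozen

def udevd_reread_partitions(done):
    # udevd path: takes bd_mutex, then blocks (as in blk_queue_enter())
    # until the queue is unfrozen.
    with bd_mutex:
        queue_unfrozen.wait(timeout=5)
    done.append("udevd")

def nvme_kill_queues_fixed(done):
    # Fixed ordering: unfreeze the queue *before* the revalidate step
    # tries to take bd_mutex, so udevd can make progress and release it.
    queue_unfrozen.set()             # like blk_mq_unfreeze_queue()
    with bd_mutex:                   # like revalidate_disk() -> bd_mutex
        pass
    done.append("kill_queues")

done = []
t1 = threading.Thread(target=udevd_reread_partitions, args=(done,))
t2 = threading.Thread(target=nvme_kill_queues_fixed, args=(done,))
t1.start(); t2.start()
t1.join(timeout=10); t2.join(timeout=10)
print(sorted(done))   # -> ['kill_queues', 'udevd']
```

With the old ordering (take bd_mutex for revalidate before unfreezing),
each side would hold what the other is waiting for and neither thread
would complete.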

Signed-off-by: Balbir Singh <sblbir@amzn.com>
---
 drivers/nvme/host/core.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index b45f82d58be8..45b96c6ac2d5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -103,10 +103,15 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 	 */
 	if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
 		return;
-	revalidate_disk(ns->disk);
 	blk_set_queue_dying(ns->queue);
+	/*
+	 * Allow any pending udevd commands to be unblocked so
+	 * that revalidate_disk() can then acquire bd_mutex.
+	 */
+	blk_mq_unfreeze_queue(ns->queue);
 	/* Forcibly unquiesce queues to avoid blocking dispatch */
 	blk_mq_unquiesce_queue(ns->queue);
+	revalidate_disk(ns->disk);
 }
 
 static void nvme_queue_scan(struct nvme_ctrl *ctrl)
-- 
2.16.5


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme



Thread overview: 9+ messages
2019-09-13  2:44 [PATCH 1/2] nvme/host/pci: Fix a race in controller removal Balbir Singh
2019-09-13  2:44 ` [PATCH 2/2] nvme/host/core: Allow overriding of wait_ready timeout Balbir Singh
2019-09-13 14:55   ` Keith Busch
2019-09-13 20:47     ` Singh, Balbir
2019-09-13 15:01 ` [PATCH 1/2] nvme/host/pci: Fix a race in controller removal Keith Busch
2019-09-13 20:58   ` Singh, Balbir
2019-09-13 21:40     ` Bart Van Assche
2019-09-13 22:01       ` Singh, Balbir
2019-09-13 22:46         ` Singh, Balbir
