linux-nvme.lists.infradead.org archive mirror
* [PATCH v2 rfc 0/3] resolve controller delete hang due to ongoing mpath I/O
@ 2020-07-05  7:59 Sagi Grimberg
  2020-07-05  7:59 ` [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces Sagi Grimberg
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Sagi Grimberg @ 2020-07-05  7:59 UTC
  To: linux-nvme, Christoph Hellwig, Keith Busch; +Cc: Anton Eidelman

Changes from v1:
- Renamed the states to NVME_CTRL_DELETING and NVME_CTRL_DELETING_NOIO to
  better describe them
- Added a prep patch to split nvme_remove_namespaces into _prep_ and _do_
- Added a prep patch documenting the controller states

A deadlock happens in the following scenario with multipath:
1) scan_work(nvme0) detects a new nsid while nvme0
    is an optimized path to it; path nvme1 happens to be
    inaccessible.

2) Before scan_work completes, an nvme0 disconnect is initiated.
    nvme_delete_ctrl_sync() sets the nvme0 state to NVME_CTRL_DELETING.

3) scan_work(nvme0) attempts to submit I/O,
    but nvme_path_is_optimized() observes that nvme0 is not LIVE.
    Since nvme1 is still a possible path, the I/O is requeued and
    scan_work hangs.

4) The delete in turn hangs in flush_work(ctrl->scan_work),
    called from nvme_remove_namespaces().

Similarly, a deadlock with ana_work may happen: if ana_work has started
and calls nvme_mpath_set_live() and device_add_disk(), it will trigger
I/O. When we then initiate a disconnect, that I/O blocks because our
accessible (optimized) path is disconnecting while the alternate path
is inaccessible. The disconnect then tries to flush ana_work and hangs.
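
To make the ordering concrete, here is a schematic sketch of the two
hanging contexts (function names as in the traces quoted in patch 3;
the call chains are abridged and illustrative, not literal code):

	/* context A: nvme-wq worker */
	nvme_ana_work()
	  nvme_read_ana_log()
	    nvme_mpath_set_live()
	      device_add_disk()		/* partition scan issues I/O */
		/* I/O requeues indefinitely: no LIVE path available */

	/* context B: disconnect */
	nvme_do_delete_ctrl()
	  nvme_stop_ctrl()
	    nvme_mpath_stop()
	      cancel_work_sync(&ctrl->ana_work)	/* waits for A: deadlock */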

This patchset alters the nvme controller states to address this deadlock.

Feedback is welcome.

Sagi Grimberg (3):
  nvme: split nvme_remove_namespaces
  nvme: document nvme controller states
  nvme-core: fix deadlock in disconnect during scan_work and/or ana_work

 drivers/nvme/host/core.c      | 45 +++++++++++++++++++++++++++--------
 drivers/nvme/host/fc.c        |  1 +
 drivers/nvme/host/multipath.c | 13 +++++++++-
 drivers/nvme/host/nvme.h      | 23 +++++++++++++++++-
 drivers/nvme/host/pci.c       |  4 +++-
 drivers/nvme/host/rdma.c      | 10 ++++----
 drivers/nvme/host/tcp.c       | 15 +++++++-----
 7 files changed, 88 insertions(+), 23 deletions(-)

-- 
2.25.1



* [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces
  2020-07-05  7:59 [PATCH v2 rfc 0/3] resolve controller delete hang due to ongoing mpath I/O Sagi Grimberg
@ 2020-07-05  7:59 ` Sagi Grimberg
  2020-07-08 15:17   ` Christoph Hellwig
  2020-07-05  7:59 ` [PATCH v2 rfc 2/3] nvme: document nvme controller states Sagi Grimberg
  2020-07-05  7:59 ` [PATCH v2 rfc 3/3] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work Sagi Grimberg
  2 siblings, 1 reply; 8+ messages in thread
From: Sagi Grimberg @ 2020-07-05  7:59 UTC
  To: linux-nvme, Christoph Hellwig, Keith Busch; +Cc: Anton Eidelman

During controller deletion we will need to add a state transition after
we flush the namespace scanning, and only then continue to the
namespace removal. Hence, split nvme_remove_namespaces into _prep_ and
_do_ phases so that this transition can be added only at the
nvme_do_delete_ctrl call site.
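
For reference, this is the deletion sequence that patch 3 builds on top
of the split (the nvme_do_delete_ctrl() body as it looks after that
patch; shown here only to motivate the split):

	flush_work(&ctrl->reset_work);
	nvme_stop_ctrl(ctrl);
	nvme_prep_remove_namespaces(ctrl);	/* flushes scan_work */
	nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
	nvme_do_remove_namespaces(ctrl);
	ctrl->ops->delete_ctrl(ctrl);
	nvme_uninit_ctrl(ctrl);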

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 29 ++++++++++++++++++++---------
 drivers/nvme/host/nvme.h |  3 ++-
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 509bf4e1d423..f1bb2a522cf0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4091,16 +4091,8 @@ static void nvme_scan_work(struct work_struct *work)
 	up_write(&ctrl->namespaces_rwsem);
 }
 
-/*
- * This function iterates the namespace list unlocked to allow recovery from
- * controller failure. It is up to the caller to ensure the namespace list is
- * not modified by scan work while this function is executing.
- */
-void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
+void nvme_prep_remove_namespaces(struct nvme_ctrl *ctrl)
 {
-	struct nvme_ns *ns, *next;
-	LIST_HEAD(ns_list);
-
 	/*
 	 * make sure to requeue I/O to all namespaces as these
 	 * might result from the scan itself and must complete
@@ -4119,6 +4111,13 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
 	 */
 	if (ctrl->state == NVME_CTRL_DEAD)
 		nvme_kill_queues(ctrl);
+}
+EXPORT_SYMBOL_GPL(nvme_prep_remove_namespaces);
+
+void nvme_do_remove_namespaces(struct nvme_ctrl *ctrl)
+{
+	struct nvme_ns *ns, *next;
+	LIST_HEAD(ns_list);
 
 	down_write(&ctrl->namespaces_rwsem);
 	list_splice_init(&ctrl->namespaces, &ns_list);
@@ -4127,6 +4126,18 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
 	list_for_each_entry_safe(ns, next, &ns_list, list)
 		nvme_ns_remove(ns);
 }
+EXPORT_SYMBOL_GPL(nvme_do_remove_namespaces);
+
+/*
+ * This function iterates the namespace list unlocked to allow recovery from
+ * controller failure. It is up to the caller to ensure the namespace list is
+ * not modified by scan work while this function is executing.
+ */
+void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
+{
+	nvme_prep_remove_namespaces(ctrl);
+	nvme_do_remove_namespaces(ctrl);
+}
 EXPORT_SYMBOL_GPL(nvme_remove_namespaces);
 
 static int nvme_class_uevent(struct device *dev, struct kobj_uevent_env *env)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 85d76981b66e..f184ae623f12 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -530,7 +530,8 @@ void nvme_uninit_ctrl(struct nvme_ctrl *ctrl);
 void nvme_start_ctrl(struct nvme_ctrl *ctrl);
 void nvme_stop_ctrl(struct nvme_ctrl *ctrl);
 int nvme_init_identify(struct nvme_ctrl *ctrl);
-
+void nvme_prep_remove_namespaces(struct nvme_ctrl *ctrl);
+void nvme_do_remove_namespaces(struct nvme_ctrl *ctrl);
 void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
 
 int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
-- 
2.25.1



* [PATCH v2 rfc 2/3] nvme: document nvme controller states
  2020-07-05  7:59 [PATCH v2 rfc 0/3] resolve controller delete hang due to ongoing mpath I/O Sagi Grimberg
  2020-07-05  7:59 ` [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces Sagi Grimberg
@ 2020-07-05  7:59 ` Sagi Grimberg
  2020-07-05  7:59 ` [PATCH v2 rfc 3/3] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work Sagi Grimberg
  2 siblings, 0 replies; 8+ messages in thread
From: Sagi Grimberg @ 2020-07-05  7:59 UTC
  To: linux-nvme, Christoph Hellwig, Keith Busch; +Cc: Anton Eidelman

We are starting to see some non-trivial controller states, so let's
start documenting them.
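
As a rough sketch, the main transitions between these states look as
follows (derived from nvme_change_ctrl_state(); that function remains
the authoritative and complete transition table):

	NEW        -> LIVE | RESETTING | CONNECTING
	LIVE       -> RESETTING | DELETING
	RESETTING  -> LIVE | CONNECTING | DELETING
	CONNECTING -> LIVE | DELETING
	DELETING   -> DEAD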

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/nvme.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f184ae623f12..0bffa0435e24 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -173,6 +173,20 @@ static inline u16 nvme_req_qid(struct request *req)
  */
 #define NVME_QUIRK_DELAY_AMOUNT		2300
 
+/*
+ * enum nvme_ctrl_state: Controller state
+ *
+ * @NVME_CTRL_NEW:		New controller just allocated, initial state
+ * @NVME_CTRL_LIVE:		Controller is connected and I/O capable
+ * @NVME_CTRL_RESETTING:	Controller is resetting (or scheduled reset)
+ * @NVME_CTRL_CONNECTING:	Controller is disconnected, now connecting the
+ *				transport
+ * @NVME_CTRL_DELETING:		Controller is deleting (or scheduled deletion)
+ * @NVME_CTRL_DEAD:		Controller is non-present/unresponsive during
+ *				shutdown or removal. In this case we forcibly
+ *				kill all inflight I/O as they have no chance to
+ *				complete
+ */
 enum nvme_ctrl_state {
 	NVME_CTRL_NEW,
 	NVME_CTRL_LIVE,
-- 
2.25.1



* [PATCH v2 rfc 3/3] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
  2020-07-05  7:59 [PATCH v2 rfc 0/3] resolve controller delete hang due to ongoing mpath I/O Sagi Grimberg
  2020-07-05  7:59 ` [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces Sagi Grimberg
  2020-07-05  7:59 ` [PATCH v2 rfc 2/3] nvme: document nvme controller states Sagi Grimberg
@ 2020-07-05  7:59 ` Sagi Grimberg
  2 siblings, 0 replies; 8+ messages in thread
From: Sagi Grimberg @ 2020-07-05  7:59 UTC
  To: linux-nvme, Christoph Hellwig, Keith Busch; +Cc: Anton Eidelman

A deadlock happens in the following scenario with multipath:
1) scan_work(nvme0) detects a new nsid while nvme0
    is an optimized path to it; path nvme1 happens to be
    inaccessible.

2) Before scan_work completes, an nvme0 disconnect is initiated.
    nvme_delete_ctrl_sync() sets the nvme0 state to NVME_CTRL_DELETING.

3) scan_work(nvme0) attempts to submit I/O,
    but nvme_path_is_optimized() observes that nvme0 is not LIVE.
    Since nvme1 is still a possible path, the I/O is requeued and
    scan_work hangs.

--
Workqueue: nvme-wq nvme_scan_work [nvme_core]
kernel: Call Trace:
kernel:  __schedule+0x2b9/0x6c0
kernel:  schedule+0x42/0xb0
kernel:  io_schedule+0x16/0x40
kernel:  do_read_cache_page+0x438/0x830
kernel:  read_cache_page+0x12/0x20
kernel:  read_dev_sector+0x27/0xc0
kernel:  read_lba+0xc1/0x220
kernel:  efi_partition+0x1e6/0x708
kernel:  check_partition+0x154/0x244
kernel:  rescan_partitions+0xae/0x280
kernel:  __blkdev_get+0x40f/0x560
kernel:  blkdev_get+0x3d/0x140
kernel:  __device_add_disk+0x388/0x480
kernel:  device_add_disk+0x13/0x20
kernel:  nvme_mpath_set_live+0x119/0x140 [nvme_core]
kernel:  nvme_update_ns_ana_state+0x5c/0x60 [nvme_core]
kernel:  nvme_set_ns_ana_state+0x1e/0x30 [nvme_core]
kernel:  nvme_parse_ana_log+0xa1/0x180 [nvme_core]
kernel:  nvme_mpath_add_disk+0x47/0x90 [nvme_core]
kernel:  nvme_validate_ns+0x396/0x940 [nvme_core]
kernel:  nvme_scan_work+0x24f/0x380 [nvme_core]
kernel:  process_one_work+0x1db/0x380
kernel:  worker_thread+0x249/0x400
kernel:  kthread+0x104/0x140
--

4) The delete in turn hangs in flush_work(ctrl->scan_work),
    called from nvme_remove_namespaces().

Similarly, a deadlock with ana_work may happen: if ana_work has started
and calls nvme_mpath_set_live() and device_add_disk(), it will trigger
I/O. When we then initiate a disconnect, that I/O blocks because our
accessible (optimized) path is disconnecting while the alternate path
is inaccessible. The disconnect then tries to flush ana_work and hangs.

[  605.550896] Workqueue: nvme-wq nvme_ana_work [nvme_core]
[  605.552087] Call Trace:
[  605.552683]  __schedule+0x2b9/0x6c0
[  605.553507]  schedule+0x42/0xb0
[  605.554201]  io_schedule+0x16/0x40
[  605.555012]  do_read_cache_page+0x438/0x830
[  605.556925]  read_cache_page+0x12/0x20
[  605.557757]  read_dev_sector+0x27/0xc0
[  605.558587]  amiga_partition+0x4d/0x4c5
[  605.561278]  check_partition+0x154/0x244
[  605.562138]  rescan_partitions+0xae/0x280
[  605.563076]  __blkdev_get+0x40f/0x560
[  605.563830]  blkdev_get+0x3d/0x140
[  605.564500]  __device_add_disk+0x388/0x480
[  605.565316]  device_add_disk+0x13/0x20
[  605.566070]  nvme_mpath_set_live+0x5e/0x130 [nvme_core]
[  605.567114]  nvme_update_ns_ana_state+0x2c/0x30 [nvme_core]
[  605.568197]  nvme_update_ana_state+0xca/0xe0 [nvme_core]
[  605.569360]  nvme_parse_ana_log+0xa1/0x180 [nvme_core]
[  605.571385]  nvme_read_ana_log+0x76/0x100 [nvme_core]
[  605.572376]  nvme_ana_work+0x15/0x20 [nvme_core]
[  605.573330]  process_one_work+0x1db/0x380
[  605.574144]  worker_thread+0x4d/0x400
[  605.574896]  kthread+0x104/0x140
[  605.577205]  ret_from_fork+0x35/0x40
[  605.577955] INFO: task nvme:14044 blocked for more than 120 seconds.
[  605.579239]       Tainted: G           OE     5.3.5-050305-generic #201910071830
[  605.580712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  605.582320] nvme            D    0 14044  14043 0x00000000
[  605.583424] Call Trace:
[  605.583935]  __schedule+0x2b9/0x6c0
[  605.584625]  schedule+0x42/0xb0
[  605.585290]  schedule_timeout+0x203/0x2f0
[  605.588493]  wait_for_completion+0xb1/0x120
[  605.590066]  __flush_work+0x123/0x1d0
[  605.591758]  __cancel_work_timer+0x10e/0x190
[  605.593542]  cancel_work_sync+0x10/0x20
[  605.594347]  nvme_mpath_stop+0x2f/0x40 [nvme_core]
[  605.595328]  nvme_stop_ctrl+0x12/0x50 [nvme_core]
[  605.596262]  nvme_do_delete_ctrl+0x3f/0x90 [nvme_core]
[  605.597333]  nvme_sysfs_delete+0x5c/0x70 [nvme_core]
[  605.598320]  dev_attr_store+0x17/0x30

Fix this by introducing a new state: NVME_CTRL_DELETING_NOIO, which
indicates the phase of controller deletion where I/O can no longer be
allowed to access the namespace. NVME_CTRL_DELETING still allows mpath
I/O to be issued to the bottom device, and only after we flush ana_work
and scan_work (after nvme_stop_ctrl and nvme_prep_remove_namespaces) do
we change the state to NVME_CTRL_DELETING_NOIO. We also prevent
ana_work from re-firing by aborting early if we are not LIVE, so we
should be safe here.

In addition, change the transport drivers to follow the updated state
machine.
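
In mpath terms, the per-state treatment after this patch (summarizing
the nvme_path_is_disabled() change below) is:

	LIVE           -> path usable
	DELETING       -> path still usable; I/O can complete while the
			  controller remains connected
	DELETING_NOIO  -> path disabled; I/O fails fast and returns to
			  the requeue list
	other states   -> path disabled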

Fixes: 0d0b660f214d ("nvme: add ANA support")
Reported-by: Anton Eidelman <anton@lightbitslabs.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c      | 16 +++++++++++++++-
 drivers/nvme/host/fc.c        |  1 +
 drivers/nvme/host/multipath.c | 13 ++++++++++++-
 drivers/nvme/host/nvme.h      |  6 ++++++
 drivers/nvme/host/pci.c       |  4 +++-
 drivers/nvme/host/rdma.c      | 10 ++++++----
 drivers/nvme/host/tcp.c       | 15 +++++++++------
 7 files changed, 52 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f1bb2a522cf0..c2f0e6cfea28 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -168,7 +168,9 @@ static void nvme_do_delete_ctrl(struct nvme_ctrl *ctrl)
 
 	flush_work(&ctrl->reset_work);
 	nvme_stop_ctrl(ctrl);
-	nvme_remove_namespaces(ctrl);
+	nvme_prep_remove_namespaces(ctrl);
+	nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
+	nvme_do_remove_namespaces(ctrl);
 	ctrl->ops->delete_ctrl(ctrl);
 	nvme_uninit_ctrl(ctrl);
 }
@@ -366,6 +368,16 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
 			break;
 		}
 		break;
+	case NVME_CTRL_DELETING_NOIO:
+		switch (old_state) {
+		case NVME_CTRL_DELETING:
+		case NVME_CTRL_DEAD:
+			changed = true;
+			/* FALLTHRU */
+		default:
+			break;
+		}
+		break;
 	case NVME_CTRL_DEAD:
 		switch (old_state) {
 		case NVME_CTRL_DELETING:
@@ -403,6 +415,7 @@ static bool nvme_state_terminal(struct nvme_ctrl *ctrl)
 	case NVME_CTRL_CONNECTING:
 		return false;
 	case NVME_CTRL_DELETING:
+	case NVME_CTRL_DELETING_NOIO:
 	case NVME_CTRL_DEAD:
 		return true;
 	default:
@@ -3482,6 +3495,7 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
 		[NVME_CTRL_RESETTING]	= "resetting",
 		[NVME_CTRL_CONNECTING]	= "connecting",
 		[NVME_CTRL_DELETING]	= "deleting",
+		[NVME_CTRL_DELETING_NOIO]= "deleting (no IO)",
 		[NVME_CTRL_DEAD]	= "dead",
 	};
 
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index e999a8c4b7e8..549f5b0fb0b4 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -825,6 +825,7 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
 		break;
 
 	case NVME_CTRL_DELETING:
+	case NVME_CTRL_DELETING_NOIO:
 	default:
 		/* no action to take - let it delete */
 		break;
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 6cf80f7cf6be..395f7d1d5c54 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -167,7 +167,15 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
 
 static bool nvme_path_is_disabled(struct nvme_ns *ns)
 {
-	return ns->ctrl->state != NVME_CTRL_LIVE ||
+	/*
+	 * We don't treat NVME_CTRL_DELETING as a disabled path
+	 * as I/O should still be able to complete assuming that
+	 * the controller is connected, otherwise it'll fail
+	 * immediately and return to the requeue list. Only fail
+	 * for NVME_CTRL_DELETING_NOIO.
+	 */
+	return (ns->ctrl->state != NVME_CTRL_LIVE &&
+		ns->ctrl->state != NVME_CTRL_DELETING) ||
 		test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
 		test_bit(NVME_NS_REMOVING, &ns->flags);
 }
@@ -565,6 +573,9 @@ static void nvme_ana_work(struct work_struct *work)
 {
 	struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ana_work);
 
+	if (ctrl->state != NVME_CTRL_LIVE)
+		return;
+
 	nvme_read_ana_log(ctrl);
 }
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 0bffa0435e24..7762ce24df77 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -182,6 +182,11 @@ static inline u16 nvme_req_qid(struct request *req)
  * @NVME_CTRL_CONNECTING:	Controller is disconnected, now connecting the
  *				transport
  * @NVME_CTRL_DELETING:		Controller is deleting (or scheduled deletion)
+ * @NVME_CTRL_DELETING_NOIO:	Controller is deleting and I/O is now
+ *				disabled/failed immediately. This state
+ *				comes after all async event processing
+ *				took place and before ns removal and
+ *				further controller deletion progress
  * @NVME_CTRL_DEAD:		Controller is non-present/unresponsive during
  *				shutdown or removal. In this case we forcibly
  *				kill all inflight I/O as they have no chance to
@@ -193,6 +198,7 @@ enum nvme_ctrl_state {
 	NVME_CTRL_RESETTING,
 	NVME_CTRL_CONNECTING,
 	NVME_CTRL_DELETING,
+	NVME_CTRL_DELETING_NOIO,
 	NVME_CTRL_DEAD,
 };
 
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c283e8dbfb86..e9d57682d021 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2898,7 +2898,9 @@ static void nvme_remove(struct pci_dev *pdev)
 
 	flush_work(&dev->ctrl.reset_work);
 	nvme_stop_ctrl(&dev->ctrl);
-	nvme_remove_namespaces(&dev->ctrl);
+	nvme_prep_remove_namespaces(&dev->ctrl);
+	nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING_NOIO);
+	nvme_do_remove_namespaces(&dev->ctrl);
 	nvme_dev_disable(dev, true);
 	nvme_release_cmb(dev);
 	nvme_free_host_mem(dev);
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 13506a87a444..1900ae99ca7b 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1076,11 +1076,12 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
 	if (!changed) {
 		/*
-		 * state change failure is ok if we're in DELETING state,
+		 * state change failure is ok if we started ctrl delete,
 		 * unless we're during creation of a new controller to
 		 * avoid races with teardown flow.
 		 */
-		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
+		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
+			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
 		WARN_ON_ONCE(new);
 		ret = -EINVAL;
 		goto destroy_io;
@@ -1133,8 +1134,9 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 
 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure is ok if we're in DELETING state */
-		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
+		/* state change failure is ok if we started ctrl delete */
+		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
+			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
 		return;
 	}
 
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 860d7ddc2eee..14b6594b418a 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1948,11 +1948,12 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
 
 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) {
 		/*
-		 * state change failure is ok if we're in DELETING state,
+		 * state change failure is ok if we started ctrl delete,
 		 * unless we're during creation of a new controller to
 		 * avoid races with teardown flow.
 		 */
-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
+		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+			     ctrl->state != NVME_CTRL_DELETING_NOIO);
 		WARN_ON_ONCE(new);
 		ret = -EINVAL;
 		goto destroy_io;
@@ -2008,8 +2009,9 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 	blk_mq_unquiesce_queue(ctrl->admin_q);
 
 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure is ok if we're in DELETING state */
-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
+		/* state change failure is ok if we started ctrl delete */
+		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+			     ctrl->state != NVME_CTRL_DELETING_NOIO);
 		return;
 	}
 
@@ -2044,8 +2046,9 @@ static void nvme_reset_ctrl_work(struct work_struct *work)
 	nvme_tcp_teardown_ctrl(ctrl, false);
 
 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure is ok if we're in DELETING state */
-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
+		/* state change failure is ok if we started ctrl delete */
+		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+			     ctrl->state != NVME_CTRL_DELETING_NOIO);
 		return;
 	}
 
-- 
2.25.1



* Re: [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces
  2020-07-05  7:59 ` [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces Sagi Grimberg
@ 2020-07-08 15:17   ` Christoph Hellwig
  2020-07-10  4:35     ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2020-07-08 15:17 UTC
  To: Sagi Grimberg; +Cc: Keith Busch, Anton Eidelman, Christoph Hellwig, linux-nvme


I find the split rather confusing.  So I looked into alternatives
and found that the state change should just be a no-op for the PCIe
reset case.

So what about something like this on top of your series?

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 4b3bd9b85656e5..feee55903b1968 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -168,9 +168,7 @@ static void nvme_do_delete_ctrl(struct nvme_ctrl *ctrl)
 
 	flush_work(&ctrl->reset_work);
 	nvme_stop_ctrl(ctrl);
-	nvme_prep_remove_namespaces(ctrl);
-	nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
-	nvme_do_remove_namespaces(ctrl);
+	nvme_remove_namespaces(ctrl);
 	ctrl->ops->delete_ctrl(ctrl);
 	nvme_uninit_ctrl(ctrl);
 }
@@ -4104,8 +4102,16 @@ static void nvme_scan_work(struct work_struct *work)
 	up_write(&ctrl->namespaces_rwsem);
 }
 
-void nvme_prep_remove_namespaces(struct nvme_ctrl *ctrl)
+/*
+ * This function iterates the namespace list unlocked to allow recovery from
+ * controller failure. It is up to the caller to ensure the namespace list is
+ * not modified by scan work while this function is executing.
+ */
+void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
 {
+	struct nvme_ns *ns, *next;
+	LIST_HEAD(ns_list);
+
 	/*
 	 * make sure to requeue I/O to all namespaces as these
 	 * might result from the scan itself and must complete
@@ -4124,13 +4130,9 @@ void nvme_prep_remove_namespaces(struct nvme_ctrl *ctrl)
 	 */
 	if (ctrl->state == NVME_CTRL_DEAD)
 		nvme_kill_queues(ctrl);
-}
-EXPORT_SYMBOL_GPL(nvme_prep_remove_namespaces);
 
-void nvme_do_remove_namespaces(struct nvme_ctrl *ctrl)
-{
-	struct nvme_ns *ns, *next;
-	LIST_HEAD(ns_list);
+	/* this is a no-op when called from the controller reset handler */
+	nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
 
 	down_write(&ctrl->namespaces_rwsem);
 	list_splice_init(&ctrl->namespaces, &ns_list);
@@ -4139,18 +4141,6 @@ void nvme_do_remove_namespaces(struct nvme_ctrl *ctrl)
 	list_for_each_entry_safe(ns, next, &ns_list, list)
 		nvme_ns_remove(ns);
 }
-EXPORT_SYMBOL_GPL(nvme_do_remove_namespaces);
-
-/*
- * This function iterates the namespace list unlocked to allow recovery from
- * controller failure. It is up to the caller to ensure the namespace list is
- * not modified by scan work while this function is executing.
- */
-void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
-{
-	nvme_prep_remove_namespaces(ctrl);
-	nvme_do_remove_namespaces(ctrl);
-}
 EXPORT_SYMBOL_GPL(nvme_remove_namespaces);
 
 static int nvme_class_uevent(struct device *dev, struct kobj_uevent_env *env)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 1e74e0d62e2b11..900b35d47ec7ba 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -168,16 +168,17 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
 static bool nvme_path_is_disabled(struct nvme_ns *ns)
 {
 	/*
-	 * We don't treat NVME_CTRL_DELETING as a disabled path
-	 * as I/O should still be able to complete assuming that
-	 * the controller is connected, otherwise it'll fail
-	 * immediately and return to the requeue list. only fail
-	 * for NVME_CTRL_DELETING_NOIO
+	 * We don't treat NVME_CTRL_DELETING as a disabled path as I/O should
+	 * still be able to complete assuming that the controller is connected.
+	 * Otherwise it will fail immediately and return to the requeue list.
 	 */
-	return (ns->ctrl->state != NVME_CTRL_LIVE &&
-		ns->ctrl->state != NVME_CTRL_DELETING) ||
-		test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
-		test_bit(NVME_NS_REMOVING, &ns->flags);
+	if (ns->ctrl->state != NVME_CTRL_LIVE &&
+	    ns->ctrl->state != NVME_CTRL_DELETING)
+		return true;
+	if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
+	    test_bit(NVME_NS_REMOVING, &ns->flags))
+		return true;
+	return false;
 }
 
 static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 2ba5d0cee6df25..c22117cd9b41e2 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -552,8 +552,6 @@ void nvme_uninit_ctrl(struct nvme_ctrl *ctrl);
 void nvme_start_ctrl(struct nvme_ctrl *ctrl);
 void nvme_stop_ctrl(struct nvme_ctrl *ctrl);
 int nvme_init_identify(struct nvme_ctrl *ctrl);
-void nvme_prep_remove_namespaces(struct nvme_ctrl *ctrl);
-void nvme_do_remove_namespaces(struct nvme_ctrl *ctrl);
 void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
 
 int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 0f974f932ac4e0..74cced620b0484 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2620,7 +2620,7 @@ static void nvme_reset_work(struct work_struct *work)
 	if (dev->online_queues < 2) {
 		dev_warn(dev->ctrl.device, "IO queues not created\n");
 		nvme_kill_queues(&dev->ctrl);
-		nvme_remove_namespaces(&dev->ctrl);
+		nvme_remove_namespaces(&dev->ctrl);
 		nvme_free_tagset(dev);
 	} else {
 		nvme_start_queues(&dev->ctrl);
@@ -2899,9 +2899,7 @@ static void nvme_remove(struct pci_dev *pdev)
 
 	flush_work(&dev->ctrl.reset_work);
 	nvme_stop_ctrl(&dev->ctrl);
-	nvme_prep_remove_namespaces(&dev->ctrl);
-	nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING_NOIO);
-	nvme_do_remove_namespaces(&dev->ctrl);
+	nvme_remove_namespaces(&dev->ctrl);
 	nvme_dev_disable(dev, true);
 	nvme_release_cmb(dev);
 	nvme_free_host_mem(dev);


* Re: [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces
  2020-07-08 15:17   ` Christoph Hellwig
@ 2020-07-10  4:35     ` Sagi Grimberg
  2020-07-14 11:06       ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Sagi Grimberg @ 2020-07-10  4:35 UTC
  To: Christoph Hellwig; +Cc: Keith Busch, Anton Eidelman, linux-nvme

> I find the split rather confusing.  So I looked into alternatives
> and found that the state change should just be a no-op for the PCIe
> reset case.

Looks like the original version, but the one thing that bothers me is
that in the reset handler, if we failed to set up any I/O queues, we'll
put the state in DELETING_NOIO even though the controller is not
deleting. I think the point of allowing this is to keep the device up
for the user to analyze and debug it.


* Re: [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces
  2020-07-10  4:35     ` Sagi Grimberg
@ 2020-07-14 11:06       ` Christoph Hellwig
  2020-07-22 22:58         ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2020-07-14 11:06 UTC
  To: Sagi Grimberg; +Cc: Keith Busch, Anton Eidelman, Christoph Hellwig, linux-nvme

On Thu, Jul 09, 2020 at 09:35:13PM -0700, Sagi Grimberg wrote:
>> I find the split rather confusing.  So I looked into alternatives
>> and found that the state change should just be a no-op for the PCIe
>> reset case.
>
> Looks like the original version, but the one thing that bothers me is
> that in the reset handler, if we failed to set up any I/O queues, we'll
> put the state in DELETING_NOIO even though the controller is not
> deleting. I think the point of allowing this is to keep the device up
> for the user to analyze and debug it.

Based on the state machine, we can't really move from that state to
DELETING_NOIO, as indicated in the comment I added, can we?
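
(For reference, the transition rule added in patch 3 admits
DELETING_NOIO only from DELETING or DEAD; quoting the relevant excerpt
of nvme_change_ctrl_state() from that patch:)

	case NVME_CTRL_DELETING_NOIO:
		switch (old_state) {
		case NVME_CTRL_DELETING:
		case NVME_CTRL_DEAD:
			changed = true;
			/* FALLTHRU */
		default:
			break;
		}
		break;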


* Re: [PATCH v2 rfc 1/3] nvme: split nvme_remove_namespaces
  2020-07-14 11:06       ` Christoph Hellwig
@ 2020-07-22 22:58         ` Sagi Grimberg
  0 siblings, 0 replies; 8+ messages in thread
From: Sagi Grimberg @ 2020-07-22 22:58 UTC
  To: Christoph Hellwig; +Cc: Keith Busch, Anton Eidelman, linux-nvme


>>> I find the split rather confusing.  So I looked into alternatives
>>> and found that the state change should just be a no-op for the PCIe
>>> reset case.
>>
>> Looks like the original version, but the one thing that bothers me is
>> that in the reset handler, if we failed to set up any I/O queues, we'll
>> put the state in DELETING_NOIO even though the controller is not
>> deleting. I think the point of allowing this is to keep the device up
>> for the user to analyze and debug it.
> 
> Based on the state machine, we can't really move from that state to
> DELETING_NOIO, as indicated in the comment I added, can we?

You're right. Let me prepare a patch.

