linux-nvme.lists.infradead.org archive mirror
* [PATCHv2 0/6] nvme: double reset prevention
@ 2019-09-19 21:34 kbusch
  2019-09-19 21:34 ` [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues kbusch
                   ` (7 more replies)
  0 siblings, 8 replies; 41+ messages in thread
From: kbusch @ 2019-09-19 21:34 UTC (permalink / raw)
  To: linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, Edmund Nadolski, James Smart

From: Keith Busch <kbusch@kernel.org>

The main objective of this series is to prevent double resets. This sort
of thing is known to have happened when a timeout occurs at roughly the
same time as a user-initiated reset, such as through PCIe's FLR.

The double reset could happen because the controller disabling had been
occurring outside of the RESETTING state, when we can't schedule the
reset_work that is to occur later. When another reset schedules in
between these events, the controller ends up in the wrong state.
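
The race can be made concrete with a small user-space model (names are
illustrative and mirror the kernel's state names; this is not the kernel
code): once RESETTING is claimed atomically before the controller is
disabled, a second reset attempt fails instead of overlapping.

```c
#include <assert.h>
#include <errno.h>

/*
 * Illustrative user-space model of the state claim described above;
 * the names mirror the kernel's, but this is not the kernel code.
 */
enum ctrl_state { CTRL_LIVE, CTRL_RESETTING };

struct ctrl {
	enum ctrl_state state;
};

/*
 * Claim the RESETTING state before disabling the controller. A second
 * caller (e.g. a timeout handler racing a user-initiated FLR) gets
 * -EBUSY instead of starting an overlapping reset.
 */
static int ctrl_claim_reset(struct ctrl *c)
{
	if (c->state != CTRL_LIVE)
		return -EBUSY;
	c->state = CTRL_RESETTING;
	return 0;
}
```

In the kernel the claim happens under ctrl->lock in
nvme_change_ctrl_state(); the model above only captures the ordering.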

The end result of this series is simply to block subsequent resets by
initializing the controller state to RESETTING without actually
scheduling the reset_work. The first patches in this series are simply

v1 -> v2:

  Merged up to current linux-block for-5.4/post

  Changelog updates

  Patch reordering for correct chronological sequence

  Fixed state handling for f/w activation failures

  Included a review from James Smart on the patches that weren't changed

Keith Busch (6):
  nvme-pci: Free tagset if no IO queues
  nvme: Remove ADMIN_ONLY state
  nvme: Restart request timers in resetting state
  nvme: Introduce nvme_reset_continue
  nvme: Prevent resets during paused states
  nvme: Wait for reset state when required

 drivers/nvme/host/core.c    | 80 ++++++++++++++++++++++++-------------
 drivers/nvme/host/fabrics.h |  3 +-
 drivers/nvme/host/nvme.h    |  6 ++-
 drivers/nvme/host/pci.c     | 78 +++++++++++++++++++++---------------
 drivers/nvme/host/rdma.c    |  8 ++++
 drivers/nvme/host/tcp.c     |  8 ++++
 6 files changed, 121 insertions(+), 62 deletions(-)

-- 
2.21.0


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
@ 2019-09-19 21:34 ` kbusch
  2019-09-20 17:49   ` Sagi Grimberg
  2019-09-27 21:33   ` Christoph Hellwig
  2019-09-19 21:34 ` [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state kbusch
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 41+ messages in thread
From: kbusch @ 2019-09-19 21:34 UTC (permalink / raw)
  To: linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, Edmund Nadolski, James Smart

From: Keith Busch <kbusch@kernel.org>

If a controller becomes degraded after a reset, we will not be able to
perform any IO. We currently tear down the previously created request
queues and namespaces, but we had kept the unusable tagset. Free it
after all queues using it have been released.

Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/pci.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6b4d7b064b38..9c7bc7311817 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2488,14 +2488,20 @@ static void nvme_release_prp_pools(struct nvme_dev *dev)
 	dma_pool_destroy(dev->prp_small_pool);
 }
 
+static void nvme_free_tagset(struct nvme_dev *dev)
+{
+	if (dev->tagset.tags)
+		blk_mq_free_tag_set(&dev->tagset);
+	dev->ctrl.tagset = NULL;
+}
+
 static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
 {
 	struct nvme_dev *dev = to_nvme_dev(ctrl);
 
 	nvme_dbbuf_dma_free(dev);
 	put_device(dev->dev);
-	if (dev->tagset.tags)
-		blk_mq_free_tag_set(&dev->tagset);
+	nvme_free_tagset(dev);
 	if (dev->ctrl.admin_q)
 		blk_put_queue(dev->ctrl.admin_q);
 	kfree(dev->queues);
@@ -2614,6 +2620,7 @@ static void nvme_reset_work(struct work_struct *work)
 		nvme_kill_queues(&dev->ctrl);
 		nvme_remove_namespaces(&dev->ctrl);
 		new_state = NVME_CTRL_ADMIN_ONLY;
+		nvme_free_tagset(dev);
 	} else {
 		nvme_start_queues(&dev->ctrl);
 		nvme_wait_freeze(&dev->ctrl);
-- 
2.21.0



* [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
  2019-09-19 21:34 ` [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues kbusch
@ 2019-09-19 21:34 ` kbusch
  2019-09-20 17:53   ` Sagi Grimberg
  2019-09-27 21:35   ` Christoph Hellwig
  2019-09-19 21:34 ` [PATCHv2 3/6] nvme: Restart request timers in resetting state kbusch
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 41+ messages in thread
From: kbusch @ 2019-09-19 21:34 UTC (permalink / raw)
  To: linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, Edmund Nadolski, James Smart

From: Keith Busch <kbusch@kernel.org>

The admin only state was intended to fence off actions that don't
apply to a non-IO capable controller. The only actual user of this is
the scan_work, and pci was the only transport to ever set this state.
The consequence of having this state is placing an additional burden on
every other action that applies to both live and admin only controllers.

Remove the admin only state and place the admin only burden on the only
place that actually cares: scan_work.

This also prepares for making it easier to temporarily pause a LIVE
controller, so that we don't need to remember which state the controller
had been in prior to the pause.

Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/core.c    | 22 +++-------------------
 drivers/nvme/host/fabrics.h |  3 +--
 drivers/nvme/host/nvme.h    |  1 -
 drivers/nvme/host/pci.c     | 21 ++++++---------------
 4 files changed, 10 insertions(+), 37 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 108f60b46804..e03b1765fc01 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -113,7 +113,7 @@ static void nvme_queue_scan(struct nvme_ctrl *ctrl)
 	/*
 	 * Only new queue scan work when admin and IO queues are both alive
 	 */
-	if (ctrl->state == NVME_CTRL_LIVE)
+	if (ctrl->state == NVME_CTRL_LIVE && ctrl->tagset)
 		queue_work(nvme_wq, &ctrl->scan_work);
 }
 
@@ -134,8 +134,7 @@ int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl)
 	ret = nvme_reset_ctrl(ctrl);
 	if (!ret) {
 		flush_work(&ctrl->reset_work);
-		if (ctrl->state != NVME_CTRL_LIVE &&
-		    ctrl->state != NVME_CTRL_ADMIN_ONLY)
+		if (ctrl->state != NVME_CTRL_LIVE)
 			ret = -ENETRESET;
 	}
 
@@ -312,15 +311,6 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
 
 	old_state = ctrl->state;
 	switch (new_state) {
-	case NVME_CTRL_ADMIN_ONLY:
-		switch (old_state) {
-		case NVME_CTRL_CONNECTING:
-			changed = true;
-			/* FALLTHRU */
-		default:
-			break;
-		}
-		break;
 	case NVME_CTRL_LIVE:
 		switch (old_state) {
 		case NVME_CTRL_NEW:
@@ -336,7 +326,6 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
 		switch (old_state) {
 		case NVME_CTRL_NEW:
 		case NVME_CTRL_LIVE:
-		case NVME_CTRL_ADMIN_ONLY:
 			changed = true;
 			/* FALLTHRU */
 		default:
@@ -356,7 +345,6 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
 	case NVME_CTRL_DELETING:
 		switch (old_state) {
 		case NVME_CTRL_LIVE:
-		case NVME_CTRL_ADMIN_ONLY:
 		case NVME_CTRL_RESETTING:
 		case NVME_CTRL_CONNECTING:
 			changed = true;
@@ -2786,7 +2774,6 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
 
 	switch (ctrl->state) {
 	case NVME_CTRL_LIVE:
-	case NVME_CTRL_ADMIN_ONLY:
 		break;
 	default:
 		return -EWOULDBLOCK;
@@ -3076,7 +3063,6 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
 	static const char *const state_name[] = {
 		[NVME_CTRL_NEW]		= "new",
 		[NVME_CTRL_LIVE]	= "live",
-		[NVME_CTRL_ADMIN_ONLY]	= "only-admin",
 		[NVME_CTRL_RESETTING]	= "resetting",
 		[NVME_CTRL_CONNECTING]	= "connecting",
 		[NVME_CTRL_DELETING]	= "deleting",
@@ -3585,11 +3571,9 @@ static void nvme_scan_work(struct work_struct *work)
 	struct nvme_id_ctrl *id;
 	unsigned nn;
 
-	if (ctrl->state != NVME_CTRL_LIVE)
+	if (ctrl->state != NVME_CTRL_LIVE || !ctrl->tagset)
 		return;
 
-	WARN_ON_ONCE(!ctrl->tagset);
-
 	if (test_and_clear_bit(NVME_AER_NOTICE_NS_CHANGED, &ctrl->events)) {
 		dev_info(ctrl->device, "rescanning namespaces.\n");
 		nvme_clear_changed_ns_log(ctrl);
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 93f08d77c896..a0ec40ab62ee 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -182,8 +182,7 @@ bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
 static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
 		bool queue_live)
 {
-	if (likely(ctrl->state == NVME_CTRL_LIVE ||
-		   ctrl->state == NVME_CTRL_ADMIN_ONLY))
+	if (likely(ctrl->state == NVME_CTRL_LIVE))
 		return true;
 	return __nvmf_check_ready(ctrl, rq, queue_live);
 }
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index b5013c101b35..55ffae820c8f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -161,7 +161,6 @@ static inline u16 nvme_req_qid(struct request *req)
 enum nvme_ctrl_state {
 	NVME_CTRL_NEW,
 	NVME_CTRL_LIVE,
-	NVME_CTRL_ADMIN_ONLY,    /* Only admin queue live */
 	NVME_CTRL_RESETTING,
 	NVME_CTRL_CONNECTING,
 	NVME_CTRL_DELETING,
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 9c7bc7311817..9a9e90efcc95 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2261,10 +2261,7 @@ static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)
 	return true;
 }
 
-/*
- * return error value only when tagset allocation failed
- */
-static int nvme_dev_add(struct nvme_dev *dev)
+static void nvme_dev_add(struct nvme_dev *dev)
 {
 	int ret;
 
@@ -2294,7 +2291,7 @@ static int nvme_dev_add(struct nvme_dev *dev)
 		if (ret) {
 			dev_warn(dev->ctrl.device,
 				"IO queues tagset allocation failed %d\n", ret);
-			return ret;
+			return;
 		}
 		dev->ctrl.tagset = &dev->tagset;
 	} else {
@@ -2305,7 +2302,6 @@ static int nvme_dev_add(struct nvme_dev *dev)
 	}
 
 	nvme_dbbuf_set(dev);
-	return 0;
 }
 
 static int nvme_pci_enable(struct nvme_dev *dev)
@@ -2525,7 +2521,6 @@ static void nvme_reset_work(struct work_struct *work)
 		container_of(work, struct nvme_dev, ctrl.reset_work);
 	bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);
 	int result;
-	enum nvme_ctrl_state new_state = NVME_CTRL_LIVE;
 
 	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) {
 		result = -ENODEV;
@@ -2619,14 +2614,11 @@ static void nvme_reset_work(struct work_struct *work)
 		dev_warn(dev->ctrl.device, "IO queues not created\n");
 		nvme_kill_queues(&dev->ctrl);
 		nvme_remove_namespaces(&dev->ctrl);
-		new_state = NVME_CTRL_ADMIN_ONLY;
 		nvme_free_tagset(dev);
 	} else {
 		nvme_start_queues(&dev->ctrl);
 		nvme_wait_freeze(&dev->ctrl);
-		/* hit this only when allocate tagset fails */
-		if (nvme_dev_add(dev))
-			new_state = NVME_CTRL_ADMIN_ONLY;
+		nvme_dev_add(dev);
 		nvme_unfreeze(&dev->ctrl);
 	}
 
@@ -2634,9 +2626,9 @@ static void nvme_reset_work(struct work_struct *work)
 	 * If only admin queue live, keep it to do further investigation or
 	 * recovery.
 	 */
-	if (!nvme_change_ctrl_state(&dev->ctrl, new_state)) {
+	if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_LIVE)) {
 		dev_warn(dev->ctrl.device,
-			"failed to mark controller state %d\n", new_state);
+			"failed to mark controller live state\n");
 		result = -ENODEV;
 		goto out;
 	}
@@ -2943,8 +2935,7 @@ static int nvme_suspend(struct device *dev)
 	nvme_wait_freeze(ctrl);
 	nvme_sync_queues(ctrl);
 
-	if (ctrl->state != NVME_CTRL_LIVE &&
-	    ctrl->state != NVME_CTRL_ADMIN_ONLY)
+	if (ctrl->state != NVME_CTRL_LIVE)
 		goto unfreeze;
 
 	ret = nvme_get_power_state(ctrl, &ndev->last_ps);
-- 
2.21.0



* [PATCHv2 3/6] nvme: Restart request timers in resetting state
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
  2019-09-19 21:34 ` [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues kbusch
  2019-09-19 21:34 ` [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state kbusch
@ 2019-09-19 21:34 ` kbusch
  2019-09-20 17:53   ` Sagi Grimberg
  2019-09-27 21:36   ` Christoph Hellwig
  2019-09-19 21:34 ` [PATCHv2 4/6] nvme: Introduce nvme_reset_continue kbusch
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 41+ messages in thread
From: kbusch @ 2019-09-19 21:34 UTC (permalink / raw)
  To: linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, Edmund Nadolski, James Smart

From: Keith Busch <kbusch@kernel.org>

A controller in the resetting state has not yet completed its recovery
actions. The pci and fc transports already restart the request timer in
this state rather than attempting additional recovery, so update the
remaining transports to do the same.

Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/rdma.c | 8 ++++++++
 drivers/nvme/host/tcp.c  | 8 ++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index dfa07bb9dfeb..c753f2cdfeb5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1694,6 +1694,14 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
 	dev_warn(ctrl->ctrl.device, "I/O %d QID %d timeout\n",
 		 rq->tag, nvme_rdma_queue_idx(queue));
 
+	/*
+	 * Restart the timer if a controller reset is already scheduled. Any
+	 * timed out commands would be handled before entering the connecting
+	 * state.
+	 */
+	if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
+		return BLK_EH_RESET_TIMER;
+
 	if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
 		/*
 		 * Teardown immediately if controller times out while starting
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 4ffd5957637a..e6729cc7656c 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2044,6 +2044,14 @@ nvme_tcp_timeout(struct request *rq, bool reserved)
 	struct nvme_tcp_ctrl *ctrl = req->queue->ctrl;
 	struct nvme_tcp_cmd_pdu *pdu = req->pdu;
 
+	/*
+	 * Restart the timer if a controller reset is already scheduled. Any
+	 * timed out commands would be handled before entering the connecting
+	 * state.
+	 */
+	if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
+		return BLK_EH_RESET_TIMER;
+
 	dev_warn(ctrl->ctrl.device,
 		"queue %d: timeout request %#x type %d\n",
 		nvme_tcp_queue_id(req->queue), rq->tag, pdu->hdr.type);
-- 
2.21.0



* [PATCHv2 4/6] nvme: Introduce nvme_reset_continue
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
                   ` (2 preceding siblings ...)
  2019-09-19 21:34 ` [PATCHv2 3/6] nvme: Restart request timers in resetting state kbusch
@ 2019-09-19 21:34 ` kbusch
  2019-09-20 18:15   ` Sagi Grimberg
                     ` (2 more replies)
  2019-09-19 21:34 ` [PATCHv2 5/6] nvme: Prevent resets during paused states kbusch
                   ` (3 subsequent siblings)
  7 siblings, 3 replies; 41+ messages in thread
From: kbusch @ 2019-09-19 21:34 UTC (permalink / raw)
  To: linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, Edmund Nadolski, James Smart

From: Keith Busch <kbusch@kernel.org>

There are times we need to reset the controller in two parts, where the
initialization happens in a different context than the disabling. We
need to be able to set the state to RESETTING first in order to prevent
another reset from being scheduled while performing a critical section
that handles the disable part. Introduce a new helper function,
nvme_reset_continue(), that can be called from such contexts after
they've set the controller state to RESETTING.
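
The split is easiest to see in a small model (illustrative only, not the
kernel implementation): the one-shot path claims the state and then runs
the continuation, while the continuation alone only schedules the work
for a caller that has already claimed RESETTING.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Illustrative model of the split reset (not the kernel implementation):
 * reset_ctrl() mirrors nvme_reset_ctrl(), and reset_continue() mirrors
 * the new nvme_reset_continue().
 */
enum reset_state { RS_LIVE, RS_RESETTING };

struct reset_ctrl {
	enum reset_state state;
	bool reset_work_queued;
};

/* Phase two: schedule reset_work; the state must already be claimed. */
static int reset_continue(struct reset_ctrl *c)
{
	if (c->state != RS_RESETTING)
		return -EBUSY;
	c->reset_work_queued = true;
	return 0;
}

/* The one-shot path: claim RESETTING, then run phase two. */
static int reset_ctrl(struct reset_ctrl *c)
{
	if (c->state != RS_LIVE)
		return -EBUSY;
	c->state = RS_RESETTING;
	return reset_continue(c);
}
```

A caller doing a two-part reset claims the state itself, disables the
controller, and only then calls the continuation.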

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/core.c | 12 ++++++++++--
 drivers/nvme/host/nvme.h |  1 +
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e03b1765fc01..95b74d020506 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -117,14 +117,22 @@ static void nvme_queue_scan(struct nvme_ctrl *ctrl)
 		queue_work(nvme_wq, &ctrl->scan_work);
 }
 
-int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
+int nvme_reset_continue(struct nvme_ctrl *ctrl)
 {
-	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
+	if (ctrl->state != NVME_CTRL_RESETTING)
 		return -EBUSY;
 	if (!queue_work(nvme_reset_wq, &ctrl->reset_work))
 		return -EBUSY;
 	return 0;
 }
+EXPORT_SYMBOL_GPL(nvme_reset_continue);
+
+int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
+{
+	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
+		return -EBUSY;
+	return nvme_reset_continue(ctrl);
+}
 EXPORT_SYMBOL_GPL(nvme_reset_ctrl);
 
 int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 55ffae820c8f..3ad16a81370b 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -497,6 +497,7 @@ int nvme_get_features(struct nvme_ctrl *dev, unsigned int fid,
 int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count);
 void nvme_stop_keep_alive(struct nvme_ctrl *ctrl);
 int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
+int nvme_reset_continue(struct nvme_ctrl *ctrl);
 int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl);
 int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
 
-- 
2.21.0



* [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
                   ` (3 preceding siblings ...)
  2019-09-19 21:34 ` [PATCHv2 4/6] nvme: Introduce nvme_reset_continue kbusch
@ 2019-09-19 21:34 ` kbusch
  2019-09-20 18:03   ` Sagi Grimberg
                     ` (2 more replies)
  2019-09-19 21:34 ` [PATCHv2 6/6] nvme: Wait for reset state when required kbusch
                   ` (2 subsequent siblings)
  7 siblings, 3 replies; 41+ messages in thread
From: kbusch @ 2019-09-19 21:34 UTC (permalink / raw)
  To: linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, Edmund Nadolski, James Smart

From: Keith Busch <kbusch@kernel.org>

A paused controller is doing critical internal activation work in the
background. Block any controller reset from occurring during this period
by setting the controller state to RESETTING first.

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/core.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 95b74d020506..33b5729763c2 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
 		if (time_after(jiffies, fw_act_timeout)) {
 			dev_warn(ctrl->device,
 				"Fw activation timeout, reset controller\n");
-			nvme_reset_ctrl(ctrl);
-			break;
+			nvme_reset_continue(ctrl);
+			return;
 		}
 		msleep(100);
 	}
 
-	if (ctrl->state != NVME_CTRL_LIVE)
+	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
 		return;
-
 	nvme_start_queues(ctrl);
 	/* read FW slot information to clear the AER */
 	nvme_get_fw_slot_info(ctrl);
@@ -3768,7 +3767,8 @@ static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
 		nvme_queue_scan(ctrl);
 		break;
 	case NVME_AER_NOTICE_FW_ACT_STARTING:
-		queue_work(nvme_wq, &ctrl->fw_act_work);
+		if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
+			queue_work(nvme_wq, &ctrl->fw_act_work);
 		break;
 #ifdef CONFIG_NVME_MULTIPATH
 	case NVME_AER_NOTICE_ANA:
-- 
2.21.0



* [PATCHv2 6/6] nvme: Wait for reset state when required
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
                   ` (4 preceding siblings ...)
  2019-09-19 21:34 ` [PATCHv2 5/6] nvme: Prevent resets during paused states kbusch
@ 2019-09-19 21:34 ` kbusch
  2019-09-20 18:13   ` Sagi Grimberg
  2019-09-27 21:43   ` Christoph Hellwig
  2019-09-20 17:51 ` [PATCHv2 0/6] nvme: double reset prevention Sagi Grimberg
  2019-09-23 22:09 ` Sagi Grimberg
  7 siblings, 2 replies; 41+ messages in thread
From: kbusch @ 2019-09-19 21:34 UTC (permalink / raw)
  To: linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, Edmund Nadolski, James Smart

From: Keith Busch <kbusch@kernel.org>

Provide a method to block until we've successfully entered the
RESETTING state, so that we can ensure disabling the controller cannot
be interrupted by another one of the various controller reset paths.
Otherwise, these simultaneous controller disabling/enabling tasks may
interfere with each other and leave the controller in the wrong state.
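
Whether such a wait can ever succeed reduces to classifying states as
transient or terminal. A minimal model of that classification
(illustrative only; the kernel's nvme_wait_reset() sleeps on a
waitqueue rather than polling):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the transient/terminal split that decides whether waiting
 * for the RESETTING state can ever succeed. Names mirror the kernel's
 * states, but this is not the kernel code.
 */
enum wait_state {
	WS_NEW, WS_LIVE, WS_RESETTING, WS_CONNECTING, WS_DELETING, WS_DEAD
};

/*
 * A sink state can never transition back to LIVE, so a waiter must
 * give up rather than block forever.
 */
static bool state_transient(enum wait_state s)
{
	switch (s) {
	case WS_NEW:
	case WS_LIVE:
	case WS_RESETTING:
	case WS_CONNECTING:
		return true;
	default:
		return false;
	}
}
```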

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/core.c | 36 ++++++++++++++++++++++++++++++-
 drivers/nvme/host/nvme.h |  4 ++++
 drivers/nvme/host/pci.c  | 46 +++++++++++++++++++++++++++-------------
 3 files changed, 70 insertions(+), 16 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 33b5729763c2..3c75459c28bb 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -374,8 +374,10 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
 		break;
 	}
 
-	if (changed)
+	if (changed) {
 		ctrl->state = new_state;
+		wake_up_all(&ctrl->state_wq);
+	}
 
 	spin_unlock_irqrestore(&ctrl->lock, flags);
 	if (changed && ctrl->state == NVME_CTRL_LIVE)
@@ -384,6 +386,37 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
 }
 EXPORT_SYMBOL_GPL(nvme_change_ctrl_state);
 
+/*
+ * Returns false for sink states that can't ever transition back to live.
+ */
+static bool nvme_state_transient(struct nvme_ctrl *ctrl)
+{
+	switch (ctrl->state) {
+	case NVME_CTRL_NEW:
+	case NVME_CTRL_LIVE:
+	case NVME_CTRL_RESETTING:
+	case NVME_CTRL_CONNECTING:
+		return true;
+	case NVME_CTRL_DELETING:
+	case NVME_CTRL_DEAD:
+	default:
+		return false;
+	}
+}
+
+/*
+ * Waits for the controller state to be resetting, or returns false if it is
+ * not possible to ever transition to that state.
+ */
+bool nvme_wait_reset(struct nvme_ctrl *ctrl)
+{
+	wait_event(ctrl->state_wq,
+		   nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING) ||
+		   !nvme_state_transient(ctrl));
+	return ctrl->state == NVME_CTRL_RESETTING;
+}
+EXPORT_SYMBOL_GPL(nvme_wait_reset);
+
 static void nvme_free_ns_head(struct kref *ref)
 {
 	struct nvme_ns_head *head =
@@ -3891,6 +3924,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
 	INIT_WORK(&ctrl->async_event_work, nvme_async_event_work);
 	INIT_WORK(&ctrl->fw_act_work, nvme_fw_act_work);
 	INIT_WORK(&ctrl->delete_work, nvme_delete_ctrl_work);
+	init_waitqueue_head(&ctrl->state_wq);
 
 	INIT_DELAYED_WORK(&ctrl->ka_work, nvme_keep_alive_work);
 	memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 3ad16a81370b..43ac1244c4f7 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -15,6 +15,7 @@
 #include <linux/sed-opal.h>
 #include <linux/fault-inject.h>
 #include <linux/rcupdate.h>
+#include <linux/wait.h>
 
 #include <trace/events/block.h>
 
@@ -198,6 +199,7 @@ struct nvme_ctrl {
 	struct cdev cdev;
 	struct work_struct reset_work;
 	struct work_struct delete_work;
+	wait_queue_head_t state_wq;
 
 	struct nvme_subsystem *subsys;
 	struct list_head subsys_entry;
@@ -448,6 +450,7 @@ void nvme_complete_rq(struct request *req);
 bool nvme_cancel_request(struct request *req, void *data, bool reserved);
 bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
 		enum nvme_ctrl_state new_state);
+bool nvme_wait_reset(struct nvme_ctrl *ctrl);
 int nvme_disable_ctrl(struct nvme_ctrl *ctrl);
 int nvme_enable_ctrl(struct nvme_ctrl *ctrl);
 int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 9a9e90efcc95..41eb89bf0be3 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2461,6 +2461,14 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	mutex_unlock(&dev->shutdown_lock);
 }
 
+static int nvme_disable_prepare_reset(struct nvme_dev *dev, bool shutdown)
+{
+	if (!nvme_wait_reset(&dev->ctrl))
+		return -EBUSY;
+	nvme_dev_disable(dev, shutdown);
+	return 0;
+}
+
 static int nvme_setup_prp_pools(struct nvme_dev *dev)
 {
 	dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
@@ -2508,6 +2516,11 @@ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
 
 static void nvme_remove_dead_ctrl(struct nvme_dev *dev)
 {
+	/*
+	 * Set state to deleting now to avoid blocking nvme_wait_reset(), which
+	 * may be holding this pci_dev's device lock.
+	 */
+	nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
 	nvme_get_ctrl(&dev->ctrl);
 	nvme_dev_disable(dev, false);
 	nvme_kill_queues(&dev->ctrl);
@@ -2833,19 +2846,28 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 static void nvme_reset_prepare(struct pci_dev *pdev)
 {
 	struct nvme_dev *dev = pci_get_drvdata(pdev);
-	nvme_dev_disable(dev, false);
+
+	/*
+	 * We don't need to check the return value from waiting for the reset
+	 * state as pci_dev device lock is held, making it impossible to race
+	 * with ->remove().
+	 */
+	nvme_disable_prepare_reset(dev, false);
+	nvme_sync_queues(&dev->ctrl);
 }
 
 static void nvme_reset_done(struct pci_dev *pdev)
 {
 	struct nvme_dev *dev = pci_get_drvdata(pdev);
-	nvme_reset_ctrl_sync(&dev->ctrl);
+
+	if (!nvme_reset_continue(&dev->ctrl))
+		flush_work(&dev->ctrl.reset_work);
 }
 
 static void nvme_shutdown(struct pci_dev *pdev)
 {
 	struct nvme_dev *dev = pci_get_drvdata(pdev);
-	nvme_dev_disable(dev, true);
+	nvme_disable_prepare_reset(dev, true);
 }
 
 /*
@@ -2898,7 +2920,7 @@ static int nvme_resume(struct device *dev)
 
 	if (ndev->last_ps == U32_MAX ||
 	    nvme_set_power_state(ctrl, ndev->last_ps) != 0)
-		nvme_reset_ctrl(ctrl);
+		return nvme_reset_continue(&ndev->ctrl);
 	return 0;
 }
 
@@ -2926,10 +2948,8 @@ static int nvme_suspend(struct device *dev)
 	 */
 	if (pm_suspend_via_firmware() || !ctrl->npss ||
 	    !pcie_aspm_enabled(pdev) ||
-	    (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND)) {
-		nvme_dev_disable(ndev, true);
-		return 0;
-	}
+	    (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
+		return nvme_disable_prepare_reset(ndev, true);
 
 	nvme_start_freeze(ctrl);
 	nvme_wait_freeze(ctrl);
@@ -2951,9 +2971,8 @@ static int nvme_suspend(struct device *dev)
 		 * Clearing npss forces a controller reset on resume. The
 		 * correct value will be resdicovered then.
 		 */
-		nvme_dev_disable(ndev, true);
+		ret = nvme_disable_prepare_reset(ndev, true);
 		ctrl->npss = 0;
-		ret = 0;
 		goto unfreeze;
 	}
 	/*
@@ -2970,9 +2989,7 @@ static int nvme_suspend(struct device *dev)
 static int nvme_simple_suspend(struct device *dev)
 {
 	struct nvme_dev *ndev = pci_get_drvdata(to_pci_dev(dev));
-
-	nvme_dev_disable(ndev, true);
-	return 0;
+	return nvme_disable_prepare_reset(ndev, true);
 }
 
 static int nvme_simple_resume(struct device *dev)
@@ -2980,8 +2997,7 @@ static int nvme_simple_resume(struct device *dev)
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct nvme_dev *ndev = pci_get_drvdata(pdev);
 
-	nvme_reset_ctrl(&ndev->ctrl);
-	return 0;
+	return nvme_reset_continue(&ndev->ctrl);
 }
 
 static const struct dev_pm_ops nvme_dev_pm_ops = {
-- 
2.21.0



* Re: [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues
  2019-09-19 21:34 ` [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues kbusch
@ 2019-09-20 17:49   ` Sagi Grimberg
  2019-09-20 17:53     ` Keith Busch
  2019-09-27 21:33   ` Christoph Hellwig
  1 sibling, 1 reply; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 17:49 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart


> +static void nvme_free_tagset(struct nvme_dev *dev)
> +{
> +	if (dev->tagset.tags)
> +		blk_mq_free_tag_set(&dev->tagset);
> +	dev->ctrl.tagset = NULL;
> +}
> +

testing dev->tagset.tags and nulling dev->ctrl.tagset?
why not check dev->ctrl.tagset?


* Re: [PATCHv2 0/6] nvme: double reset prevention
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
                   ` (5 preceding siblings ...)
  2019-09-19 21:34 ` [PATCHv2 6/6] nvme: Wait for reset state when required kbusch
@ 2019-09-20 17:51 ` Sagi Grimberg
  2019-09-20 17:54   ` Keith Busch
  2019-09-23 22:09 ` Sagi Grimberg
  7 siblings, 1 reply; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 17:51 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart


> The main objective of this series is to prevent double resets. This sort
> of thing is known to have happened when a timeout occurs at roughly the
> same time as a user-initiated reset, such as through PCIe's FLR.
> 
> The double reset could happen because the controller disabling had been
> occurring outside of the RESETTING state, when we can't schedule the
> reset_work that is to occur later. When another reset schedules in
> between these events, the controller ends up in the wrong state.
> 
> The end result of this series is simply to block subsequent resets by
> initializing the controller state to RESETTING without actually
> scheduling the reset_work. The first patches in this series are simply

are simply what?

> 
> v1 -> v2:
> 
>    Merged up to current linux-block for-5.4/post
> 
>    Changelog updates
> 
>    Patch reordering for correct chronological sequence
> 
>    Fixed state handling for f/w activation failures
> 
>    Included a review from James Smart on the patches that weren't changed
> 
> Keith Busch (6):
>    nvme-pci: Free tagset if no IO queues
>    nvme: Remove ADMIN_ONLY state
>    nvme: Restart request timers in resetting state
>    nvme: Introduce nvme_reset_continue
>    nvme: Prevent resets during paused states
>    nvme: Wait for reset state when required
> 
>   drivers/nvme/host/core.c    | 80 ++++++++++++++++++++++++-------------
>   drivers/nvme/host/fabrics.h |  3 +-
>   drivers/nvme/host/nvme.h    |  6 ++-
>   drivers/nvme/host/pci.c     | 78 +++++++++++++++++++++---------------
>   drivers/nvme/host/rdma.c    |  8 ++++
>   drivers/nvme/host/tcp.c     |  8 ++++
>   6 files changed, 121 insertions(+), 62 deletions(-)
> 


* Re: [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state
  2019-09-19 21:34 ` [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state kbusch
@ 2019-09-20 17:53   ` Sagi Grimberg
  2019-09-27 21:35   ` Christoph Hellwig
  1 sibling, 0 replies; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 17:53 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart

Looks fine to me,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCHv2 3/6] nvme: Restart request timers in resetting state
  2019-09-19 21:34 ` [PATCHv2 3/6] nvme: Restart request timers in resetting state kbusch
@ 2019-09-20 17:53   ` Sagi Grimberg
  2019-09-27 21:36   ` Christoph Hellwig
  1 sibling, 0 replies; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 17:53 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues
  2019-09-20 17:49   ` Sagi Grimberg
@ 2019-09-20 17:53     ` Keith Busch
  2019-09-20 17:56       ` Sagi Grimberg
  0 siblings, 1 reply; 41+ messages in thread
From: Keith Busch @ 2019-09-20 17:53 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Fri, Sep 20, 2019 at 10:49:11AM -0700, Sagi Grimberg wrote:
> 
> > +static void nvme_free_tagset(struct nvme_dev *dev)
> > +{
> > +	if (dev->tagset.tags)
> > +		blk_mq_free_tag_set(&dev->tagset);
> > +	dev->ctrl.tagset = NULL;
> > +}
> > +
> 
> testing dev->tagset.tags and nulling dev->ctrl.tagset?
> why not check dev->ctrl.tagset?

We could theoretically use either. I don't see any case where we've
allocated dev->tagset.tags but didn't set dev->ctrl.tagset. I just kept
the same check we had before and moved it into a helper function.
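The guard equivalence described above can be sketched in a standalone analog. To be clear, the structures and the `blk_mq_free_tag_set_stub` below are illustrative stand-ins, not the driver's real types:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel structures, not the real ones. */
struct tag_set { void *tags; };
struct ctrl    { struct tag_set *tagset; };
struct dev     { struct tag_set tagset; struct ctrl ctrl; };

static int freed;
static void blk_mq_free_tag_set_stub(struct tag_set *set)
{
	set->tags = NULL;
	freed = 1;
}

/* Mirrors the shape of nvme_free_tagset(): guard on dev->tagset.tags,
 * then clear the ctrl-level pointer. Guarding on dev->ctrl.tagset would
 * behave the same here, since the two are set together. */
static void free_tagset(struct dev *dev)
{
	if (dev->tagset.tags)
		blk_mq_free_tag_set_stub(&dev->tagset);
	dev->ctrl.tagset = NULL;
}
```

Either field works as the guard precisely because the allocation path sets both or neither; the helper keeps the historical check.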



* Re: [PATCHv2 0/6] nvme: double reset prevention
  2019-09-20 17:51 ` [PATCHv2 0/6] nvme: double reset prevention Sagi Grimberg
@ 2019-09-20 17:54   ` Keith Busch
  0 siblings, 0 replies; 41+ messages in thread
From: Keith Busch @ 2019-09-20 17:54 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Fri, Sep 20, 2019 at 10:51:52AM -0700, Sagi Grimberg wrote:
> 
> > The main objective of this series is to prevent double resets. This sort
> > of thing is known to have happened when a timeout occurs at roughly the
> > same time as a user-initiated reset, such as one triggered through PCIe's FLR.
> > 
> > The double reset could happen because the controller disabling had been
> > occurring outside of the RESETTING state when we can't schedule the
> > reset_work, which is to occur later. When another reset is scheduled in
> > between these events, the controller ends up in the wrong state.
> > 
> > The end result of this series is simply to block subsequent resets by
> > initializing the controller state to RESETTING without actually
> > scheduling the reset_work. The first patches in this series are simply
> 
> are simply what?

Oops, cut off my message.

Simply fixing issues discovered while testing the changes.


* Re: [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues
  2019-09-20 17:53     ` Keith Busch
@ 2019-09-20 17:56       ` Sagi Grimberg
  0 siblings, 0 replies; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 17:56 UTC (permalink / raw)
  To: Keith Busch; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart


>>> +static void nvme_free_tagset(struct nvme_dev *dev)
>>> +{
>>> +	if (dev->tagset.tags)
>>> +		blk_mq_free_tag_set(&dev->tagset);
>>> +	dev->ctrl.tagset = NULL;
>>> +}
>>> +
>>
>> testing dev->tagset.tags and nulling dev->ctrl.tagset?
>> why not check dev->ctrl.tagset?
> 
> We could theoretically use either. I don't see any case where we've
> allocated dev->tagset.tags but didn't set dev->ctrl.tagset. I just kept
> the same check we had before and moved it into a helper function.

Yeah, it just looks a bit strange now that it's in its own function.

Not a real issue though...

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-09-19 21:34 ` [PATCHv2 5/6] nvme: Prevent resets during paused states kbusch
@ 2019-09-20 18:03   ` Sagi Grimberg
  2019-09-20 18:05   ` Sagi Grimberg
  2019-09-27 21:41   ` Christoph Hellwig
  2 siblings, 0 replies; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 18:03 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart



On 9/19/19 2:34 PM, kbusch@kernel.org wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> A paused controller is doing critical internal activation work in the
> background. Block any controller reset from occurring during this period
> by setting the controller state to RESETTING first.
> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
>   drivers/nvme/host/core.c | 10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 95b74d020506..33b5729763c2 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
>   		if (time_after(jiffies, fw_act_timeout)) {
>   			dev_warn(ctrl->device,
>   				"Fw activation timeout, reset controller\n");
> -			nvme_reset_ctrl(ctrl);
> -			break;
> +			nvme_reset_continue(ctrl);
> +			return;
>   		}
>   		msleep(100);
>   	}
>   
> -	if (ctrl->state != NVME_CTRL_LIVE)
> +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
>   		return;
> -
>   	nvme_start_queues(ctrl);
>   	/* read FW slot information to clear the AER */
>   	nvme_get_fw_slot_info(ctrl);
> @@ -3768,7 +3767,8 @@ static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
>   		nvme_queue_scan(ctrl);
>   		break;
>   	case NVME_AER_NOTICE_FW_ACT_STARTING:
> -		queue_work(nvme_wq, &ctrl->fw_act_work);
> +		if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> +			queue_work(nvme_wq, &ctrl->fw_act_work);

Overall looks fine,
can we add a little comment:
"prevent others from resetting the controller during fw activation"

Otherwise,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-09-19 21:34 ` [PATCHv2 5/6] nvme: Prevent resets during paused states kbusch
  2019-09-20 18:03   ` Sagi Grimberg
@ 2019-09-20 18:05   ` Sagi Grimberg
  2019-09-20 18:08     ` Keith Busch
  2019-09-27 21:41   ` Christoph Hellwig
  2 siblings, 1 reply; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 18:05 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart


>   drivers/nvme/host/core.c | 10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 95b74d020506..33b5729763c2 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
>   		if (time_after(jiffies, fw_act_timeout)) {
>   			dev_warn(ctrl->device,
>   				"Fw activation timeout, reset controller\n");
> -			nvme_reset_ctrl(ctrl);
> -			break;
> +			nvme_reset_continue(ctrl);
> +			return;
>   		}
>   		msleep(100);
>   	}
>   
> -	if (ctrl->state != NVME_CTRL_LIVE)
> +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
>   		return;
> -
>   	nvme_start_queues(ctrl);
>   	/* read FW slot information to clear the AER */
>   	nvme_get_fw_slot_info(ctrl);
> @@ -3768,7 +3767,8 @@ static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
>   		nvme_queue_scan(ctrl);
>   		break;
>   	case NVME_AER_NOTICE_FW_ACT_STARTING:
> -		queue_work(nvme_wq, &ctrl->fw_act_work);
> +		if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> +			queue_work(nvme_wq, &ctrl->fw_act_work);

btw, what happens if someone just initiated a reset here when the event
is handled? just ignore the fw activation?


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-09-20 18:05   ` Sagi Grimberg
@ 2019-09-20 18:08     ` Keith Busch
  2019-09-20 18:14       ` Sagi Grimberg
  0 siblings, 1 reply; 41+ messages in thread
From: Keith Busch @ 2019-09-20 18:08 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Fri, Sep 20, 2019 at 11:05:22AM -0700, Sagi Grimberg wrote:
> 
> >   drivers/nvme/host/core.c | 10 +++++-----
> >   1 file changed, 5 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index 95b74d020506..33b5729763c2 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
> >   		if (time_after(jiffies, fw_act_timeout)) {
> >   			dev_warn(ctrl->device,
> >   				"Fw activation timeout, reset controller\n");
> > -			nvme_reset_ctrl(ctrl);
> > -			break;
> > +			nvme_reset_continue(ctrl);
> > +			return;
> >   		}
> >   		msleep(100);
> >   	}
> > -	if (ctrl->state != NVME_CTRL_LIVE)
> > +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
> >   		return;
> > -
> >   	nvme_start_queues(ctrl);
> >   	/* read FW slot information to clear the AER */
> >   	nvme_get_fw_slot_info(ctrl);
> > @@ -3768,7 +3767,8 @@ static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
> >   		nvme_queue_scan(ctrl);
> >   		break;
> >   	case NVME_AER_NOTICE_FW_ACT_STARTING:
> > -		queue_work(nvme_wq, &ctrl->fw_act_work);
> > +		if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> > +			queue_work(nvme_wq, &ctrl->fw_act_work);
> 
> btw, what happens if someone just initiated a reset here when the event
> is handled? just ignore the fw activation?

Yep, if someone reset the controller before the driver got to handle the
activation notice event, there's nothing for fw_act_work to do anymore
since CSTS.PP is not valid after toggling CC.EN.
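The timing window being discussed can be modeled without hardware. The following is a reduced userspace sketch of the polling structure of nvme_fw_act_work(); `csts_pp` and `budget` are illustrative stand-ins for the CSTS.PP register read and the jiffies-based fw_act_timeout:

```c
#include <assert.h>
#include <stdbool.h>

/* Outcome of the firmware-activation wait, loosely following
 * nvme_fw_act_work(): poll the paused bit until it clears or the
 * deadline is hit. */
enum fw_act_result { FW_ACT_DONE, FW_ACT_TIMEOUT };

/* csts_pp() stands in for reading the Processing Paused bit; budget
 * plays the role of the jiffies-based timeout. */
static enum fw_act_result fw_act_wait(bool (*csts_pp)(void), int budget)
{
	while (csts_pp()) {
		if (budget-- <= 0)
			return FW_ACT_TIMEOUT;	/* caller escalates to reset */
		/* real code: msleep(100); */
	}
	return FW_ACT_DONE;			/* caller moves ctrl back to LIVE */
}

/* Fake register for exercising the loop: PP reads true N times. */
static int pp_reads_left;
static bool fake_pp(void) { return pp_reads_left-- > 0; }
```

In the patched driver, the DONE path only moves the controller back to LIVE via nvme_change_ctrl_state(), and the TIMEOUT path continues the already-claimed reset; a reset that races in earlier invalidates the PP poll entirely, which is the case discussed above.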


* Re: [PATCHv2 6/6] nvme: Wait for reset state when required
  2019-09-19 21:34 ` [PATCHv2 6/6] nvme: Wait for reset state when required kbusch
@ 2019-09-20 18:13   ` Sagi Grimberg
  2019-09-20 18:26     ` Keith Busch
  2019-09-27 21:43   ` Christoph Hellwig
  1 sibling, 1 reply; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 18:13 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart



On 9/19/19 2:34 PM, kbusch@kernel.org wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> Provide a method to block until we've successfully entered the
> RESETTING state so that we can ensure disabling the controller can
> not be interrupted by another one of the various controller reset
> paths. Otherwise, these simultaneous controller disabling/enabling tasks
> may interfere with each other and leave the controller in the wrong state.
> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
>   drivers/nvme/host/core.c | 36 ++++++++++++++++++++++++++++++-
>   drivers/nvme/host/nvme.h |  4 ++++
>   drivers/nvme/host/pci.c  | 46 +++++++++++++++++++++++++++-------------
>   3 files changed, 70 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 33b5729763c2..3c75459c28bb 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -374,8 +374,10 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
>   		break;
>   	}
>   
> -	if (changed)
> +	if (changed) {
>   		ctrl->state = new_state;
> +		wake_up_all(&ctrl->state_wq);
> +	}
>   
>   	spin_unlock_irqrestore(&ctrl->lock, flags);
>   	if (changed && ctrl->state == NVME_CTRL_LIVE)
> @@ -384,6 +386,37 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
>   }
>   EXPORT_SYMBOL_GPL(nvme_change_ctrl_state);
>   
> +/*
> + * Returns false for sink states that can't ever transition back to live.
> + */
> +static bool nvme_state_transient(struct nvme_ctrl *ctrl)
> +{
> +	switch (ctrl->state) {
> +	case NVME_CTRL_NEW:
> +	case NVME_CTRL_LIVE:
> +	case NVME_CTRL_RESETTING:
> +	case NVME_CTRL_CONNECTING:
> +		return true;
> +	case NVME_CTRL_DELETING:
> +	case NVME_CTRL_DEAD:
> +	default:
> +		return false;
> +	}
> +}
> +
> +/*
> + * Waits for the controller state to be resetting, or returns false if it is
> + * not possible to ever transition to that state.
> + */
> +bool nvme_wait_reset(struct nvme_ctrl *ctrl)
> +{
> +	wait_event(ctrl->state_wq,
> +		   nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING) ||
> +		   !nvme_state_transient(ctrl));
> +	return ctrl->state == NVME_CTRL_RESETTING;
> +}
> +EXPORT_SYMBOL_GPL(nvme_wait_reset);
> +
>   static void nvme_free_ns_head(struct kref *ref)
>   {
>   	struct nvme_ns_head *head =
> @@ -3891,6 +3924,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
>   	INIT_WORK(&ctrl->async_event_work, nvme_async_event_work);
>   	INIT_WORK(&ctrl->fw_act_work, nvme_fw_act_work);
>   	INIT_WORK(&ctrl->delete_work, nvme_delete_ctrl_work);
> +	init_waitqueue_head(&ctrl->state_wq);
>   
>   	INIT_DELAYED_WORK(&ctrl->ka_work, nvme_keep_alive_work);
>   	memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 3ad16a81370b..43ac1244c4f7 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -15,6 +15,7 @@
>   #include <linux/sed-opal.h>
>   #include <linux/fault-inject.h>
>   #include <linux/rcupdate.h>
> +#include <linux/wait.h>
>   
>   #include <trace/events/block.h>
>   
> @@ -198,6 +199,7 @@ struct nvme_ctrl {
>   	struct cdev cdev;
>   	struct work_struct reset_work;
>   	struct work_struct delete_work;
> +	wait_queue_head_t state_wq;
>   
>   	struct nvme_subsystem *subsys;
>   	struct list_head subsys_entry;
> @@ -448,6 +450,7 @@ void nvme_complete_rq(struct request *req);
>   bool nvme_cancel_request(struct request *req, void *data, bool reserved);
>   bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
>   		enum nvme_ctrl_state new_state);
> +bool nvme_wait_reset(struct nvme_ctrl *ctrl);
>   int nvme_disable_ctrl(struct nvme_ctrl *ctrl);
>   int nvme_enable_ctrl(struct nvme_ctrl *ctrl);
>   int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl);
> @@ -499,6 +502,7 @@ void nvme_stop_keep_alive(struct nvme_ctrl *ctrl);
>   int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
>   int nvme_reset_continue(struct nvme_ctrl *ctrl);
>   int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl);
> +int nvme_reset_continue(struct nvme_ctrl *ctrl);
>   int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
>   
>   int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 9a9e90efcc95..41eb89bf0be3 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2461,6 +2461,14 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>   	mutex_unlock(&dev->shutdown_lock);
>   }
>   
> +static int nvme_disable_prepare_reset(struct nvme_dev *dev, bool shutdown)
> +{
> +	if (!nvme_wait_reset(&dev->ctrl))
> +		return -EBUSY;
> +	nvme_dev_disable(dev, shutdown);
> +	return 0;
> +}
> +
>   static int nvme_setup_prp_pools(struct nvme_dev *dev)
>   {
>   	dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
> @@ -2508,6 +2516,11 @@ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
>   
>   static void nvme_remove_dead_ctrl(struct nvme_dev *dev)
>   {
> +	/*
> +	 * Set state to deleting now to avoid blocking nvme_wait_reset(), which
> +	 * may be holding this pci_dev's device lock.
> +	 */
> +	nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
>   	nvme_get_ctrl(&dev->ctrl);
>   	nvme_dev_disable(dev, false);
>   	nvme_kill_queues(&dev->ctrl);
> @@ -2833,19 +2846,28 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>   static void nvme_reset_prepare(struct pci_dev *pdev)
>   {
>   	struct nvme_dev *dev = pci_get_drvdata(pdev);
> -	nvme_dev_disable(dev, false);
> +
> +	/*
> +	 * We don't need to check the return value from waiting for the reset
> +	 * state as pci_dev device lock is held, making it impossible to race
> +	 * with ->remove().
> +	 */
> +	nvme_disable_prepare_reset(dev, false);
> +	nvme_sync_queues(&dev->ctrl);
>   }
>   
>   static void nvme_reset_done(struct pci_dev *pdev)
>   {
>   	struct nvme_dev *dev = pci_get_drvdata(pdev);
> -	nvme_reset_ctrl_sync(&dev->ctrl);
> +
> +	if (!nvme_reset_continue(&dev->ctrl))
> +		flush_work(&dev->ctrl.reset_work);

Is nvme_reset_continue allowed to fail here?
As I see it, the flush must complete regardless doesn't it?

>   }
>   
>   static void nvme_shutdown(struct pci_dev *pdev)
>   {
>   	struct nvme_dev *dev = pci_get_drvdata(pdev);
> -	nvme_dev_disable(dev, true);
> +	nvme_disable_prepare_reset(dev, true);
>   }
>   
>   /*
> @@ -2898,7 +2920,7 @@ static int nvme_resume(struct device *dev)
>   
>   	if (ndev->last_ps == U32_MAX ||
>   	    nvme_set_power_state(ctrl, ndev->last_ps) != 0)
> -		nvme_reset_ctrl(ctrl);
> +		return nvme_reset_continue(&ndev->ctrl);

Is the controller allowed not to be in RESETTING in this stage?
do we need a WARN_ON if it is?

>   	return 0;
>   }
>   
> @@ -2926,10 +2948,8 @@ static int nvme_suspend(struct device *dev)
>   	 */
>   	if (pm_suspend_via_firmware() || !ctrl->npss ||
>   	    !pcie_aspm_enabled(pdev) ||
> -	    (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND)) {
> -		nvme_dev_disable(ndev, true);
> -		return 0;
> -	}
> +	    (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
> +		return nvme_disable_prepare_reset(ndev, true);
>   
>   	nvme_start_freeze(ctrl);
>   	nvme_wait_freeze(ctrl);
> @@ -2951,9 +2971,8 @@ static int nvme_suspend(struct device *dev)
>   		 * Clearing npss forces a controller reset on resume. The
>   		 * correct value will be rediscovered then.
>   		 */
> -		nvme_dev_disable(ndev, true);
> +		ret = nvme_disable_prepare_reset(ndev, true);
>   		ctrl->npss = 0;
> -		ret = 0;
>   		goto unfreeze;
>   	}
>   	/*
> @@ -2970,9 +2989,7 @@ static int nvme_suspend(struct device *dev)
>   static int nvme_simple_suspend(struct device *dev)
>   {
>   	struct nvme_dev *ndev = pci_get_drvdata(to_pci_dev(dev));
> -
> -	nvme_dev_disable(ndev, true);
> -	return 0;
> +	return nvme_disable_prepare_reset(ndev, true);
>   }
>   
>   static int nvme_simple_resume(struct device *dev)
> @@ -2980,8 +2997,7 @@ static int nvme_simple_resume(struct device *dev)
>   	struct pci_dev *pdev = to_pci_dev(dev);
>   	struct nvme_dev *ndev = pci_get_drvdata(pdev);
>   
> -	nvme_reset_ctrl(&ndev->ctrl);
> -	return 0;
> +	return nvme_reset_continue(&ndev->ctrl);

Same here...


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-09-20 18:08     ` Keith Busch
@ 2019-09-20 18:14       ` Sagi Grimberg
  0 siblings, 0 replies; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 18:14 UTC (permalink / raw)
  To: Keith Busch; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart


>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>> index 95b74d020506..33b5729763c2 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
>>>    		if (time_after(jiffies, fw_act_timeout)) {
>>>    			dev_warn(ctrl->device,
>>>    				"Fw activation timeout, reset controller\n");
>>> -			nvme_reset_ctrl(ctrl);
>>> -			break;
>>> +			nvme_reset_continue(ctrl);
>>> +			return;
>>>    		}
>>>    		msleep(100);
>>>    	}
>>> -	if (ctrl->state != NVME_CTRL_LIVE)
>>> +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
>>>    		return;
>>> -
>>>    	nvme_start_queues(ctrl);
>>>    	/* read FW slot information to clear the AER */
>>>    	nvme_get_fw_slot_info(ctrl);
>>> @@ -3768,7 +3767,8 @@ static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
>>>    		nvme_queue_scan(ctrl);
>>>    		break;
>>>    	case NVME_AER_NOTICE_FW_ACT_STARTING:
>>> -		queue_work(nvme_wq, &ctrl->fw_act_work);
>>> +		if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
>>> +			queue_work(nvme_wq, &ctrl->fw_act_work);
>>
>> btw, what happens if someone just initiated a reset here when the event
>> is handled? just ignore the fw activation?
> 
> Yep, if someone reset the controller before the driver got to handle the
> activation notice event, there's nothing for fw_act_work to do anymore
> since CSTS.PP is not valid after toggling CC.EN.

Strictly speaking, this is not testing CC.EN, and it is possible that it
has not been toggled yet. But I get what you are saying.


* Re: [PATCHv2 4/6] nvme: Introduce nvme_reset_continue
  2019-09-19 21:34 ` [PATCHv2 4/6] nvme: Introduce nvme_reset_continue kbusch
@ 2019-09-20 18:15   ` Sagi Grimberg
  2019-09-27 21:37   ` Christoph Hellwig
  2019-09-27 21:44   ` Christoph Hellwig
  2 siblings, 0 replies; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 18:15 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart


> ---
>   drivers/nvme/host/core.c | 12 ++++++++++--
>   drivers/nvme/host/nvme.h |  1 +
>   2 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index e03b1765fc01..95b74d020506 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -117,14 +117,22 @@ static void nvme_queue_scan(struct nvme_ctrl *ctrl)
>   		queue_work(nvme_wq, &ctrl->scan_work);
>   }
>   
> -int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
> +int nvme_reset_continue(struct nvme_ctrl *ctrl)
>   {
> -	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> +	if (ctrl->state != NVME_CTRL_RESETTING)
>   		return -EBUSY;
>   	if (!queue_work(nvme_reset_wq, &ctrl->reset_work))
>   		return -EBUSY;
>   	return 0;
>   }
> +EXPORT_SYMBOL_GPL(nvme_reset_continue);

Would it be useful for you to have an nvme_reset_continue_sync variant?


* Re: [PATCHv2 6/6] nvme: Wait for reset state when required
  2019-09-20 18:13   ` Sagi Grimberg
@ 2019-09-20 18:26     ` Keith Busch
  2019-09-20 19:29       ` Keith Busch
  0 siblings, 1 reply; 41+ messages in thread
From: Keith Busch @ 2019-09-20 18:26 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Fri, Sep 20, 2019 at 11:13:06AM -0700, Sagi Grimberg wrote:
> On 9/19/19 2:34 PM, kbusch@kernel.org wrote:
> >   static void nvme_reset_done(struct pci_dev *pdev)
> >   {
> >   	struct nvme_dev *dev = pci_get_drvdata(pdev);
> > -	nvme_reset_ctrl_sync(&dev->ctrl);
> > +
> > +	if (!nvme_reset_continue(&dev->ctrl))
> > +		flush_work(&dev->ctrl.reset_work);
> 
> Is nvme_reset_continue allowed to fail here?
> As I see it, the flush must complete regardless doesn't it?

I was thinking of a surprise hot-removal, but no, the pci driver is holding
the device lock across the prepare/done sequence, so I don't think it's
possible for this reset_continue() to fail to schedule.
 
> >   }
> >   static void nvme_shutdown(struct pci_dev *pdev)
> >   {
> >   	struct nvme_dev *dev = pci_get_drvdata(pdev);
> > -	nvme_dev_disable(dev, true);
> > +	nvme_disable_prepare_reset(dev, true);
> >   }
> >   /*
> > @@ -2898,7 +2920,7 @@ static int nvme_resume(struct device *dev)
> >   	if (ndev->last_ps == U32_MAX ||
> >   	    nvme_set_power_state(ctrl, ndev->last_ps) != 0)
> > -		nvme_reset_ctrl(ctrl);
> > +		return nvme_reset_continue(&ndev->ctrl);
> 
> Is the controller allowed not to be in RESETTING in this stage?
> do we need a WARN_ON if it is?

For this one, though, I think a hot removal can cause a failure, since the
device lock isn't held from suspend all the way through resume, so we
could be in the DELETING or DEAD state.
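The sink-state reasoning here can be reduced to a small sketch. This is not the driver's code: the transition table is heavily reduced, and `wait_reset_step()` models a single evaluation of the predicate that nvme_wait_reset() re-checks each time state_wq is woken:

```c
#include <assert.h>
#include <stdbool.h>

/* Controller states as in nvme.h after the ADMIN_ONLY removal. */
enum ctrl_state { ST_NEW, ST_LIVE, ST_RESETTING, ST_CONNECTING,
		  ST_DELETING, ST_DEAD };

static enum ctrl_state state = ST_LIVE;

/* Heavily reduced transition check: here RESETTING is reachable only
 * from NEW or LIVE; the real nvme_change_ctrl_state() table is larger. */
static bool change_state(enum ctrl_state new_state)
{
	if (new_state == ST_RESETTING &&
	    !(state == ST_NEW || state == ST_LIVE))
		return false;
	state = new_state;
	return true;
}

/* Mirrors nvme_state_transient(): DELETING and DEAD are sink states
 * that can never transition back to LIVE. */
static bool state_transient(enum ctrl_state s)
{
	return s == ST_NEW || s == ST_LIVE ||
	       s == ST_RESETTING || s == ST_CONNECTING;
}

/* One evaluation of the nvme_wait_reset() predicate: 1 = won the
 * transition, 0 = sink state so give up, -1 = keep sleeping on
 * state_wq until the next wake_up_all(). */
static int wait_reset_step(void)
{
	if (change_state(ST_RESETTING))
		return 1;
	if (!state_transient(state))
		return 0;
	return -1;
}
```

The hot-removal case above is the "give up" branch: once the controller lands in DELETING or DEAD, no amount of waiting gets a caller the RESETTING state.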


* Re: [PATCHv2 6/6] nvme: Wait for reset state when required
  2019-09-20 18:26     ` Keith Busch
@ 2019-09-20 19:29       ` Keith Busch
  2019-09-20 20:49         ` Sagi Grimberg
  0 siblings, 1 reply; 41+ messages in thread
From: Keith Busch @ 2019-09-20 19:29 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Fri, Sep 20, 2019 at 12:26:37PM -0600, Keith Busch wrote:
> On Fri, Sep 20, 2019 at 11:13:06AM -0700, Sagi Grimberg wrote:
> > On 9/19/19 2:34 PM, kbusch@kernel.org wrote:
> > >   static void nvme_reset_done(struct pci_dev *pdev)
> > >   {
> > >   	struct nvme_dev *dev = pci_get_drvdata(pdev);
> > > -	nvme_reset_ctrl_sync(&dev->ctrl);
> > > +
> > > +	if (!nvme_reset_continue(&dev->ctrl))
> > > +		flush_work(&dev->ctrl.reset_work);
> > 
> > Is nvme_reset_continue allowed to fail here?
> > As I see it, the flush must complete regardless doesn't it?
> 
> I was thinking a surprise hot-removal, but no, the pci driver is holding
> the device lock across the prepare/done sequence, so I don't think it's
> possible for this reset_continue() to fail to schedule.

Oh wait, there actually is a way, and I even accounted for it in this
patch, but just forgot about it. If we start an FLR while the controller
is CONNECTING, and if that CONNECTING fails, we'll transition to DELETING.
So yeah, reset_continue could fail.


* Re: [PATCHv2 6/6] nvme: Wait for reset state when required
  2019-09-20 19:29       ` Keith Busch
@ 2019-09-20 20:49         ` Sagi Grimberg
  2019-09-20 21:06           ` Keith Busch
  0 siblings, 1 reply; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-20 20:49 UTC (permalink / raw)
  To: Keith Busch; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart


>>>>    static void nvme_reset_done(struct pci_dev *pdev)
>>>>    {
>>>>    	struct nvme_dev *dev = pci_get_drvdata(pdev);
>>>> -	nvme_reset_ctrl_sync(&dev->ctrl);
>>>> +
>>>> +	if (!nvme_reset_continue(&dev->ctrl))
>>>> +		flush_work(&dev->ctrl.reset_work);
>>>
>>> Is nvme_reset_continue allowed to fail here?
>>> As I see it, the flush must complete regardless doesn't it?
>>
>> I was thinking a surprise hot-removal, but no, the pci driver is holding
>> the device lock across the prepare/done sequence, so I don't think it's
>> possible for this reset_continue() to fail to schedule.
> 
> Oh wait, there actually is a way, and I even accounted for it in this
> patch, but just forgot about it. If we start an FLR while the controller
> is CONNECTING, and if that CONNECTING fails, we'll transition to DELETING.
> So yeah, reset_continue could fail.

And nvme_remove() will flush the reset_work?


* Re: [PATCHv2 6/6] nvme: Wait for reset state when required
  2019-09-20 20:49         ` Sagi Grimberg
@ 2019-09-20 21:06           ` Keith Busch
  0 siblings, 0 replies; 41+ messages in thread
From: Keith Busch @ 2019-09-20 21:06 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Fri, Sep 20, 2019 at 01:49:57PM -0700, Sagi Grimberg wrote:
> 
> > > > >    static void nvme_reset_done(struct pci_dev *pdev)
> > > > >    {
> > > > >    	struct nvme_dev *dev = pci_get_drvdata(pdev);
> > > > > -	nvme_reset_ctrl_sync(&dev->ctrl);
> > > > > +
> > > > > +	if (!nvme_reset_continue(&dev->ctrl))
> > > > > +		flush_work(&dev->ctrl.reset_work);
> > > > 
> > > > Is nvme_reset_continue allowed to fail here?
> > > > As I see it, the flush must complete regardless doesn't it?
> > > 
> > > I was thinking a surprise hot-removal, but no, the pci driver is holding
> > > the device lock across the prepare/done sequence, so I don't think it's
> > > possible for this reset_continue() to fail to schedule.
> > 
> > Oh wait, there actually is a way, and I even accounted for it in this
> > patch, but just forgot about it. If we start an FLR while the controller
> > is CONNECTING, and if that CONNECTING fails, we'll transition to DELETING.
> > So yeah, reset_continue could fail.
> 
> And nvme_remove() will flush the reset_work?

If scheduled, yes, nvme_remove() flushes reset_work.
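The contract relied on here can be shown with a toy, synchronous single-item work queue. Unlike the kernel's flush_work(), which waits for a concurrently running handler to finish, this sketch simply runs the handler inline if it was scheduled, and the return conventions are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy single-item work queue illustrating the queue/flush contract:
 * flushing runs the handler iff the work was scheduled. */
struct work { bool pending; void (*fn)(void); };

static bool queue_work(struct work *w)
{
	if (w->pending)
		return false;		/* already scheduled, nothing to do */
	w->pending = true;
	return true;
}

static void flush_work(struct work *w)
{
	if (w->pending) {		/* never scheduled: flush is a no-op */
		w->pending = false;
		w->fn();		/* run the handler "to completion" */
	}
}

static int resets;
static void reset_work_fn(void) { resets++; }
```

This is why nvme_reset_done() only flushes after nvme_reset_continue() succeeds, and why nvme_remove() flushing an unscheduled reset_work is harmless.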


* Re: [PATCHv2 0/6] nvme: double reset prevention
  2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
                   ` (6 preceding siblings ...)
  2019-09-20 17:51 ` [PATCHv2 0/6] nvme: double reset prevention Sagi Grimberg
@ 2019-09-23 22:09 ` Sagi Grimberg
  2019-09-24 15:07   ` Keith Busch
  7 siblings, 1 reply; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-23 22:09 UTC (permalink / raw)
  To: kbusch, linux-nvme, Christoph Hellwig; +Cc: Edmund Nadolski, James Smart

Keith,

The series looks fine to me, do you think we need more eyes on
this or we should go ahead and apply this?


* Re: [PATCHv2 0/6] nvme: double reset prevention
  2019-09-23 22:09 ` Sagi Grimberg
@ 2019-09-24 15:07   ` Keith Busch
  2019-09-24 18:07     ` Sagi Grimberg
  0 siblings, 1 reply; 41+ messages in thread
From: Keith Busch @ 2019-09-24 15:07 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Mon, Sep 23, 2019 at 03:09:43PM -0700, Sagi Grimberg wrote:
> Keith,
> 
> The series looks fine to me, do you think we need more eyes on
> this or we should go ahead and apply this?

Sure, I think the series is okay as-is. It will clash with a fix from
Mario, though, so one of us will have to rebase.


* Re: [PATCHv2 0/6] nvme: double reset prevention
  2019-09-24 15:07   ` Keith Busch
@ 2019-09-24 18:07     ` Sagi Grimberg
  2019-09-24 18:12       ` Keith Busch
  0 siblings, 1 reply; 41+ messages in thread
From: Sagi Grimberg @ 2019-09-24 18:07 UTC (permalink / raw)
  To: Keith Busch; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart


>> Keith,
>>
>> The series looks fine to me, do you think we need more eyes on
>> this or we should go ahead and apply this?
> 
> Sure, I think the series is okay as-is. It will clash with a fix from
> Mario, though, so one of us will have to rebase.

Mario's patch is already on the nvme tree, care to respin?


* Re: [PATCHv2 0/6] nvme: double reset prevention
  2019-09-24 18:07     ` Sagi Grimberg
@ 2019-09-24 18:12       ` Keith Busch
  0 siblings, 0 replies; 41+ messages in thread
From: Keith Busch @ 2019-09-24 18:12 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Edmund Nadolski, Christoph Hellwig, linux-nvme, James Smart

On Tue, Sep 24, 2019 at 11:07:53AM -0700, Sagi Grimberg wrote:
> 
> > > Keith,
> > > 
> > > The series looks fine to me, do you think we need more eyes on
> > > this or we should go ahead and apply this?
> > 
> > Sure, I think the series is okay as-is. It will clash with a fix from
> > Mario, though, so one of us will have to rebase.
> 
> Mario's patch is already on the nvme tree, care to respin?

Sure, I'll resend. Probably won't be able to get to it today, though.


* Re: [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues
  2019-09-19 21:34 ` [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues kbusch
  2019-09-20 17:49   ` Sagi Grimberg
@ 2019-09-27 21:33   ` Christoph Hellwig
  1 sibling, 0 replies; 41+ messages in thread
From: Christoph Hellwig @ 2019-09-27 21:33 UTC (permalink / raw)
  To: kbusch
  Cc: James Smart, Edmund Nadolski, Sagi Grimberg, linux-nvme,
	Christoph Hellwig

On Fri, Sep 20, 2019 at 06:34:26AM +0900, kbusch@kernel.org wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> If a controller becomes degraded after a reset, we will not be able to
> perform any IO. We currently teardown previously created request
> queues and namespaces, but we had kept the unusable tagset. Free
> it after all queues using it have been released.
> 
> Reviewed-by: James Smart <james.smart@broadcom.com>
> Signed-off-by: Keith Busch <kbusch@kernel.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state
  2019-09-19 21:34 ` [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state kbusch
  2019-09-20 17:53   ` Sagi Grimberg
@ 2019-09-27 21:35   ` Christoph Hellwig
  1 sibling, 0 replies; 41+ messages in thread
From: Christoph Hellwig @ 2019-09-27 21:35 UTC (permalink / raw)
  To: kbusch
  Cc: James Smart, Edmund Nadolski, Sagi Grimberg, linux-nvme,
	Christoph Hellwig

On Fri, Sep 20, 2019 at 06:34:27AM +0900, kbusch@kernel.org wrote:
> @@ -3585,11 +3571,9 @@ static void nvme_scan_work(struct work_struct *work)
>  	struct nvme_id_ctrl *id;
>  	unsigned nn;
>  
> -	if (ctrl->state != NVME_CTRL_LIVE)
> +	if (ctrl->state != NVME_CTRL_LIVE || !ctrl->tagset)
>  		return;
>  
> -	WARN_ON_ONCE(!ctrl->tagset);
> -

It would be nice to throw in a comment here on why tagset can be NULL
here and what that means to make this a little easier to decode for
future hackers.  Except for that the patch looks fine to me.


* Re: [PATCHv2 3/6] nvme: Restart request timers in resetting state
  2019-09-19 21:34 ` [PATCHv2 3/6] nvme: Restart request timers in resetting state kbusch
  2019-09-20 17:53   ` Sagi Grimberg
@ 2019-09-27 21:36   ` Christoph Hellwig
  1 sibling, 0 replies; 41+ messages in thread
From: Christoph Hellwig @ 2019-09-27 21:36 UTC (permalink / raw)
  To: kbusch
  Cc: James Smart, Edmund Nadolski, Sagi Grimberg, linux-nvme,
	Christoph Hellwig

On Fri, Sep 20, 2019 at 06:34:28AM +0900, kbusch@kernel.org wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> A controller in the resetting state has not yet completed its recovery
> actions. The pci and fc transports were already doing this, so update
> the remaining transports to not attempt additional recovery in this
> state. Instead, just restart the request timer.

Looks sane.  You don't happen to have any good idea how we could lift
this kind of logic to common code?

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCHv2 4/6] nvme: Introduce nvme_reset_continue
  2019-09-19 21:34 ` [PATCHv2 4/6] nvme: Introduce nvme_reset_continue kbusch
  2019-09-20 18:15   ` Sagi Grimberg
@ 2019-09-27 21:37   ` Christoph Hellwig
  2019-09-27 21:44   ` Christoph Hellwig
  2 siblings, 0 replies; 41+ messages in thread
From: Christoph Hellwig @ 2019-09-27 21:37 UTC (permalink / raw)
  To: kbusch
  Cc: James Smart, Edmund Nadolski, Sagi Grimberg, linux-nvme,
	Christoph Hellwig

On Fri, Sep 20, 2019 at 06:34:29AM +0900, kbusch@kernel.org wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> There are times we need to reset the controller in two parts where the
> initialization happens in a different context than the disabling. We need
> to be able to set the state to RESETTING first in order to prevent another
> reset from scheduling while performing a critical section that handles
> the disable part. Introduce a new helper function, nvme_reset_continue()
> that can be called from such contexts after they've set the controller
> state to RESETTING.
> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
>  drivers/nvme/host/core.c | 12 ++++++++++--
>  drivers/nvme/host/nvme.h |  1 +
>  2 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index e03b1765fc01..95b74d020506 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -117,14 +117,22 @@ static void nvme_queue_scan(struct nvme_ctrl *ctrl)
>  		queue_work(nvme_wq, &ctrl->scan_work);
>  }
>  
> -int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
> +int nvme_reset_continue(struct nvme_ctrl *ctrl)
>  {
> -	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> +	if (ctrl->state != NVME_CTRL_RESETTING)
>  		return -EBUSY;
>  	if (!queue_work(nvme_reset_wq, &ctrl->reset_work))
>  		return -EBUSY;
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(nvme_reset_continue);
> +
> +int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
> +{
> +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> +		return -EBUSY;
> +	return nvme_reset_continue(ctrl);
> +}
>  EXPORT_SYMBOL_GPL(nvme_reset_ctrl);

I'd prefer to still open code the queue_work in nvme_reset_ctrl.
It is one extra line but a little easier to read.


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-09-19 21:34 ` [PATCHv2 5/6] nvme: Prevent resets during paused states kbusch
  2019-09-20 18:03   ` Sagi Grimberg
  2019-09-20 18:05   ` Sagi Grimberg
@ 2019-09-27 21:41   ` Christoph Hellwig
  2019-09-28  0:23     ` Keith Busch
  2 siblings, 1 reply; 41+ messages in thread
From: Christoph Hellwig @ 2019-09-27 21:41 UTC (permalink / raw)
  To: kbusch
  Cc: James Smart, Edmund Nadolski, Sagi Grimberg, linux-nvme,
	Christoph Hellwig

On Fri, Sep 20, 2019 at 06:34:30AM +0900, kbusch@kernel.org wrote:
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 95b74d020506..33b5729763c2 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
>  		if (time_after(jiffies, fw_act_timeout)) {
>  			dev_warn(ctrl->device,
>  				"Fw activation timeout, reset controller\n");
> -			nvme_reset_ctrl(ctrl);
> -			break;
> +			nvme_reset_continue(ctrl);
> +			return;

Hmm, I find the use of a function named nvme_reset_continue here
rather confusing, as there is no reset we are continuing.  Yes, we place
the controller in resetting state, but it isn't really a reset in the
traditional sense.  What is the reason against a separate paused state?


* Re: [PATCHv2 6/6] nvme: Wait for reset state when required
  2019-09-19 21:34 ` [PATCHv2 6/6] nvme: Wait for reset state when required kbusch
  2019-09-20 18:13   ` Sagi Grimberg
@ 2019-09-27 21:43   ` Christoph Hellwig
  1 sibling, 0 replies; 41+ messages in thread
From: Christoph Hellwig @ 2019-09-27 21:43 UTC (permalink / raw)
  To: kbusch
  Cc: James Smart, Edmund Nadolski, Sagi Grimberg, linux-nvme,
	Christoph Hellwig

> +/*
> + * Returns false for sink states that can't ever transition back to live.
> + */
> +static bool nvme_state_transient(struct nvme_ctrl *ctrl)

Can we invert the logic and name?

static bool nvme_state_terminal(struct nvme_ctrl *ctrl) or so?
Transient always makes me think of something with a very short
lifetime.


* Re: [PATCHv2 4/6] nvme: Introduce nvme_reset_continue
  2019-09-19 21:34 ` [PATCHv2 4/6] nvme: Introduce nvme_reset_continue kbusch
  2019-09-20 18:15   ` Sagi Grimberg
  2019-09-27 21:37   ` Christoph Hellwig
@ 2019-09-27 21:44   ` Christoph Hellwig
  2 siblings, 0 replies; 41+ messages in thread
From: Christoph Hellwig @ 2019-09-27 21:44 UTC (permalink / raw)
  To: kbusch
  Cc: James Smart, Edmund Nadolski, Sagi Grimberg, linux-nvme,
	Christoph Hellwig

> -int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
> +int nvme_reset_continue(struct nvme_ctrl *ctrl)
>  {
> -	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> +	if (ctrl->state != NVME_CTRL_RESETTING)
>  		return -EBUSY;
>  	if (!queue_work(nvme_reset_wq, &ctrl->reset_work))
>  		return -EBUSY;
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(nvme_reset_continue);

Also, aside from the other discussion items there doesn't seem to be a
point in exporting this function.


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-09-27 21:41   ` Christoph Hellwig
@ 2019-09-28  0:23     ` Keith Busch
       [not found]       ` <CGME20191010120317uscas1p1e4483ca19dbb0e550c413c18c5928537@uscas1p1.samsung.com>
  0 siblings, 1 reply; 41+ messages in thread
From: Keith Busch @ 2019-09-28  0:23 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Edmund Nadolski, Sagi Grimberg, linux-nvme, James Smart

On Fri, Sep 27, 2019 at 11:41:21PM +0200, Christoph Hellwig wrote:
> On Fri, Sep 20, 2019 at 06:34:30AM +0900, kbusch@kernel.org wrote:
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index 95b74d020506..33b5729763c2 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
> >  		if (time_after(jiffies, fw_act_timeout)) {
> >  			dev_warn(ctrl->device,
> >  				"Fw activation timeout, reset controller\n");
> > -			nvme_reset_ctrl(ctrl);
> > -			break;
> > +			nvme_reset_continue(ctrl);
> > +			return;
> 
> Hmm, I find the use of a function named nvme_reset_continue here
> rather confusing, as there is no reset we are continuing.  Yes, we place
> the controller in resetting state, but it isn't really a reset in the

A PAUSED state would have to copy the same transitions that RESETTING has. It
seems simpler to just collapse this with the existing RESETTING state, and we
also don't need a special way to schedule reset_work from it.


* RE: [PATCHv2 5/6] nvme: Prevent resets during paused states
       [not found]       ` <CGME20191010120317uscas1p1e4483ca19dbb0e550c413c18c5928537@uscas1p1.samsung.com>
@ 2019-10-10 12:03         ` Judy Brock
  2019-10-10 13:59           ` Keith Busch
  0 siblings, 1 reply; 41+ messages in thread
From: Judy Brock @ 2019-10-10 12:03 UTC (permalink / raw)
  To: Keith Busch, Christoph Hellwig
  Cc: Judy Brock, Edmund Nadolski, Sagi Grimberg, linux-nvme, James Smart

Hi Keith,

Any chance you can get this patch into the nvme tree soon?

We believe it may fix an issue we have seen. We are throwing a ton of stuff
at two controllers on a dual ported device at the same time including
multiple nested resets, nested FW Activations, nested Formats, DSMs, FW
Downloads, etc. We saw this in dmesg in one test run. We think Processing
Paused bit was probably not valid at the time as we don't have any record of
any FW Activates that went to either of the controllers and never finished.

kern  :warn  : [Wed Oct  2 14:21:06 2019] nvme nvme0: Fw activation timeout, reset controller
kern  :warn  : [Wed Oct  2 14:21:06 2019] nvme nvme1: Fw activation timeout, reset controller

thanks,

Judy

-----Original Message-----
From: Linux-nvme [mailto:linux-nvme-bounces@lists.infradead.org] On Behalf Of Keith Busch
Sent: Friday, September 27, 2019 5:24 PM
To: Christoph Hellwig
Cc: Edmund Nadolski; Sagi Grimberg; linux-nvme@lists.infradead.org; James Smart
Subject: Re: [PATCHv2 5/6] nvme: Prevent resets during paused states

On Fri, Sep 27, 2019 at 11:41:21PM +0200, Christoph Hellwig wrote:
> On Fri, Sep 20, 2019 at 06:34:30AM +0900, kbusch@kernel.org wrote:
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index 95b74d020506..33b5729763c2 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -3742,15 +3742,14 @@ static void nvme_fw_act_work(struct work_struct *work)
> >  		if (time_after(jiffies, fw_act_timeout)) {
> >  			dev_warn(ctrl->device,
> >  				"Fw activation timeout, reset controller\n");
> > -			nvme_reset_ctrl(ctrl);
> > -			break;
> > +			nvme_reset_continue(ctrl);
> > +			return;
> 
> Hmm, I find the use of a function named nvme_reset_continue here
> rather confusing, as there is no reset we are continuing.  Yes, we place
> the controller in resetting state, but it isn't really a reset in the

A PAUSED state would have to copy the same transitions that RESETTING has. It
seems simpler to just collapse this with the existing RESETTING state, and we
also don't need a special way to schedule reset_work from it.


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-10-10 12:03         ` Judy Brock
@ 2019-10-10 13:59           ` Keith Busch
  2019-10-10 14:03             ` Christoph Hellwig
  2019-10-11  0:51             ` Judy Brock
  0 siblings, 2 replies; 41+ messages in thread
From: Keith Busch @ 2019-10-10 13:59 UTC (permalink / raw)
  To: Judy Brock
  Cc: James Smart, Edmund Nadolski, Christoph Hellwig, linux-nvme,
	Sagi Grimberg

On Thu, Oct 10, 2019 at 12:03:16PM +0000, Judy Brock wrote:
> Hi Keith,
> 
> Any chance you can get this patch in to the nvme tree soon? 

I'll try to find some time to respin today to address some of the feedback.

Christoph, are you okay with my explanation for not introducing another ctrl
state? We could add a paused state, but I think it's a bit risky for no real
gain.
 
> We believe it may fix an issue we have seen. We are throwing a ton of stuff
> at two controllers on a dual ported device at the same time including
> multiple nested resets, nested FW Activations, nested Formats, DSMs, FW
> Downloads, etc. We saw this in dmesg in one test run. We think Processing
> Paused bit was probably not valid at the time as we don't have any record of
> any FW Activates that went to either of the controllers and never finished.

That is rather unusual, which means it is a good test to have! Would you
consider porting your case to blktests?

   https://github.com/osandov/blktests


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-10-10 13:59           ` Keith Busch
@ 2019-10-10 14:03             ` Christoph Hellwig
  2019-10-11  0:51             ` Judy Brock
  1 sibling, 0 replies; 41+ messages in thread
From: Christoph Hellwig @ 2019-10-10 14:03 UTC (permalink / raw)
  To: Keith Busch
  Cc: Sagi Grimberg, James Smart, linux-nvme, Edmund Nadolski,
	Judy Brock, Christoph Hellwig

On Thu, Oct 10, 2019 at 07:59:14AM -0600, Keith Busch wrote:
> Christoph, are you okay with my explanation for not introducing another ctrl
> state? We could add a paused state, but I think it's a bit risky for no real
> gain.

I don't particularly like it, but I also can't come up with a better
idea..


* RE: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-10-10 13:59           ` Keith Busch
  2019-10-10 14:03             ` Christoph Hellwig
@ 2019-10-11  0:51             ` Judy Brock
  2019-10-11 14:23               ` Keith Busch
  1 sibling, 1 reply; 41+ messages in thread
From: Judy Brock @ 2019-10-11  0:51 UTC (permalink / raw)
  To: Keith Busch
  Cc: Sagi Grimberg, James Smart, linux-nvme, Edmund Nadolski,
	Judy Brock, Christoph Hellwig

> That is rather unusual, which means it is a good test to have! Would you consider porting your case to blktests?

I'll find out if it is allowed. Where is "blktests", and is there a pointer to how cases are added?

Thanks,
Judy

-----Original Message-----
From: Linux-nvme [mailto:linux-nvme-bounces@lists.infradead.org] On Behalf Of Keith Busch
Sent: Thursday, October 10, 2019 6:59 AM
To: Judy Brock
Cc: James Smart; Edmund Nadolski; Christoph Hellwig; linux-nvme@lists.infradead.org; Sagi Grimberg
Subject: Re: [PATCHv2 5/6] nvme: Prevent resets during paused states

On Thu, Oct 10, 2019 at 12:03:16PM +0000, Judy Brock wrote:
> Hi Keith,
> 
> Any chance you can get this patch in to the nvme tree soon? 

I'll try to find some time to respin today to address some of the feedback.

Christoph, are you okay with my explanation for not introducing another ctrl
state? We could add a paused state, but I think it's a bit risky for no real
gain.
 
> We believe it may fix an issue we have seen. We are throwing a ton of stuff
> at two controllers on a dual ported device at the same time including
> multiple nested resets, nested FW Activations, nested Formats, DSMs, FW
> Downloads, etc. We saw this in dmesg in one test run. We think Processing
> Paused bit was probably not valid at the time as we don't have any record of
> any FW Activates that went to either of the controllers and never finished.

That is rather unusual, which means it is a good test to have! Would you
consider porting your case to blktests?

   https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_osandov_blktests&d=DwICAg&c=JfeWlBa6VbDyTXraMENjy_b_0yKWuqQ4qY-FPhxK4x8w-TfgRBDyeV4hVQQBEgL2&r=YJM_QPk2w1CRIo5NNBXnCXGzNnmIIfG_iTRs6chBf6s&m=Hs0mocSKTjH-thQeB1DSzzm-eTmZH3RXTCGZpRSBlWk&s=PPcqQ2ySml8pGvqNMVV9-KQc2mH-RbyN_3yqeV4yf9A&e= 


* Re: [PATCHv2 5/6] nvme: Prevent resets during paused states
  2019-10-11  0:51             ` Judy Brock
@ 2019-10-11 14:23               ` Keith Busch
  0 siblings, 0 replies; 41+ messages in thread
From: Keith Busch @ 2019-10-11 14:23 UTC (permalink / raw)
  To: Judy Brock
  Cc: Sagi Grimberg, Edmund Nadolski, James Smart, linux-nvme,
	Christoph Hellwig

On Fri, Oct 11, 2019 at 12:51:57AM +0000, Judy Brock wrote:
> > That is rather unusual, which means it is a good test to have! Would you consider porting your case to blktests?
> 
> I'll find out if it is allowed. Where are "blktests"  and is there a pointer to how cases are added?

It was supposed to be this link to GitHub, which I sent you before
your email server mangled it.
 
>    https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_osandov_blktests&d=DwICAg&c=JfeWlBa6VbDyTXraMENjy_b_0yKWuqQ4qY-FPhxK4x8w-TfgRBDyeV4hVQQBEgL2&r=YJM_QPk2w1CRIo5NNBXnCXGzNnmIIfG_iTRs6chBf6s&m=Hs0mocSKTjH-thQeB1DSzzm-eTmZH3RXTCGZpRSBlWk&s=PPcqQ2ySml8pGvqNMVV9-KQc2mH-RbyN_3yqeV4yf9A&e= 


end of thread, other threads:[~2019-10-11 14:23 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-19 21:34 [PATCHv2 0/6] nvme: double reset prevention kbusch
2019-09-19 21:34 ` [PATCHv2 1/6] nvme-pci: Free tagset if no IO queues kbusch
2019-09-20 17:49   ` Sagi Grimberg
2019-09-20 17:53     ` Keith Busch
2019-09-20 17:56       ` Sagi Grimberg
2019-09-27 21:33   ` Christoph Hellwig
2019-09-19 21:34 ` [PATCHv2 2/6] nvme: Remove ADMIN_ONLY state kbusch
2019-09-20 17:53   ` Sagi Grimberg
2019-09-27 21:35   ` Christoph Hellwig
2019-09-19 21:34 ` [PATCHv2 3/6] nvme: Restart request timers in resetting state kbusch
2019-09-20 17:53   ` Sagi Grimberg
2019-09-27 21:36   ` Christoph Hellwig
2019-09-19 21:34 ` [PATCHv2 4/6] nvme: Introduce nvme_reset_continue kbusch
2019-09-20 18:15   ` Sagi Grimberg
2019-09-27 21:37   ` Christoph Hellwig
2019-09-27 21:44   ` Christoph Hellwig
2019-09-19 21:34 ` [PATCHv2 5/6] nvme: Prevent resets during paused states kbusch
2019-09-20 18:03   ` Sagi Grimberg
2019-09-20 18:05   ` Sagi Grimberg
2019-09-20 18:08     ` Keith Busch
2019-09-20 18:14       ` Sagi Grimberg
2019-09-27 21:41   ` Christoph Hellwig
2019-09-28  0:23     ` Keith Busch
     [not found]       ` <CGME20191010120317uscas1p1e4483ca19dbb0e550c413c18c5928537@uscas1p1.samsung.com>
2019-10-10 12:03         ` Judy Brock
2019-10-10 13:59           ` Keith Busch
2019-10-10 14:03             ` Christoph Hellwig
2019-10-11  0:51             ` Judy Brock
2019-10-11 14:23               ` Keith Busch
2019-09-19 21:34 ` [PATCHv2 6/6] nvme: Wait for reset state when required kbusch
2019-09-20 18:13   ` Sagi Grimberg
2019-09-20 18:26     ` Keith Busch
2019-09-20 19:29       ` Keith Busch
2019-09-20 20:49         ` Sagi Grimberg
2019-09-20 21:06           ` Keith Busch
2019-09-27 21:43   ` Christoph Hellwig
2019-09-20 17:51 ` [PATCHv2 0/6] nvme: double reset prevention Sagi Grimberg
2019-09-20 17:54   ` Keith Busch
2019-09-23 22:09 ` Sagi Grimberg
2019-09-24 15:07   ` Keith Busch
2019-09-24 18:07     ` Sagi Grimberg
2019-09-24 18:12       ` Keith Busch
