* [PATCH v1 0/3] Asynchronous shutdown interface and example implementation
@ 2022-03-28 23:00 Tanjore Suresh
  2022-03-28 23:00 ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Tanjore Suresh
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Tanjore Suresh @ 2022-03-28 23:00 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas
  Cc: linux-kernel, linux-nvme, linux-pci, Tanjore Suresh

Problem:

Some of our machines are configured with many NVMe devices and
are validated for strict shutdown time requirements. Each NVMe
device plugged into the system typically takes about 4.5 secs
to shut down. A system with 16 such NVMe devices takes
approximately 80 secs to shut down and go through reboot.

The shutdown API defined at the bus level is synchronous.
Therefore, the more devices in the system, the longer it takes
to shut down. This shutdown time contributes significantly to
machine reboot time.

Solution:

This patch set proposes an asynchronous shutdown interface at the bus
level, modifies the core device shutdown routine to use the new
interface while maintaining backward compatibility with the existing
synchronous implementation (Patch 1 of 3), and extends the new
interface so that all PCIe-based devices can use asynchronous shutdown
semantics if necessary (Patch 2 of 3). The PCIe-level implementation
also works in a backward-compatible way, so existing driver
implementations keep working with the current synchronous semantics.
An example implementation shows how an NVMe device can exploit this
asynchronous shutdown interface (Patch 3 of 3).
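
To illustrate the intended driver-side split, here is a minimal
sketch (the foo_* driver and its helpers are hypothetical; only the
shutdown_pre/shutdown_post hooks come from this series):

static void foo_shutdown_pre(struct pci_dev *pdev)
{
	struct foo_dev *fd = pci_get_drvdata(pdev);

	/* Kick off the hardware shutdown and return immediately. */
	foo_start_hw_shutdown(fd);
}

static void foo_shutdown_post(struct pci_dev *pdev)
{
	struct foo_dev *fd = pci_get_drvdata(pdev);

	/* Block until the hardware reports shutdown complete. */
	foo_wait_hw_shutdown(fd);
}

static struct pci_driver foo_driver = {
	.name		= "foo",
	.id_table	= foo_id_table,
	.probe		= foo_probe,
	.remove		= foo_remove,
	.shutdown_pre	= foo_shutdown_pre,
	.shutdown_post	= foo_shutdown_post,
};

The core invokes every shutdown_pre back to back and only then runs
the shutdown_post waits, so the per-device delays overlap and total
shutdown time approaches that of the slowest device rather than the
sum over all devices.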

Tanjore Suresh (3):
  driver core: Support asynchronous driver shutdown
  PCI: Support asynchronous shutdown
  nvme: Add async shutdown support

 drivers/base/core.c        | 39 ++++++++++++++++++-
 drivers/nvme/host/core.c   | 28 +++++++++----
 drivers/nvme/host/nvme.h   |  8 ++++
 drivers/nvme/host/pci.c    | 80 ++++++++++++++++++++++++--------------
 drivers/pci/pci-driver.c   | 17 ++++++--
 include/linux/device/bus.h | 10 +++++
 include/linux/pci.h        |  2 +
 7 files changed, 144 insertions(+), 40 deletions(-)

-- 
2.35.1.1021.g381101b075-goog



* [PATCH v1 1/3] driver core: Support asynchronous driver shutdown
  2022-03-28 23:00 [PATCH v1 0/3] Asynchronous shutdown interface and example implementation Tanjore Suresh
@ 2022-03-28 23:00 ` Tanjore Suresh
  2022-03-28 23:00   ` [PATCH v1 2/3] PCI: Support asynchronous shutdown Tanjore Suresh
  2022-03-29  0:19   ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Oliver O'Halloran
  2022-03-29  5:26 ` [PATCH v1 0/3] Asynchronous shutdown interface and example implementation Greg Kroah-Hartman
  2022-03-30  2:07 ` Keith Busch
  2 siblings, 2 replies; 12+ messages in thread
From: Tanjore Suresh @ 2022-03-28 23:00 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas
  Cc: linux-kernel, linux-nvme, linux-pci, Tanjore Suresh

This changes the bus driver interface with additional entry points
to enable devices to implement asynchronous shutdown. The existing
synchronous interface to shutdown is unmodified and retained for
backward compatibility.

This changes the common device shutdown code to enable devices to
participate in asynchronous shutdown implementation.

Signed-off-by: Tanjore Suresh <tansuresh@google.com>
---
 drivers/base/core.c        | 39 +++++++++++++++++++++++++++++++++++++-
 include/linux/device/bus.h | 10 ++++++++++
 2 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index 3d6430eb0c6a..359e7067e8b8 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -4479,6 +4479,7 @@ EXPORT_SYMBOL_GPL(device_change_owner);
 void device_shutdown(void)
 {
 	struct device *dev, *parent;
+	LIST_HEAD(async_shutdown_list);
 
 	wait_for_device_probe();
 	device_block_probing();
@@ -4523,7 +4524,14 @@ void device_shutdown(void)
 				dev_info(dev, "shutdown_pre\n");
 			dev->class->shutdown_pre(dev);
 		}
-		if (dev->bus && dev->bus->shutdown) {
+
+		if (dev->bus && dev->bus->shutdown_pre) {
+			if (initcall_debug)
+				dev_info(dev, "shutdown_pre\n");
+			dev->bus->shutdown_pre(dev);
+			list_add(&dev->kobj.entry,
+				&async_shutdown_list);
+		} else if (dev->bus && dev->bus->shutdown) {
 			if (initcall_debug)
 				dev_info(dev, "shutdown\n");
 			dev->bus->shutdown(dev);
@@ -4543,6 +4551,35 @@ void device_shutdown(void)
 		spin_lock(&devices_kset->list_lock);
 	}
 	spin_unlock(&devices_kset->list_lock);
+
+	/*
+	 * Second pass: iterate only over devices that have opted in
+	 * to asynchronous shutdown.
+	 */
+	while (!list_empty(&async_shutdown_list)) {
+		dev = list_entry(async_shutdown_list.next, struct device,
+				kobj.entry);
+		parent = get_device(dev->parent);
+		get_device(dev);
+		/*
+		 * Make sure the device is off the list
+		 */
+		list_del_init(&dev->kobj.entry);
+		if (parent)
+			device_lock(parent);
+		device_lock(dev);
+		if (dev->bus && dev->bus->shutdown_post) {
+			if (initcall_debug)
+				dev_info(dev,
+				"shutdown_post\n");
+			dev->bus->shutdown_post(dev);
+		}
+		device_unlock(dev);
+		if (parent)
+			device_unlock(parent);
+		put_device(dev);
+		put_device(parent);
+	}
 }
 
 /*
diff --git a/include/linux/device/bus.h b/include/linux/device/bus.h
index a039ab809753..e261819601e9 100644
--- a/include/linux/device/bus.h
+++ b/include/linux/device/bus.h
@@ -49,6 +49,14 @@ struct fwnode_handle;
  *		will never get called until they do.
  * @remove:	Called when a device removed from this bus.
  * @shutdown:	Called at shut-down time to quiesce the device.
+ * @shutdown_pre:	Called at shutdown time to start the shutdown
+ *			process on the device. This entry point will be called
+ *			only when the bus driver has indicated it would like
+ *			to participate in asynchronous shutdown completion.
+ * @shutdown_post:	Called at shutdown time to complete the shutdown
+ *			process of the device. This entry point will be called
+ *			only when the bus driver has indicated it would like to
+ *			participate in the asynchronous shutdown completion.
  *
  * @online:	Called to put the device back online (after offlining it).
  * @offline:	Called to put the device offline for hot-removal. May fail.
@@ -93,6 +101,8 @@ struct bus_type {
 	void (*sync_state)(struct device *dev);
 	void (*remove)(struct device *dev);
 	void (*shutdown)(struct device *dev);
+	void (*shutdown_pre)(struct device *dev);
+	void (*shutdown_post)(struct device *dev);
 
 	int (*online)(struct device *dev);
 	int (*offline)(struct device *dev);
-- 
2.35.1.1021.g381101b075-goog



* [PATCH v1 2/3] PCI: Support asynchronous shutdown
  2022-03-28 23:00 ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Tanjore Suresh
@ 2022-03-28 23:00   ` Tanjore Suresh
  2022-03-28 23:00     ` [PATCH v1 3/3] nvme: Add async shutdown support Tanjore Suresh
  2022-03-29  0:19   ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Oliver O'Halloran
  1 sibling, 1 reply; 12+ messages in thread
From: Tanjore Suresh @ 2022-03-28 23:00 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas
  Cc: linux-kernel, linux-nvme, linux-pci, Tanjore Suresh

Enhance the base PCI driver to add support for asynchronous
shutdown.

Assume a device takes n secs to shut down. If a machine is populated
with M such devices, the total time spent shutting down all the
devices will be M * n secs if the shutdown is done synchronously. For
example, if NVMe PCI controllers take 5 secs each to shut down and
there are 16 such controllers in a system, the system will spend a
total of 80 secs shutting down all of its NVMe devices.

To speed up shutdown, an asynchronous shutdown interface has been
implemented. This significantly reduces machine reboot time.

Signed-off-by: Tanjore Suresh <tansuresh@google.com>
---
 drivers/pci/pci-driver.c | 17 ++++++++++++++---
 include/linux/pci.h      |  2 ++
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 4ceeb75fc899..0d0b46d71e88 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -501,14 +501,16 @@ static void pci_device_remove(struct device *dev)
 	pci_dev_put(pci_dev);
 }
 
-static void pci_device_shutdown(struct device *dev)
+static void pci_device_shutdown_pre(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct pci_driver *drv = pci_dev->driver;
 
 	pm_runtime_resume(dev);
 
-	if (drv && drv->shutdown)
+	if (drv && drv->shutdown_pre)
+		drv->shutdown_pre(pci_dev);
+	else if (drv && drv->shutdown)
 		drv->shutdown(pci_dev);
 
 	/*
@@ -522,6 +524,14 @@ static void pci_device_shutdown(struct device *dev)
 		pci_clear_master(pci_dev);
 }
 
+static void pci_device_shutdown_post(struct device *dev)
+{
+	struct pci_dev *pci_dev = to_pci_dev(dev);
+	struct pci_driver *drv = pci_dev->driver;
+
+	if (drv && drv->shutdown_post)
+		drv->shutdown_post(pci_dev);
+}
 #ifdef CONFIG_PM
 
 /* Auxiliary functions used for system resume and run-time resume. */
@@ -1625,7 +1635,8 @@ struct bus_type pci_bus_type = {
 	.uevent		= pci_uevent,
 	.probe		= pci_device_probe,
 	.remove		= pci_device_remove,
-	.shutdown	= pci_device_shutdown,
+	.shutdown_pre	= pci_device_shutdown_pre,
+	.shutdown_post	= pci_device_shutdown_post,
 	.dev_groups	= pci_dev_groups,
 	.bus_groups	= pci_bus_groups,
 	.drv_groups	= pci_drv_groups,
diff --git a/include/linux/pci.h b/include/linux/pci.h
index b957eeb89c7a..19047fcb3c8a 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -905,6 +905,8 @@ struct pci_driver {
 	int  (*suspend)(struct pci_dev *dev, pm_message_t state);	/* Device suspended */
 	int  (*resume)(struct pci_dev *dev);	/* Device woken up */
 	void (*shutdown)(struct pci_dev *dev);
+	void (*shutdown_pre)(struct pci_dev *dev);
+	void (*shutdown_post)(struct pci_dev *dev);
 	int  (*sriov_configure)(struct pci_dev *dev, int num_vfs); /* On PF */
 	int  (*sriov_set_msix_vec_count)(struct pci_dev *vf, int msix_vec_count); /* On PF */
 	u32  (*sriov_get_vf_total_msix)(struct pci_dev *pf);
-- 
2.35.1.1021.g381101b075-goog



* [PATCH v1 3/3] nvme: Add async shutdown support
  2022-03-28 23:00   ` [PATCH v1 2/3] PCI: Support asynchronous shutdown Tanjore Suresh
@ 2022-03-28 23:00     ` Tanjore Suresh
  0 siblings, 0 replies; 12+ messages in thread
From: Tanjore Suresh @ 2022-03-28 23:00 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas
  Cc: linux-kernel, linux-nvme, linux-pci, Tanjore Suresh

This works with the asynchronous shutdown mechanism set up for PCI
drivers and provides both pre- and post-shutdown routines at the
pci_driver structure level.

The shutdown_pre routine starts the shutdown and does not wait for it
to complete. The shutdown_post routine waits for the shutdown to
complete on the individual controllers that this driver instance
controls. This mechanism speeds up shutdown on systems that host many
controllers.

Signed-off-by: Tanjore Suresh <tansuresh@google.com>
---
 drivers/nvme/host/core.c | 28 ++++++++++----
 drivers/nvme/host/nvme.h |  8 ++++
 drivers/nvme/host/pci.c  | 80 +++++++++++++++++++++++++---------------
 3 files changed, 80 insertions(+), 36 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 677fa4bf76d3..24b08789fd34 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2173,16 +2173,30 @@ EXPORT_SYMBOL_GPL(nvme_enable_ctrl);
 
 int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl)
 {
-	unsigned long timeout = jiffies + (ctrl->shutdown_timeout * HZ);
-	u32 csts;
 	int ret;
 
+	ret = nvme_shutdown_ctrl_start(ctrl);
+	if (ret)
+		return ret;
+	return nvme_wait_for_shutdown_cmpl(ctrl);
+}
+EXPORT_SYMBOL_GPL(nvme_shutdown_ctrl);
+
+int nvme_shutdown_ctrl_start(struct nvme_ctrl *ctrl)
+{
+
 	ctrl->ctrl_config &= ~NVME_CC_SHN_MASK;
 	ctrl->ctrl_config |= NVME_CC_SHN_NORMAL;
 
-	ret = ctrl->ops->reg_write32(ctrl, NVME_REG_CC, ctrl->ctrl_config);
-	if (ret)
-		return ret;
+	return ctrl->ops->reg_write32(ctrl, NVME_REG_CC, ctrl->ctrl_config);
+}
+EXPORT_SYMBOL_GPL(nvme_shutdown_ctrl_start);
+
+int nvme_wait_for_shutdown_cmpl(struct nvme_ctrl *ctrl)
+{
+	unsigned long deadline = jiffies + (ctrl->shutdown_timeout * HZ);
+	u32 csts;
+	int ret;
 
 	while ((ret = ctrl->ops->reg_read32(ctrl, NVME_REG_CSTS, &csts)) == 0) {
 		if ((csts & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_CMPLT)
@@ -2191,7 +2205,7 @@ int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl)
 		msleep(100);
 		if (fatal_signal_pending(current))
 			return -EINTR;
-		if (time_after(jiffies, timeout)) {
+		if (time_after(jiffies, deadline)) {
 			dev_err(ctrl->device,
 				"Device shutdown incomplete; abort shutdown\n");
 			return -ENODEV;
@@ -2200,7 +2214,7 @@ int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(nvme_shutdown_ctrl);
+EXPORT_SYMBOL_GPL(nvme_wait_for_shutdown_cmpl);
 
 static int nvme_configure_timestamp(struct nvme_ctrl *ctrl)
 {
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f4b674a8ce20..87f5803ef577 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -170,6 +170,12 @@ enum {
 	NVME_REQ_USERCMD		= (1 << 1),
 };
 
+enum shutdown_type {
+	DO_NOT_SHUTDOWN = 0,
+	SHUTDOWN_TYPE_SYNC = 1,
+	SHUTDOWN_TYPE_ASYNC = 2,
+};
+
 static inline struct nvme_request *nvme_req(struct request *req)
 {
 	return blk_mq_rq_to_pdu(req);
@@ -672,6 +678,8 @@ bool nvme_wait_reset(struct nvme_ctrl *ctrl);
 int nvme_disable_ctrl(struct nvme_ctrl *ctrl);
 int nvme_enable_ctrl(struct nvme_ctrl *ctrl);
 int nvme_shutdown_ctrl(struct nvme_ctrl *ctrl);
+int nvme_shutdown_ctrl_start(struct nvme_ctrl *ctrl);
+int nvme_wait_for_shutdown_cmpl(struct nvme_ctrl *ctrl);
 int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
 		const struct nvme_ctrl_ops *ops, unsigned long quirks);
 void nvme_uninit_ctrl(struct nvme_ctrl *ctrl);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2e98ac3f3ad6..dc72fe7d8994 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -107,7 +107,7 @@ MODULE_PARM_DESC(noacpi, "disable acpi bios quirks");
 struct nvme_dev;
 struct nvme_queue;
 
-static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown);
+static void nvme_dev_disable(struct nvme_dev *dev, int shutdown_type);
 static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode);
 
 /*
@@ -1357,7 +1357,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 	 */
 	if (nvme_should_reset(dev, csts)) {
 		nvme_warn_reset(dev, csts);
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, DO_NOT_SHUTDOWN);
 		nvme_reset_ctrl(&dev->ctrl);
 		return BLK_EH_DONE;
 	}
@@ -1392,7 +1392,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 			 "I/O %d QID %d timeout, disable controller\n",
 			 req->tag, nvmeq->qid);
 		nvme_req(req)->flags |= NVME_REQ_CANCELLED;
-		nvme_dev_disable(dev, true);
+		nvme_dev_disable(dev, SHUTDOWN_TYPE_SYNC);
 		return BLK_EH_DONE;
 	case NVME_CTRL_RESETTING:
 		return BLK_EH_RESET_TIMER;
@@ -1410,7 +1410,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 			 "I/O %d QID %d timeout, reset controller\n",
 			 req->tag, nvmeq->qid);
 		nvme_req(req)->flags |= NVME_REQ_CANCELLED;
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, DO_NOT_SHUTDOWN);
 		nvme_reset_ctrl(&dev->ctrl);
 
 		return BLK_EH_DONE;
@@ -1503,11 +1503,13 @@ static void nvme_suspend_io_queues(struct nvme_dev *dev)
 		nvme_suspend_queue(&dev->queues[i]);
 }
 
-static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown)
+static void nvme_disable_admin_queue(struct nvme_dev *dev, int shutdown_type)
 {
 	struct nvme_queue *nvmeq = &dev->queues[0];
 
-	if (shutdown)
+	if (shutdown_type == SHUTDOWN_TYPE_ASYNC)
+		nvme_shutdown_ctrl_start(&dev->ctrl);
+	else if (shutdown_type == SHUTDOWN_TYPE_SYNC)
 		nvme_shutdown_ctrl(&dev->ctrl);
 	else
 		nvme_disable_ctrl(&dev->ctrl);
@@ -2669,7 +2671,7 @@ static void nvme_pci_disable(struct nvme_dev *dev)
 	}
 }
 
-static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+static void nvme_dev_disable(struct nvme_dev *dev, int shutdown_type)
 {
 	bool dead = true, freeze = false;
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
@@ -2691,14 +2693,14 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	 * Give the controller a chance to complete all entered requests if
 	 * doing a safe shutdown.
 	 */
-	if (!dead && shutdown && freeze)
+	if (!dead && (shutdown_type != DO_NOT_SHUTDOWN) && freeze)
 		nvme_wait_freeze_timeout(&dev->ctrl, NVME_IO_TIMEOUT);
 
 	nvme_stop_queues(&dev->ctrl);
 
 	if (!dead && dev->ctrl.queue_count > 0) {
 		nvme_disable_io_queues(dev);
-		nvme_disable_admin_queue(dev, shutdown);
+		nvme_disable_admin_queue(dev, shutdown_type);
 	}
 	nvme_suspend_io_queues(dev);
 	nvme_suspend_queue(&dev->queues[0]);
@@ -2710,12 +2712,12 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	blk_mq_tagset_wait_completed_request(&dev->tagset);
 	blk_mq_tagset_wait_completed_request(&dev->admin_tagset);
 
-	/*
-	 * The driver will not be starting up queues again if shutting down so
-	 * must flush all entered requests to their failed completion to avoid
-	 * deadlocking blk-mq hot-cpu notifier.
-	 */
-	if (shutdown) {
+	if (shutdown_type == SHUTDOWN_TYPE_SYNC) {
+		/*
+		 * The driver will not be starting up queues again if shutting down so
+		 * must flush all entered requests to their failed completion to avoid
+		 * deadlocking blk-mq hot-cpu notifier.
+		 */
 		nvme_start_queues(&dev->ctrl);
 		if (dev->ctrl.admin_q && !blk_queue_dying(dev->ctrl.admin_q))
 			nvme_start_admin_queue(&dev->ctrl);
@@ -2723,11 +2725,11 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	mutex_unlock(&dev->shutdown_lock);
 }
 
-static int nvme_disable_prepare_reset(struct nvme_dev *dev, bool shutdown)
+static int nvme_disable_prepare_reset(struct nvme_dev *dev, int type)
 {
 	if (!nvme_wait_reset(&dev->ctrl))
 		return -EBUSY;
-	nvme_dev_disable(dev, shutdown);
+	nvme_dev_disable(dev, type);
 	return 0;
 }
 
@@ -2785,7 +2787,7 @@ static void nvme_remove_dead_ctrl(struct nvme_dev *dev)
 	 */
 	nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
 	nvme_get_ctrl(&dev->ctrl);
-	nvme_dev_disable(dev, false);
+	nvme_dev_disable(dev, DO_NOT_SHUTDOWN);
 	nvme_kill_queues(&dev->ctrl);
 	if (!queue_work(nvme_wq, &dev->remove_work))
 		nvme_put_ctrl(&dev->ctrl);
@@ -2810,7 +2812,7 @@ static void nvme_reset_work(struct work_struct *work)
 	 * moving on.
 	 */
 	if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, DO_NOT_SHUTDOWN);
 	nvme_sync_queues(&dev->ctrl);
 
 	mutex_lock(&dev->shutdown_lock);
@@ -3151,7 +3153,7 @@ static void nvme_reset_prepare(struct pci_dev *pdev)
 	 * state as pci_dev device lock is held, making it impossible to race
 	 * with ->remove().
 	 */
-	nvme_disable_prepare_reset(dev, false);
+	nvme_disable_prepare_reset(dev, DO_NOT_SHUTDOWN);
 	nvme_sync_queues(&dev->ctrl);
 }
 
@@ -3163,13 +3165,32 @@ static void nvme_reset_done(struct pci_dev *pdev)
 		flush_work(&dev->ctrl.reset_work);
 }
 
-static void nvme_shutdown(struct pci_dev *pdev)
+static void nvme_shutdown_pre(struct pci_dev *pdev)
 {
 	struct nvme_dev *dev = pci_get_drvdata(pdev);
 
-	nvme_disable_prepare_reset(dev, true);
+	nvme_disable_prepare_reset(dev, SHUTDOWN_TYPE_ASYNC);
 }
 
+static void nvme_shutdown_post(struct pci_dev *pdev)
+{
+	struct nvme_dev *dev = pci_get_drvdata(pdev);
+
+	mutex_lock(&dev->shutdown_lock);
+	nvme_wait_for_shutdown_cmpl(&dev->ctrl);
+
+	/*
+	 * The driver will not be starting up queues again if shutting down so
+	 * must flush all entered requests to their failed completion to avoid
+	 * deadlocking blk-mq hot-cpu notifier.
+	 */
+	nvme_start_queues(&dev->ctrl);
+	if (dev->ctrl.admin_q && !blk_queue_dying(dev->ctrl.admin_q))
+		nvme_start_admin_queue(&dev->ctrl);
+
+	mutex_unlock(&dev->shutdown_lock);
+}
+
 static void nvme_remove_attrs(struct nvme_dev *dev)
 {
 	if (dev->attrs_added)
@@ -3191,13 +3212,13 @@ static void nvme_remove(struct pci_dev *pdev)
 
 	if (!pci_device_is_present(pdev)) {
 		nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
-		nvme_dev_disable(dev, true);
+		nvme_dev_disable(dev, SHUTDOWN_TYPE_SYNC);
 	}
 
 	flush_work(&dev->ctrl.reset_work);
 	nvme_stop_ctrl(&dev->ctrl);
 	nvme_remove_namespaces(&dev->ctrl);
-	nvme_dev_disable(dev, true);
+	nvme_dev_disable(dev, SHUTDOWN_TYPE_SYNC);
 	nvme_remove_attrs(dev);
 	nvme_free_host_mem(dev);
 	nvme_dev_remove_admin(dev);
@@ -3259,7 +3280,7 @@ static int nvme_suspend(struct device *dev)
 	if (pm_suspend_via_firmware() || !ctrl->npss ||
 	    !pcie_aspm_enabled(pdev) ||
 	    (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
-		return nvme_disable_prepare_reset(ndev, true);
+		return nvme_disable_prepare_reset(ndev, SHUTDOWN_TYPE_SYNC);
 
 	nvme_start_freeze(ctrl);
 	nvme_wait_freeze(ctrl);
@@ -3302,7 +3323,7 @@ static int nvme_suspend(struct device *dev)
 		 * Clearing npss forces a controller reset on resume. The
 		 * correct value will be rediscovered then.
 		 */
-		ret = nvme_disable_prepare_reset(ndev, true);
+		ret = nvme_disable_prepare_reset(ndev, SHUTDOWN_TYPE_SYNC);
 		ctrl->npss = 0;
 	}
 unfreeze:
@@ -3314,7 +3335,7 @@ static int nvme_simple_suspend(struct device *dev)
 {
 	struct nvme_dev *ndev = pci_get_drvdata(to_pci_dev(dev));
 
-	return nvme_disable_prepare_reset(ndev, true);
+	return nvme_disable_prepare_reset(ndev, SHUTDOWN_TYPE_SYNC);
 }
 
 static int nvme_simple_resume(struct device *dev)
@@ -3351,7 +3372,7 @@ static pci_ers_result_t nvme_error_detected(struct pci_dev *pdev,
 	case pci_channel_io_frozen:
 		dev_warn(dev->ctrl.device,
 			"frozen state error detected, reset controller\n");
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, DO_NOT_SHUTDOWN);
 		return PCI_ERS_RESULT_NEED_RESET;
 	case pci_channel_io_perm_failure:
 		dev_warn(dev->ctrl.device,
@@ -3478,7 +3499,8 @@ static struct pci_driver nvme_driver = {
 	.id_table	= nvme_id_table,
 	.probe		= nvme_probe,
 	.remove		= nvme_remove,
-	.shutdown	= nvme_shutdown,
+	.shutdown_pre   = nvme_shutdown_pre,
+	.shutdown_post  = nvme_shutdown_post,
 #ifdef CONFIG_PM_SLEEP
 	.driver		= {
 		.pm	= &nvme_dev_pm_ops,
-- 
2.35.1.1021.g381101b075-goog



* Re: [PATCH v1 1/3] driver core: Support asynchronous driver shutdown
  2022-03-28 23:00 ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Tanjore Suresh
  2022-03-28 23:00   ` [PATCH v1 2/3] PCI: Support asynchronous shutdown Tanjore Suresh
@ 2022-03-29  0:19   ` Oliver O'Halloran
  2022-03-30 14:12     ` Belanger, Martin
  2022-03-31 16:57     ` Jonathan Derrick
  1 sibling, 2 replies; 12+ messages in thread
From: Oliver O'Halloran @ 2022-03-29  0:19 UTC (permalink / raw)
  To: Tanjore Suresh
  Cc: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas, Linux Kernel Mailing List,
	linux-nvme, linux-pci

On Tue, Mar 29, 2022 at 10:35 AM Tanjore Suresh <tansuresh@google.com> wrote:
>
> This changes the bus driver interface with additional entry points
> to enable devices to implement asynchronous shutdown. The existing
> synchronous interface to shutdown is unmodified and retained for
> backward compatibility.
>
> This changes the common device shutdown code to enable devices to
> participate in asynchronous shutdown implementation.

nice to see someone looking at improving the shutdown path

> Signed-off-by: Tanjore Suresh <tansuresh@google.com>
> ---
>  drivers/base/core.c        | 39 +++++++++++++++++++++++++++++++++++++-
>  include/linux/device/bus.h | 10 ++++++++++
>  2 files changed, 48 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/base/core.c b/drivers/base/core.c
> index 3d6430eb0c6a..359e7067e8b8 100644
> --- a/drivers/base/core.c
> +++ b/drivers/base/core.c
> @@ -4479,6 +4479,7 @@ EXPORT_SYMBOL_GPL(device_change_owner);
> *snip*

This all seems a bit dangerous and I'm wondering what systems you've
tested these changes with. I had a look at implementing something
similar a few years ago and one case that always concerned me was
embedded systems where the PCIe root complex also has a driver bound.
Say you've got the following PCIe topology:

00:00.0 - root port
01:00.0 - nvme drive

With the current implementation of device_shutdown() we can guarantee
that the child device (the nvme) is shut down before we start trying
to shut down the parent device (the root complex) so there's no
possibility of deadlocks and other dependency headaches. With this
implementation of async shutdown we lose that guarantee and I'm not
sure what the consequences are. Personally I was never able to
convince myself it was safe, but maybe you're braver than I am :)

That all said, there's probably only a few kinds of device that will
really want to implement async shutdown support so maybe you can
restrict it to leaf devices and flip the ordering around to something
like:

for_each_device(dev) {
   if (can_async(dev) && has_no_children(dev))
      start_async_shutdown(dev)
}
wait_for_all_async_shutdowns_to_finish()

// tear down the remaining system devices synchronously
for_each_device(dev)
   do_sync_shutdown(dev)
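
Fleshed out a little in C, that might look like the following
(completely untested; can_async_shutdown() and device_has_children()
are invented helpers, and this assumes it lives in drivers/base/core.c
where devices_kset is visible):

#include <linux/async.h>

static ASYNC_DOMAIN(shutdown_domain);

static void async_shutdown_one(void *data, async_cookie_t cookie)
{
	struct device *dev = data;

	if (dev->bus && dev->bus->shutdown)
		dev->bus->shutdown(dev);
}

static void two_pass_shutdown(void)
{
	struct device *dev;

	/* pass 1: opted-in leaf devices go down in parallel */
	list_for_each_entry_reverse(dev, &devices_kset->list, kobj.entry)
		if (can_async_shutdown(dev) && !device_has_children(dev))
			async_schedule_domain(async_shutdown_one, dev,
					      &shutdown_domain);

	async_synchronize_full_domain(&shutdown_domain);

	/* pass 2: everything else, children before parents as today */
	list_for_each_entry_reverse(dev, &devices_kset->list, kobj.entry)
		if (!can_async_shutdown(dev) || device_has_children(dev))
			if (dev->bus && dev->bus->shutdown)
				dev->bus->shutdown(dev);
}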

>  /*
> diff --git a/include/linux/device/bus.h b/include/linux/device/bus.h
> index a039ab809753..e261819601e9 100644
> --- a/include/linux/device/bus.h
> +++ b/include/linux/device/bus.h
> @@ -93,6 +101,8 @@ struct bus_type {
>         void (*sync_state)(struct device *dev);
>         void (*remove)(struct device *dev);
>         void (*shutdown)(struct device *dev);
> +       void (*shutdown_pre)(struct device *dev);
> +       void (*shutdown_post)(struct device *dev);

Call them shutdown_async_start() / shutdown_async_end() or something
IMO. These names are not at all helpful, and it's easy to mix up
their role with the class-based shutdown_pre / _post.


* Re: [PATCH v1 0/3] Asynchronous shutdown interface and example implementation
  2022-03-28 23:00 [PATCH v1 0/3] Asynchronous shutdown interface and example implementation Tanjore Suresh
  2022-03-28 23:00 ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Tanjore Suresh
@ 2022-03-29  5:26 ` Greg Kroah-Hartman
  2022-03-30  2:07 ` Keith Busch
  2 siblings, 0 replies; 12+ messages in thread
From: Greg Kroah-Hartman @ 2022-03-29  5:26 UTC (permalink / raw)
  To: Tanjore Suresh
  Cc: Rafael J . Wysocki, Christoph Hellwig, Sagi Grimberg,
	Bjorn Helgaas, linux-kernel, linux-nvme, linux-pci

On Mon, Mar 28, 2022 at 04:00:05PM -0700, Tanjore Suresh wrote:
> Problem:
> 
> Some of our machines are configured with many NVMe devices and
> are validated for strict shutdown time requirements. Each NVMe
> device plugged into the system typically takes about 4.5 secs
> to shut down. A system with 16 such NVMe devices takes
> approximately 80 secs to shut down and go through reboot.
> 
> The shutdown API defined at the bus level is synchronous.
> Therefore, the more devices in the system, the longer it takes
> to shut down. This shutdown time contributes significantly to
> machine reboot time.
> 
> Solution:
> 
> This patch set proposes an asynchronous shutdown interface at the bus
> level, modifies the core device shutdown routine to use the new
> interface while maintaining backward compatibility with the existing
> synchronous implementation (Patch 1 of 3), and extends the new
> interface so that all PCIe-based devices can use asynchronous shutdown
> semantics if necessary (Patch 2 of 3). The PCIe-level implementation
> also works in a backward-compatible way, so existing driver
> implementations keep working with the current synchronous semantics.
> An example implementation shows how an NVMe device can exploit this
> asynchronous shutdown interface (Patch 3 of 3).
> 
> Tanjore Suresh (3):
>   driver core: Support asynchronous driver shutdown
>   PCI: Support asynchronous shutdown
>   nvme: Add async shutdown support
> 
>  drivers/base/core.c        | 39 ++++++++++++++++++-
>  drivers/nvme/host/core.c   | 28 +++++++++----
>  drivers/nvme/host/nvme.h   |  8 ++++
>  drivers/nvme/host/pci.c    | 80 ++++++++++++++++++++++++--------------
>  drivers/pci/pci-driver.c   | 17 ++++++--
>  include/linux/device/bus.h | 10 +++++
>  include/linux/pci.h        |  2 +
>  7 files changed, 144 insertions(+), 40 deletions(-)
> 
> -- 
> 2.35.1.1021.g381101b075-goog
> 

Hi,

This is the friendly patch-bot of Greg Kroah-Hartman.  You have sent him
a patch that has triggered this response.  He used to manually respond
to these common problems, but in order to save his sanity (he kept
writing the same thing over and over, yet to different people), I was
created.  Hopefully you will not take offence and will fix the problem
in your patch and resubmit it so that it can be accepted into the Linux
kernel tree.

You are receiving this message because of the following common error(s)
as indicated below:

- This looks like a new version of a previously submitted patch, but you
  did not list below the --- line any changes from the previous version.
  Please read the section entitled "The canonical patch format" in the
  kernel file, Documentation/SubmittingPatches for what needs to be done
  here to properly describe this.

If you wish to discuss this problem further, or you have questions about
how to resolve this issue, please feel free to respond to this email and
Greg will reply once he has dug out from the pending patches received
from other developers.

thanks,

greg k-h's patch email bot


* Re: [PATCH v1 0/3] Asynchronous shutdown interface and example implementation
  2022-03-28 23:00 [PATCH v1 0/3] Asynchronous shutdown interface and example implementation Tanjore Suresh
  2022-03-28 23:00 ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Tanjore Suresh
  2022-03-29  5:26 ` [PATCH v1 0/3] Asynchronous shutdown interface and example implementation Greg Kroah-Hartman
@ 2022-03-30  2:07 ` Keith Busch
  2022-03-30  6:25   ` Lukas Wunner
  2 siblings, 1 reply; 12+ messages in thread
From: Keith Busch @ 2022-03-30  2:07 UTC (permalink / raw)
  To: Tanjore Suresh
  Cc: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas, linux-kernel, linux-nvme,
	linux-pci

On Mon, Mar 28, 2022 at 04:00:05PM -0700, Tanjore Suresh wrote:
> Problem:
> 
> Some of our machines are configured with many NVMe devices and
> are validated for strict shutdown time requirements. Each NVMe
> device plugged into the system typically takes about 4.5 secs
> to shut down. A system with 16 such NVMe devices takes
> approximately 80 secs to shut down and go through reboot.
> 
> The shutdown API defined at the bus level is synchronous.
> Therefore, the more devices in the system, the longer it takes
> to shut down. This shutdown time contributes significantly to
> machine reboot time.
> 
> Solution:
> 
> This patch set proposes an asynchronous shutdown interface at the bus
> level, modifies the core device shutdown routine to use the new
> interface while maintaining backward compatibility with the existing
> synchronous implementation (Patch 1 of 3), and extends the new
> interface so that all PCIe-based devices can use asynchronous shutdown
> semantics if necessary (Patch 2 of 3). The PCIe-level implementation
> also works in a backward-compatible way, so existing driver
> implementations keep working with the current synchronous semantics.
> An example implementation shows how an NVMe device can exploit this
> asynchronous shutdown interface (Patch 3 of 3).

Thanks, I agree we should improve shutdown times. I tried a while ago, but
lost track of the follow-up at the time. Here's the reference, fwiw, though
it may be out of date :):

  http://lists.infradead.org/pipermail/linux-nvme/2014-May/000826.html

The above solution is similar to how probe waits on an async domain.
Maybe pci can schedule the async shutdown instead of relying on low-level
drivers so that everyone implicitly benefits instead of just nvme? I'll
double-check if that's reasonable, but I'll look through this series too.
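
As a strawman (completely untested; the final synchronize would have
to be hooked in somewhere late in the shutdown sequence, which is
hand-waved here):

#include <linux/async.h>

static ASYNC_DOMAIN(pci_shutdown_domain);

static void pci_async_shutdown_fn(void *data, async_cookie_t cookie)
{
	struct device *dev = data;

	/* the existing synchronous callback, just run in parallel */
	pci_device_shutdown(dev);
}

static void pci_device_shutdown_async(struct device *dev)
{
	async_schedule_domain(pci_async_shutdown_fn, dev,
			      &pci_shutdown_domain);
}

/* called once device_shutdown() has queued everything */
static void pci_shutdown_wait(void)
{
	async_synchronize_full_domain(&pci_shutdown_domain);
}

That way every PCI driver's ->shutdown would overlap without any
driver changes, though the parent/child ordering question raised
elsewhere in the thread would still need an answer.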


* Re: [PATCH v1 0/3] Asynchronous shutdown interface and example implementation
  2022-03-30  2:07 ` Keith Busch
@ 2022-03-30  6:25   ` Lukas Wunner
  2022-03-30 11:13     ` Rafael J. Wysocki
  0 siblings, 1 reply; 12+ messages in thread
From: Lukas Wunner @ 2022-03-30  6:25 UTC (permalink / raw)
  To: Keith Busch
  Cc: Tanjore Suresh, Greg Kroah-Hartman, Rafael J . Wysocki,
	Christoph Hellwig, Sagi Grimberg, Bjorn Helgaas, linux-kernel,
	linux-nvme, linux-pci

On Tue, Mar 29, 2022 at 08:07:51PM -0600, Keith Busch wrote:
> Thanks, I agree we should improve shutdown times. I tried a while ago, but
> lost track of the follow-up at the time. Here's the reference, fwiw, though
> it may be out of date :):
> 
>   http://lists.infradead.org/pipermail/linux-nvme/2014-May/000826.html
> 
> The above solution is similar to how probe waits on an async domain.
> Maybe pci can schedule the async shutdown instead of relying on low-level
> drivers so that everyone implicitly benefits instead of just nvme? I'll
> double-check if that's reasonable, but I'll look through this series too.

Using the async API seems much more reasonable than adding new callbacks.

However I'd argue that it shouldn't be necessary to amend any drivers,
this should all be doable in the driver core:  Basically a device needs
to wait for its children and device links consumers to shutdown, apart
from that everything should be able to run asynchronously.
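
Roughly along these lines (invented helpers, just to sketch the
dependency wait; the real device-links handling needs more care):

static int wait_for_child_shutdown(struct device *dev, void *data)
{
	wait_for_shutdown(dev);		/* hypothetical helper */
	return 0;
}

static void async_shutdown_dev(void *data, async_cookie_t cookie)
{
	struct device *dev = data;
	struct device_link *link;

	/* children first */
	device_for_each_child(dev, NULL, wait_for_child_shutdown);

	/* then consumers that depend on this supplier */
	list_for_each_entry(link, &dev->links.consumers, s_node)
		wait_for_shutdown(link->consumer);

	if (dev->bus && dev->bus->shutdown)
		dev->bus->shutdown(dev);
	mark_shutdown_done(dev);	/* hypothetical helper */
}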

Thanks,

Lukas


* Re: [PATCH v1 0/3] Asynchronous shutdown interface and example implementation
  2022-03-30  6:25   ` Lukas Wunner
@ 2022-03-30 11:13     ` Rafael J. Wysocki
  0 siblings, 0 replies; 12+ messages in thread
From: Rafael J. Wysocki @ 2022-03-30 11:13 UTC (permalink / raw)
  To: Lukas Wunner
  Cc: Keith Busch, Tanjore Suresh, Greg Kroah-Hartman,
	Rafael J . Wysocki, Christoph Hellwig, Sagi Grimberg,
	Bjorn Helgaas, Linux Kernel Mailing List, linux-nvme, Linux PCI

On Wed, Mar 30, 2022 at 8:25 AM Lukas Wunner <lukas@wunner.de> wrote:
>
> On Tue, Mar 29, 2022 at 08:07:51PM -0600, Keith Busch wrote:
> > Thanks, I agree we should improve shutdown times. I tried a while ago, but
> > lost track to follow up at the time. Here's the reference, fwiw, though it
> > may be out of date :):
> >
> >   http://lists.infradead.org/pipermail/linux-nvme/2014-May/000826.html
> >
> > The above solution is similar to how probe waits on an async domain.
> > Maybe pci can schedule the async shutdown instead of relying on low-level
> > drivers so that everyone implicitly benefits instead of just nvme? I'll
> > double-check if that's reasonable, but I'll look through this series too.
>
> Using the async API seems much more reasonable than adding new callbacks.
>
> However I'd argue that it shouldn't be necessary to amend any drivers,
> this should all be doable in the driver core:  Basically a device needs
> to wait for its children and device links consumers to shutdown, apart
> from that everything should be able to run asynchronously.

Well, this is done already in the system-wide and hibernation paths.
It should be possible to implement asynchronous shutdown analogously.


* RE: [PATCH v1 1/3] driver core: Support asynchronous driver shutdown
  2022-03-29  0:19   ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Oliver O'Halloran
@ 2022-03-30 14:12     ` Belanger, Martin
  2022-03-31 12:07       ` Daniel Wagner
  2022-03-31 16:57     ` Jonathan Derrick
  1 sibling, 1 reply; 12+ messages in thread
From: Belanger, Martin @ 2022-03-30 14:12 UTC (permalink / raw)
  To: Oliver O'Halloran, Tanjore Suresh
  Cc: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas, Linux Kernel Mailing List,
	linux-nvme, linux-pci

> From: Linux-nvme <linux-nvme-bounces@lists.infradead.org> On Behalf Of
> Oliver O'Halloran
> Sent: Monday, March 28, 2022 8:20 PM
> To: Tanjore Suresh
> Cc: Greg Kroah-Hartman; Rafael J . Wysocki; Christoph Hellwig; Sagi Grimberg;
> Bjorn Helgaas; Linux Kernel Mailing List; linux-nvme@lists.infradead.org; linux-
> pci
> Subject: Re: [PATCH v1 1/3] driver core: Support asynchronous driver shutdown
> 
> 
> On Tue, Mar 29, 2022 at 10:35 AM Tanjore Suresh <tansuresh@google.com>
> wrote:
> >
> > This changes the bus driver interface with additional entry points to
> > enable devices to implement asynchronous shutdown. The existing
> > synchronous interface to shutdown is unmodified and retained for
> > backward compatibility.
> >
> > This changes the common device shutdown code to enable devices to
> > participate in asynchronous shutdown implementation.
> 
> nice to see someone looking at improving the shutdown path

Agreed!

I know this patch is mainly for PCI devices; however, NVMe over Fabrics
devices can suffer even longer shutdowns. Last September, I reported
that shutting down an NVMe-oF TCP connection while the network is down
will result in a 1-minute deadlock. That's because the driver tries to
perform a proper shutdown by sending commands to the remote target, and
the timeout for unanswered commands is 1 minute. If one needs to shut
down several NVMe-oF connections, each connection will be shut down
sequentially, taking 1 minute each. Try running "nvme disconnect-all"
while the network is down and you'll see what I mean. Of course, the
KATO is supposed to detect when connectivity is lost, but if you have a
long KATO (e.g. 2 minutes) you will most likely hit this condition.

Here's the patch I proposed in September, which shortens the timeout to 
5 sec on a disconnect.

http://lists.infradead.org/pipermail/linux-nvme/2021-September/027867.html

Regards,
Martin Belanger

> 
> > Signed-off-by: Tanjore Suresh <tansuresh@google.com>
> > ---
> >  drivers/base/core.c        | 39 +++++++++++++++++++++++++++++++++++++-
> >  include/linux/device/bus.h | 10 ++++++++++
> >  2 files changed, 48 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/base/core.c b/drivers/base/core.c index
> > 3d6430eb0c6a..359e7067e8b8 100644
> > --- a/drivers/base/core.c
> > +++ b/drivers/base/core.c
> > @@ -4479,6 +4479,7 @@ EXPORT_SYMBOL_GPL(device_change_owner);
> > *snip*
> 
> This all seems a bit dangerous and I'm wondering what systems you've tested
> these changes with. I had a look at implementing something similar a few years
> ago and one case that always concerned me was embedded systems where the
> PCIe root complex also has a driver bound.
> Say you've got the following PCIe topology:
> 
> 00:00.0 - root port
> 01:00.0 - nvme drive
> 
> With the current implementation of device_shutdown() we can guarantee that
> the child device (the nvme) is shut down before we start trying to shut down the
> parent device (the root complex) so there's no possibility of deadlocks and
> other dependency headaches. With this implementation of async shutdown we
> lose that guarantee and I'm not sure what the consequences are. Personally I
> was never able to convince myself it was safe, but maybe you're braver than I
> am :)
> 
> That all said, there's probably only a few kinds of device that will really want to
> implement async shutdown support so maybe you can restrict it to leaf devices
> and flip the ordering around to something
> like:
> 
> for_each_device(dev) {
>    if (can_async(dev) && has_no_children(dev))
>       start_async_shutdown(dev)
> }
> wait_for_all_async_shutdowns_to_finish()
> 
> // tear down the remaining system devices synchronously
> for_each_device(dev)
>    do_sync_shutdown(dev)
> 
> >  /*
> > diff --git a/include/linux/device/bus.h b/include/linux/device/bus.h
> > index a039ab809753..e261819601e9 100644
> > --- a/include/linux/device/bus.h
> > +++ b/include/linux/device/bus.h
> > @@ -93,6 +101,8 @@ struct bus_type {
> >         void (*sync_state)(struct device *dev);
> >         void (*remove)(struct device *dev);
> >         void (*shutdown)(struct device *dev);
> > +       void (*shutdown_pre)(struct device *dev);
> > +       void (*shutdown_post)(struct device *dev);
> 
> Call them shutdown_async_start() / shutdown_async_end() or something IMO.
> These names are not at all helpful, and it's easy to mix up their role
> with the class-based shutdown_pre / _post



* Re: [PATCH v1 1/3] driver core: Support asynchronous driver shutdown
  2022-03-30 14:12     ` Belanger, Martin
@ 2022-03-31 12:07       ` Daniel Wagner
  0 siblings, 0 replies; 12+ messages in thread
From: Daniel Wagner @ 2022-03-31 12:07 UTC (permalink / raw)
  To: Belanger, Martin
  Cc: Oliver O'Halloran, Tanjore Suresh, Greg Kroah-Hartman,
	Rafael J . Wysocki, Christoph Hellwig, Sagi Grimberg,
	Bjorn Helgaas, Linux Kernel Mailing List, linux-nvme, linux-pci

On Wed, Mar 30, 2022 at 02:12:18PM +0000, Belanger, Martin wrote:
> I know this patch is mainly for PCI devices; however, NVMe over Fabrics
> devices can suffer even longer shutdowns. Last September, I reported
> that shutting down an NVMe-oF TCP connection while the network is down
> will result in a 1-minute deadlock. That's because the driver tries to
> perform a proper shutdown by sending commands to the remote target, and
> the timeout for unanswered commands is 1 minute. If one needs to shut
> down several NVMe-oF connections, each connection will be shut down
> sequentially, taking 1 minute each. Try running "nvme disconnect-all"
> while the network is down and you'll see what I mean. Of course, the
> KATO is supposed to detect when connectivity is lost, but if you have a
> long KATO (e.g. 2 minutes) you will most likely hit this condition.

I've been debugging something similar:

[44888.710527] nvme nvme0: Removing ctrl: NQN "xxx"
[44898.981684] nvme nvme0: failed to send request -32
[44960.982977] nvme nvme0: queue 0: timeout request 0x18 type 4
[44960.983099] nvme nvme0: Property Set error: 881, offset 0x14

Currently testing this patch:

+++ b/drivers/nvme/host/tcp.c
@@ -1103,9 +1103,12 @@ static int nvme_tcp_try_send(struct nvme_tcp_queue *queue)
        if (ret == -EAGAIN) {
                ret = 0;
        } else if (ret < 0) {
+               struct request *rq = blk_mq_rq_from_pdu(queue->request);
+
                dev_err(queue->ctrl->ctrl.device,
                        "failed to send request %d\n", ret);
-               if (ret != -EPIPE && ret != -ECONNRESET)
+               if ((ret != -EPIPE && ret != -ECONNRESET) ||
+                   rq->cmd_flags & REQ_FAILFAST_DRIVER)
                        nvme_tcp_fail_request(queue->request);
                nvme_tcp_done_send_req(queue);
        }


* Re: [PATCH v1 1/3] driver core: Support asynchronous driver shutdown
  2022-03-29  0:19   ` [PATCH v1 1/3] driver core: Support asynchronous driver shutdown Oliver O'Halloran
  2022-03-30 14:12     ` Belanger, Martin
@ 2022-03-31 16:57     ` Jonathan Derrick
  1 sibling, 0 replies; 12+ messages in thread
From: Jonathan Derrick @ 2022-03-31 16:57 UTC (permalink / raw)
  To: Oliver O'Halloran, Tanjore Suresh
  Cc: Greg Kroah-Hartman, Rafael J . Wysocki, Christoph Hellwig,
	Sagi Grimberg, Bjorn Helgaas, Linux Kernel Mailing List,
	linux-nvme, linux-pci



On 3/28/2022 6:19 PM, Oliver O'Halloran wrote:
> On Tue, Mar 29, 2022 at 10:35 AM Tanjore Suresh <tansuresh@google.com> wrote:
>>
>> This changes the bus driver interface with additional entry points
>> to enable devices to implement asynchronous shutdown. The existing
>> synchronous interface to shutdown is unmodified and retained for
>> backward compatibility.
>>
>> This changes the common device shutdown code to enable devices to
>> participate in asynchronous shutdown implementation.
> 
> nice to see someone looking at improving the shutdown path
> 
>> Signed-off-by: Tanjore Suresh <tansuresh@google.com>
>> ---
>>   drivers/base/core.c        | 39 +++++++++++++++++++++++++++++++++++++-
>>   include/linux/device/bus.h | 10 ++++++++++
>>   2 files changed, 48 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/base/core.c b/drivers/base/core.c
>> index 3d6430eb0c6a..359e7067e8b8 100644
>> --- a/drivers/base/core.c
>> +++ b/drivers/base/core.c
>> @@ -4479,6 +4479,7 @@ EXPORT_SYMBOL_GPL(device_change_owner);
>> *snip*
> 
> This all seems a bit dangerous and I'm wondering what systems you've
> tested these changes with. I had a look at implementing something
> similar a few years ago and one case that always concerned me was
> embedded systems where the PCIe root complex also has a driver bound.
> Say you've got the following PCIe topology:
> 
> 00:00.0 - root port
> 01:00.0 - nvme drive
> 
> With the current implementation of device_shutdown() we can guarantee
> that the child device (the nvme) is shut down before we start trying
> to shut down the parent device (the root complex) so there's no
> possibility of deadlocks and other dependency headaches. With this
> implementation of async shutdown we lose that guarantee and I'm not
> sure what the consequences are. Personally I was never able to
> convince myself it was safe, but maybe you're braver than I am :)
> 
> That all said, there's probably only a few kinds of device that will
> really want to implement async shutdown support so maybe you can
> restrict it to leaf devices and flip the ordering around to something
> like:

It seems like it might be helpful to split the async shutdowns into 
refcounted hierarchies and proceed with the next level up when all the 
refs are in.

Ex:
00:00.0 - RP
   01:00.0 - NVMe A
   02:00.0 - Bridge USP
     03:00.0 - Bridge DSP
       04:00.0 - NVMe B
     03:00.1 - Bridge DSP
       05:00.0 - NVMe C

NVMe A could start shutting down at the beginning of the hierarchy 
traversal. Then async shutdown of bus 3 wouldn't start until all 
children of bus 3 are shut down.

You could probably do this by having the async_shutdown_list in the pci_bus.
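
Roughly (field and helper names invented, just to show the
bookkeeping; initialization at enumeration time is omitted):

/* in a real patch this state might live in struct pci_bus */
struct async_shutdown_node {
	atomic_t		pending;	/* children still shutting down */
	struct completion	done;
};

/* called as each child finishes its async shutdown */
static void child_shutdown_done(struct async_shutdown_node *node)
{
	if (atomic_dec_and_test(&node->pending))
		complete(&node->done);
}

/* a bridge starts its own shutdown only after its subtree is done */
static void shutdown_bridge(struct pci_dev *bridge,
			    struct async_shutdown_node *node)
{
	wait_for_completion(&node->done);
	start_async_shutdown(bridge);	/* hypothetical */
}

So NVMe B's completion would release the 03:00.0 DSP, NVMe C's would
release 03:00.1, and once both DSPs finish, the 02:00.0 USP and then
the root port could go.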

> 
> for_each_device(dev) {
>     if (can_async(dev) && has_no_children(dev))
>        start_async_shutdown(dev)
> }
> wait_for_all_async_shutdowns_to_finish()
> 
> // tear down the remaining system devices synchronously
> for_each_device(dev)
>     do_sync_shutdown(dev)
> 
>>   /*
>> diff --git a/include/linux/device/bus.h b/include/linux/device/bus.h
>> index a039ab809753..e261819601e9 100644
>> --- a/include/linux/device/bus.h
>> +++ b/include/linux/device/bus.h
>> @@ -93,6 +101,8 @@ struct bus_type {
>>          void (*sync_state)(struct device *dev);
>>          void (*remove)(struct device *dev);
>>          void (*shutdown)(struct device *dev);
>> +       void (*shutdown_pre)(struct device *dev);
>> +       void (*shutdown_post)(struct device *dev);
> 
> Call them shutdown_async_start() / shutdown_async_end() or something
> IMO. These names are not at all helpful, and it's easy to mix up
> their role with the class-based shutdown_pre / _post
> 

