* [PATCH v2 0/10] Centralize common fabrics code to core drivers
@ 2021-10-20 10:38 Max Gurtovoy
  2021-10-20 10:38 ` [PATCH 01/10] nvme: add connect_work attribute to nvme ctrl Max Gurtovoy
                   ` (9 more replies)
  0 siblings, 10 replies; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

Hi Christoph, Sagi, Keith and Co,
This series moves the common reconnection, device removal, error
recovery and reset logic into the common fabrics and core drivers.
At this stage it mainly covers the RDMA and TCP drivers, which shared
the same logic for these tasks. The FC driver can adopt this framework
in the future after some preparation work.

This will ease code maintenance and reduce the code duplication
between the RDMA and TCP drivers.

Since some attributes were moved into the generic nvme_ctrl structure,
I've measured the performance impact.

NVMe PCI Perf numbers using NVIDIA's NVMe SNAP Bluefield-2 device
(Fio 16 jobs, 128 iodepth, libaio):

IO size   rand READ (before/after)  rand WRITE (before/after)
-------- -------------------------  --------------------------
 512B          2189k/2197k                2031k/2092k
 1KB           2188k/2201k                2027k/2088k
 2KB           2166k/2181k                2023k/2078k
 4KB           2153k/2169k                2025k/2076k
 8KB           1381k/1381k                1378k/1375k
 16KB          669k/676k                  660k/664k
 32KB          320k/322k                  305k/305k
 64KB          157k/157k                  151k/151k

As the numbers show, there is no measurable performance impact.

Changes from v1:
 - collected "Reviewed-by" signatures (Sagi, Hannes, Chaitanya)
 - added the reset flow to this series (and thus a setup_ctrl callback)
 - added more common code to nvmf_init_ctrl
 - renamed nvmf_teardown_ctrl to nvmf_uninit_ctrl (Sagi)
 - added bool arguments so the RDMA/TCP callbacks can be used directly (Hannes)

Max Gurtovoy (10):
  nvme: add connect_work attribute to nvme ctrl
  nvme-fabrics: introduce nvmf_reconnect_or_remove API
  nvme: add err_work attribute to nvme ctrl
  nvme-fabrics: introduce nvmf_error_recovery API
  nvme/nvme-fabrics: introduce nvmf_error_recovery_work API
  nvme/nvme-fabrics: introduce nvmf_reconnect_ctrl_work API
  nvme-fabrics: add nvmf_init_ctrl/nvmf_uninit_ctrl API
  nvme-rdma: update WARN_ON condition during reset
  nvme/nvme-fabrics: move reset ctrl flow to common code
  nvme-fabrics: set common attributes during nvmf_init_ctrl

 drivers/nvme/host/fabrics.c | 121 +++++++++++++++++
 drivers/nvme/host/fabrics.h |   3 +
 drivers/nvme/host/fc.c      |  23 ++--
 drivers/nvme/host/nvme.h    |   8 ++
 drivers/nvme/host/rdma.c    | 252 +++++++++++-------------------------
 drivers/nvme/host/tcp.c     | 152 +++-------------------
 6 files changed, 241 insertions(+), 318 deletions(-)

-- 
2.18.1



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 01/10] nvme: add connect_work attribute to nvme ctrl
  2021-10-20 10:38 [PATCH v2 0/10] Centralize common fabrics code to core drivers Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-02 22:59   ` James Smart
  2021-10-20 10:38 ` [PATCH 02/10] nvme-fabrics: introduce nvmf_reconnect_or_remove API Max Gurtovoy
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

The connect_work delayed work is duplicated in all the fabric
controllers. Move it to common code.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fc.c   | 23 ++++++++++++-----------
 drivers/nvme/host/nvme.h |  1 +
 drivers/nvme/host/rdma.c | 10 ++++------
 drivers/nvme/host/tcp.c  |  9 ++++-----
 4 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index aa14ad963d91..4c7dffa8126e 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -167,7 +167,6 @@ struct nvme_fc_ctrl {
 	struct blk_mq_tag_set	tag_set;
 
 	struct work_struct	ioerr_work;
-	struct delayed_work	connect_work;
 
 	struct kref		ref;
 	unsigned long		flags;
@@ -567,7 +566,7 @@ nvme_fc_resume_controller(struct nvme_fc_ctrl *ctrl)
 			"NVME-FC{%d}: connectivity re-established. "
 			"Attempting reconnect\n", ctrl->cnum);
 
-		queue_delayed_work(nvme_wq, &ctrl->connect_work, 0);
+		queue_delayed_work(nvme_wq, &ctrl->ctrl.connect_work, 0);
 		break;
 
 	case NVME_CTRL_RESETTING:
@@ -3263,7 +3262,7 @@ nvme_fc_delete_ctrl(struct nvme_ctrl *nctrl)
 	struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);
 
 	cancel_work_sync(&ctrl->ioerr_work);
-	cancel_delayed_work_sync(&ctrl->connect_work);
+	cancel_delayed_work_sync(&ctrl->ctrl.connect_work);
 	/*
 	 * kill the association on the link side.  this will block
 	 * waiting for io to terminate
@@ -3300,7 +3299,8 @@ nvme_fc_reconnect_or_delete(struct nvme_fc_ctrl *ctrl, int status)
 		else if (time_after(jiffies + recon_delay, rport->dev_loss_end))
 			recon_delay = rport->dev_loss_end - jiffies;
 
-		queue_delayed_work(nvme_wq, &ctrl->connect_work, recon_delay);
+		queue_delayed_work(nvme_wq, &ctrl->ctrl.connect_work,
+				   recon_delay);
 	} else {
 		if (portptr->port_state == FC_OBJSTATE_ONLINE) {
 			if (status > 0 && (status & NVME_SC_DNR))
@@ -3340,12 +3340,13 @@ nvme_fc_reset_ctrl_work(struct work_struct *work)
 			"to CONNECTING\n", ctrl->cnum);
 
 	if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE) {
-		if (!queue_delayed_work(nvme_wq, &ctrl->connect_work, 0)) {
+		if (!queue_delayed_work(nvme_wq, &ctrl->ctrl.connect_work,
+					0)) {
 			dev_err(ctrl->ctrl.device,
 				"NVME-FC{%d}: failed to schedule connect "
 				"after reset\n", ctrl->cnum);
 		} else {
-			flush_delayed_work(&ctrl->connect_work);
+			flush_delayed_work(&ctrl->ctrl.connect_work);
 		}
 	} else {
 		nvme_fc_reconnect_or_delete(ctrl, -ENOTCONN);
@@ -3373,7 +3374,7 @@ nvme_fc_connect_ctrl_work(struct work_struct *work)
 
 	struct nvme_fc_ctrl *ctrl =
 			container_of(to_delayed_work(work),
-				struct nvme_fc_ctrl, connect_work);
+				struct nvme_fc_ctrl, ctrl.connect_work);
 
 	ret = nvme_fc_create_association(ctrl);
 	if (ret)
@@ -3485,7 +3486,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	kref_init(&ctrl->ref);
 
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work);
-	INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work);
+	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work, nvme_fc_connect_ctrl_work);
 	INIT_WORK(&ctrl->ioerr_work, nvme_fc_ctrl_ioerr_work);
 	spin_lock_init(&ctrl->lock);
 
@@ -3561,14 +3562,14 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 		goto fail_ctrl;
 	}
 
-	if (!queue_delayed_work(nvme_wq, &ctrl->connect_work, 0)) {
+	if (!queue_delayed_work(nvme_wq, &ctrl->ctrl.connect_work, 0)) {
 		dev_err(ctrl->ctrl.device,
 			"NVME-FC{%d}: failed to schedule initial connect\n",
 			ctrl->cnum);
 		goto fail_ctrl;
 	}
 
-	flush_delayed_work(&ctrl->connect_work);
+	flush_delayed_work(&ctrl->ctrl.connect_work);
 
 	dev_info(ctrl->ctrl.device,
 		"NVME-FC{%d}: new ctrl: NQN \"%s\"\n",
@@ -3580,7 +3581,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING);
 	cancel_work_sync(&ctrl->ioerr_work);
 	cancel_work_sync(&ctrl->ctrl.reset_work);
-	cancel_delayed_work_sync(&ctrl->connect_work);
+	cancel_delayed_work_sync(&ctrl->ctrl.connect_work);
 
 	ctrl->ctrl.opts = NULL;
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index ed79a6c7e804..81ca5dd9b7f9 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -343,6 +343,7 @@ struct nvme_ctrl {
 	unsigned long flags;
 #define NVME_CTRL_FAILFAST_EXPIRED	0
 	struct nvmf_ctrl_options *opts;
+	struct delayed_work connect_work;
 
 	struct page *discard_page;
 	unsigned long discard_page_busy;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 0498801542eb..fbfa18a47bd8 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -110,8 +110,6 @@ struct nvme_rdma_ctrl {
 
 	struct nvme_rdma_qe	async_event_sqe;
 
-	struct delayed_work	reconnect_work;
-
 	struct list_head	list;
 
 	struct blk_mq_tag_set	admin_tag_set;
@@ -1078,7 +1076,7 @@ static void nvme_rdma_reconnect_or_remove(struct nvme_rdma_ctrl *ctrl)
 	if (nvmf_should_reconnect(&ctrl->ctrl)) {
 		dev_info(ctrl->ctrl.device, "Reconnecting in %d seconds...\n",
 			ctrl->ctrl.opts->reconnect_delay);
-		queue_delayed_work(nvme_wq, &ctrl->reconnect_work,
+		queue_delayed_work(nvme_wq, &ctrl->ctrl.connect_work,
 				ctrl->ctrl.opts->reconnect_delay * HZ);
 	} else {
 		nvme_delete_ctrl(&ctrl->ctrl);
@@ -1166,7 +1164,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
 {
 	struct nvme_rdma_ctrl *ctrl = container_of(to_delayed_work(work),
-			struct nvme_rdma_ctrl, reconnect_work);
+			struct nvme_rdma_ctrl, ctrl.connect_work);
 
 	++ctrl->ctrl.nr_reconnects;
 
@@ -2230,7 +2228,7 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
 static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 {
 	cancel_work_sync(&ctrl->err_work);
-	cancel_delayed_work_sync(&ctrl->reconnect_work);
+	cancel_delayed_work_sync(&ctrl->ctrl.connect_work);
 
 	nvme_rdma_teardown_io_queues(ctrl, shutdown);
 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
@@ -2358,7 +2356,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 		goto out_free_ctrl;
 	}
 
-	INIT_DELAYED_WORK(&ctrl->reconnect_work,
+	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
 			nvme_rdma_reconnect_ctrl_work);
 	INIT_WORK(&ctrl->err_work, nvme_rdma_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_rdma_reset_ctrl_work);
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 3c1c29dd3020..3ace20e39c86 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -127,7 +127,6 @@ struct nvme_tcp_ctrl {
 	struct nvme_ctrl	ctrl;
 
 	struct work_struct	err_work;
-	struct delayed_work	connect_work;
 	struct nvme_tcp_request async_req;
 	u32			io_queues[HCTX_MAX_TYPES];
 };
@@ -1983,7 +1982,7 @@ static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
 	if (nvmf_should_reconnect(ctrl)) {
 		dev_info(ctrl->device, "Reconnecting in %d seconds...\n",
 			ctrl->opts->reconnect_delay);
-		queue_delayed_work(nvme_wq, &to_tcp_ctrl(ctrl)->connect_work,
+		queue_delayed_work(nvme_wq, &ctrl->connect_work,
 				ctrl->opts->reconnect_delay * HZ);
 	} else {
 		dev_info(ctrl->device, "Removing controller...\n");
@@ -2066,7 +2065,7 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
 static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
 {
 	struct nvme_tcp_ctrl *tcp_ctrl = container_of(to_delayed_work(work),
-			struct nvme_tcp_ctrl, connect_work);
+			struct nvme_tcp_ctrl, ctrl.connect_work);
 	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
 
 	++ctrl->nr_reconnects;
@@ -2113,7 +2112,7 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 {
 	cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
-	cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
+	cancel_delayed_work_sync(&ctrl->connect_work);
 
 	nvme_tcp_teardown_io_queues(ctrl, shutdown);
 	blk_mq_quiesce_queue(ctrl->admin_q);
@@ -2513,7 +2512,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 	ctrl->ctrl.sqsize = opts->queue_size - 1;
 	ctrl->ctrl.kato = opts->kato;
 
-	INIT_DELAYED_WORK(&ctrl->connect_work,
+	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
 			nvme_tcp_reconnect_ctrl_work);
 	INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
-- 
2.18.1




* [PATCH 02/10] nvme-fabrics: introduce nvmf_reconnect_or_remove API
  2021-10-20 10:38 [PATCH v2 0/10] Centralize common fabrics code to core drivers Max Gurtovoy
  2021-10-20 10:38 ` [PATCH 01/10] nvme: add connect_work attribute to nvme ctrl Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-02 23:38   ` James Smart
  2021-10-20 10:38 ` [PATCH 03/10] nvme: add err_work attribute to nvme ctrl Max Gurtovoy
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

This logic is duplicated today for RDMA and TCP controllers. Move it to
the fabrics driver and export it as a new API.

Also update the RDMA/TCP transport drivers to use this API and remove
the duplicated code.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fabrics.c | 21 +++++++++++++++++++++
 drivers/nvme/host/fabrics.h |  1 +
 drivers/nvme/host/rdma.c    | 25 +++----------------------
 drivers/nvme/host/tcp.c     | 26 +++-----------------------
 4 files changed, 28 insertions(+), 45 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 668c6bb7a567..4a1ef67c6fb3 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -472,6 +472,27 @@ bool nvmf_should_reconnect(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmf_should_reconnect);
 
+void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
+{
+	/* If we are resetting/deleting then do nothing */
+	if (ctrl->state != NVME_CTRL_CONNECTING) {
+		WARN_ON_ONCE(ctrl->state == NVME_CTRL_NEW ||
+			ctrl->state == NVME_CTRL_LIVE);
+		return;
+	}
+
+	if (nvmf_should_reconnect(ctrl)) {
+		dev_info(ctrl->device, "Reconnecting in %d seconds...\n",
+			ctrl->opts->reconnect_delay);
+		queue_delayed_work(nvme_wq, &ctrl->connect_work,
+				ctrl->opts->reconnect_delay * HZ);
+	} else {
+		dev_info(ctrl->device, "Removing controller...\n");
+		nvme_delete_ctrl(ctrl);
+	}
+}
+EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
+
 /**
  * nvmf_register_transport() - NVMe Fabrics Library registration function.
  * @ops:	Transport ops instance to be registered to the
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index a146cb903869..de213ab26977 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -188,6 +188,7 @@ void nvmf_unregister_transport(struct nvmf_transport_ops *ops);
 void nvmf_free_options(struct nvmf_ctrl_options *opts);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
+void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl);
 bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
 		struct nvmf_ctrl_options *opts);
 
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index fbfa18a47bd8..f31a56d8fd73 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1064,25 +1064,6 @@ static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
 	kfree(ctrl);
 }
 
-static void nvme_rdma_reconnect_or_remove(struct nvme_rdma_ctrl *ctrl)
-{
-	/* If we are resetting/deleting then do nothing */
-	if (ctrl->ctrl.state != NVME_CTRL_CONNECTING) {
-		WARN_ON_ONCE(ctrl->ctrl.state == NVME_CTRL_NEW ||
-			ctrl->ctrl.state == NVME_CTRL_LIVE);
-		return;
-	}
-
-	if (nvmf_should_reconnect(&ctrl->ctrl)) {
-		dev_info(ctrl->ctrl.device, "Reconnecting in %d seconds...\n",
-			ctrl->ctrl.opts->reconnect_delay);
-		queue_delayed_work(nvme_wq, &ctrl->ctrl.connect_work,
-				ctrl->ctrl.opts->reconnect_delay * HZ);
-	} else {
-		nvme_delete_ctrl(&ctrl->ctrl);
-	}
-}
-
 static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 {
 	int ret;
@@ -1181,7 +1162,7 @@ static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
 requeue:
 	dev_info(ctrl->ctrl.device, "Failed reconnect attempt %d\n",
 			ctrl->ctrl.nr_reconnects);
-	nvme_rdma_reconnect_or_remove(ctrl);
+	nvmf_reconnect_or_remove(&ctrl->ctrl);
 }
 
 static void nvme_rdma_error_recovery_work(struct work_struct *work)
@@ -1202,7 +1183,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 		return;
 	}
 
-	nvme_rdma_reconnect_or_remove(ctrl);
+	nvmf_reconnect_or_remove(&ctrl->ctrl);
 }
 
 static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
@@ -2265,7 +2246,7 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
 
 out_fail:
 	++ctrl->ctrl.nr_reconnects;
-	nvme_rdma_reconnect_or_remove(ctrl);
+	nvmf_reconnect_or_remove(&ctrl->ctrl);
 }
 
 static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 3ace20e39c86..530ff76d4ac9 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1970,26 +1970,6 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
 	nvme_tcp_destroy_io_queues(ctrl, remove);
 }
 
-static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
-{
-	/* If we are resetting/deleting then do nothing */
-	if (ctrl->state != NVME_CTRL_CONNECTING) {
-		WARN_ON_ONCE(ctrl->state == NVME_CTRL_NEW ||
-			ctrl->state == NVME_CTRL_LIVE);
-		return;
-	}
-
-	if (nvmf_should_reconnect(ctrl)) {
-		dev_info(ctrl->device, "Reconnecting in %d seconds...\n",
-			ctrl->opts->reconnect_delay);
-		queue_delayed_work(nvme_wq, &ctrl->connect_work,
-				ctrl->opts->reconnect_delay * HZ);
-	} else {
-		dev_info(ctrl->device, "Removing controller...\n");
-		nvme_delete_ctrl(ctrl);
-	}
-}
-
 static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
 {
 	struct nvmf_ctrl_options *opts = ctrl->opts;
@@ -2083,7 +2063,7 @@ static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
 requeue:
 	dev_info(ctrl->device, "Failed reconnect attempt %d\n",
 			ctrl->nr_reconnects);
-	nvme_tcp_reconnect_or_remove(ctrl);
+	nvmf_reconnect_or_remove(ctrl);
 }
 
 static void nvme_tcp_error_recovery_work(struct work_struct *work)
@@ -2106,7 +2086,7 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 		return;
 	}
 
-	nvme_tcp_reconnect_or_remove(ctrl);
+	nvmf_reconnect_or_remove(ctrl);
 }
 
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
@@ -2150,7 +2130,7 @@ static void nvme_reset_ctrl_work(struct work_struct *work)
 
 out_fail:
 	++ctrl->nr_reconnects;
-	nvme_tcp_reconnect_or_remove(ctrl);
+	nvmf_reconnect_or_remove(ctrl);
 }
 
 static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl)
-- 
2.18.1




* [PATCH 03/10] nvme: add err_work attribute to nvme ctrl
  2021-10-20 10:38 [PATCH v2 0/10] Centralize common fabrics code to core drivers Max Gurtovoy
  2021-10-20 10:38 ` [PATCH 01/10] nvme: add connect_work attribute to nvme ctrl Max Gurtovoy
  2021-10-20 10:38 ` [PATCH 02/10] nvme-fabrics: introduce nvmf_reconnect_or_remove API Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-10-20 11:05   ` Hannes Reinecke
  2021-11-02 23:53   ` James Smart
  2021-10-20 10:38 ` [PATCH 04/10] nvme-fabrics: introduce nvmf_error_recovery API Max Gurtovoy
                   ` (6 subsequent siblings)
  9 siblings, 2 replies; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

The err_work work item is duplicated in the RDMA/TCP fabric
controllers. Move it to common code.

The FC driver might use this attribute in the future instead of its
local ioerr_work attribute.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/nvme.h |  1 +
 drivers/nvme/host/rdma.c |  9 ++++-----
 drivers/nvme/host/tcp.c  | 12 +++++-------
 3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 81ca5dd9b7f9..f9e1ce93d61d 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -344,6 +344,7 @@ struct nvme_ctrl {
 #define NVME_CTRL_FAILFAST_EXPIRED	0
 	struct nvmf_ctrl_options *opts;
 	struct delayed_work connect_work;
+	struct work_struct err_work;
 
 	struct page *discard_page;
 	unsigned long discard_page_busy;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index f31a56d8fd73..da7f61a5fac4 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -106,7 +106,6 @@ struct nvme_rdma_ctrl {
 
 	/* other member variables */
 	struct blk_mq_tag_set	tag_set;
-	struct work_struct	err_work;
 
 	struct nvme_rdma_qe	async_event_sqe;
 
@@ -1168,7 +1167,7 @@ static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
 static void nvme_rdma_error_recovery_work(struct work_struct *work)
 {
 	struct nvme_rdma_ctrl *ctrl = container_of(work,
-			struct nvme_rdma_ctrl, err_work);
+			struct nvme_rdma_ctrl, ctrl.err_work);
 
 	nvme_stop_keep_alive(&ctrl->ctrl);
 	nvme_rdma_teardown_io_queues(ctrl, false);
@@ -1192,7 +1191,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
 		return;
 
 	dev_warn(ctrl->ctrl.device, "starting error recovery\n");
-	queue_work(nvme_reset_wq, &ctrl->err_work);
+	queue_work(nvme_reset_wq, &ctrl->ctrl.err_work);
 }
 
 static void nvme_rdma_end_request(struct nvme_rdma_request *req)
@@ -2208,7 +2207,7 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
 
 static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 {
-	cancel_work_sync(&ctrl->err_work);
+	cancel_work_sync(&ctrl->ctrl.err_work);
 	cancel_delayed_work_sync(&ctrl->ctrl.connect_work);
 
 	nvme_rdma_teardown_io_queues(ctrl, shutdown);
@@ -2339,7 +2338,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 
 	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
 			nvme_rdma_reconnect_ctrl_work);
-	INIT_WORK(&ctrl->err_work, nvme_rdma_error_recovery_work);
+	INIT_WORK(&ctrl->ctrl.err_work, nvme_rdma_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_rdma_reset_ctrl_work);
 
 	ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues +
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 530ff76d4ac9..07a9cc4f2274 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -126,7 +126,6 @@ struct nvme_tcp_ctrl {
 	struct sockaddr_storage src_addr;
 	struct nvme_ctrl	ctrl;
 
-	struct work_struct	err_work;
 	struct nvme_tcp_request async_req;
 	u32			io_queues[HCTX_MAX_TYPES];
 };
@@ -486,7 +485,7 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
 		return;
 
 	dev_warn(ctrl->device, "starting error recovery\n");
-	queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
+	queue_work(nvme_reset_wq, &ctrl->err_work);
 }
 
 static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
@@ -2068,9 +2067,8 @@ static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
 
 static void nvme_tcp_error_recovery_work(struct work_struct *work)
 {
-	struct nvme_tcp_ctrl *tcp_ctrl = container_of(work,
-				struct nvme_tcp_ctrl, err_work);
-	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
+	struct nvme_ctrl *ctrl = container_of(work,
+				struct nvme_ctrl, err_work);
 
 	nvme_stop_keep_alive(ctrl);
 	nvme_tcp_teardown_io_queues(ctrl, false);
@@ -2091,7 +2089,7 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 {
-	cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
+	cancel_work_sync(&ctrl->err_work);
 	cancel_delayed_work_sync(&ctrl->connect_work);
 
 	nvme_tcp_teardown_io_queues(ctrl, shutdown);
@@ -2494,7 +2492,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 
 	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
 			nvme_tcp_reconnect_ctrl_work);
-	INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work);
+	INIT_WORK(&ctrl->ctrl.err_work, nvme_tcp_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
 
 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
-- 
2.18.1




* [PATCH 04/10] nvme-fabrics: introduce nvmf_error_recovery API
  2021-10-20 10:38 [PATCH v2 0/10] Centralize common fabrics code to core drivers Max Gurtovoy
                   ` (2 preceding siblings ...)
  2021-10-20 10:38 ` [PATCH 03/10] nvme: add err_work attribute to nvme ctrl Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-02 23:59   ` James Smart
  2021-10-20 10:38 ` [PATCH 05/10] nvme/nvme-fabrics: introduce nvmf_error_recovery_work API Max Gurtovoy
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

The error recovery mechanism is duplicated in the RDMA and TCP
transports. Move this logic to common code.

Also update the RDMA/TCP transport drivers to use this API and remove
the duplicated code.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fabrics.c | 10 ++++++++++
 drivers/nvme/host/fabrics.h |  1 +
 drivers/nvme/host/rdma.c    | 25 ++++++++-----------------
 drivers/nvme/host/tcp.c     | 19 +++++--------------
 4 files changed, 24 insertions(+), 31 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 4a1ef67c6fb3..2edd086fa922 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -493,6 +493,16 @@ void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
 
+void nvmf_error_recovery(struct nvme_ctrl *ctrl)
+{
+	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
+		return;
+
+	dev_warn(ctrl->device, "starting error recovery\n");
+	queue_work(nvme_reset_wq, &ctrl->err_work);
+}
+EXPORT_SYMBOL_GPL(nvmf_error_recovery);
+
 /**
  * nvmf_register_transport() - NVMe Fabrics Library registration function.
  * @ops:	Transport ops instance to be registered to the
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index de213ab26977..3d8ec7133fc8 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -189,6 +189,7 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
 void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl);
+void nvmf_error_recovery(struct nvme_ctrl *ctrl);
 bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
 		struct nvmf_ctrl_options *opts);
 
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index da7f61a5fac4..1c57e371af61 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1185,15 +1185,6 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 	nvmf_reconnect_or_remove(&ctrl->ctrl);
 }
 
-static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
-{
-	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
-		return;
-
-	dev_warn(ctrl->ctrl.device, "starting error recovery\n");
-	queue_work(nvme_reset_wq, &ctrl->ctrl.err_work);
-}
-
 static void nvme_rdma_end_request(struct nvme_rdma_request *req)
 {
 	struct request *rq = blk_mq_rq_from_pdu(req);
@@ -1215,7 +1206,7 @@ static void nvme_rdma_wr_error(struct ib_cq *cq, struct ib_wc *wc,
 			     "%s for CQE 0x%p failed with status %s (%d)\n",
 			     op, wc->wr_cqe,
 			     ib_wc_status_msg(wc->status), wc->status);
-	nvme_rdma_error_recovery(ctrl);
+	nvmf_error_recovery(&ctrl->ctrl);
 }
 
 static void nvme_rdma_memreg_done(struct ib_cq *cq, struct ib_wc *wc)
@@ -1715,7 +1706,7 @@ static void nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 		dev_err(queue->ctrl->ctrl.device,
 			"got bad command_id %#x on QP %#x\n",
 			cqe->command_id, queue->qp->qp_num);
-		nvme_rdma_error_recovery(queue->ctrl);
+		nvmf_error_recovery(&queue->ctrl->ctrl);
 		return;
 	}
 	req = blk_mq_rq_to_pdu(rq);
@@ -1729,7 +1720,7 @@ static void nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 			dev_err(queue->ctrl->ctrl.device,
 				"Bogus remote invalidation for rkey %#x\n",
 				req->mr ? req->mr->rkey : 0);
-			nvme_rdma_error_recovery(queue->ctrl);
+			nvmf_error_recovery(&queue->ctrl->ctrl);
 		}
 	} else if (req->mr) {
 		int ret;
@@ -1739,7 +1730,7 @@ static void nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 			dev_err(queue->ctrl->ctrl.device,
 				"Queueing INV WR for rkey %#x failed (%d)\n",
 				req->mr->rkey, ret);
-			nvme_rdma_error_recovery(queue->ctrl);
+			nvmf_error_recovery(&queue->ctrl->ctrl);
 		}
 		/* the local invalidation completion will end the request */
 		return;
@@ -1766,7 +1757,7 @@ static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	if (unlikely(wc->byte_len < len)) {
 		dev_err(queue->ctrl->ctrl.device,
 			"Unexpected nvme completion length(%d)\n", wc->byte_len);
-		nvme_rdma_error_recovery(queue->ctrl);
+		nvmf_error_recovery(&queue->ctrl->ctrl);
 		return;
 	}
 
@@ -1936,7 +1927,7 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
 	case RDMA_CM_EVENT_TIMEWAIT_EXIT:
 		dev_dbg(queue->ctrl->ctrl.device,
 			"disconnect received - connection closed\n");
-		nvme_rdma_error_recovery(queue->ctrl);
+		nvmf_error_recovery(&queue->ctrl->ctrl);
 		break;
 	case RDMA_CM_EVENT_DEVICE_REMOVAL:
 		/* device removal is handled via the ib_client API */
@@ -1944,7 +1935,7 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
 	default:
 		dev_err(queue->ctrl->ctrl.device,
 			"Unexpected RDMA CM event (%d)\n", ev->event);
-		nvme_rdma_error_recovery(queue->ctrl);
+		nvmf_error_recovery(&queue->ctrl->ctrl);
 		break;
 	}
 
@@ -2000,7 +1991,7 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
 	 * LIVE state should trigger the normal error recovery which will
 	 * handle completing this request.
 	 */
-	nvme_rdma_error_recovery(ctrl);
+	nvmf_error_recovery(&ctrl->ctrl);
 	return BLK_EH_RESET_TIMER;
 }
 
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 07a9cc4f2274..fe1f2fec457b 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -479,15 +479,6 @@ static void nvme_tcp_init_recv_ctx(struct nvme_tcp_queue *queue)
 	queue->ddgst_remaining = 0;
 }
 
-static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
-{
-	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
-		return;
-
-	dev_warn(ctrl->device, "starting error recovery\n");
-	queue_work(nvme_reset_wq, &ctrl->err_work);
-}
-
 static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
 		struct nvme_completion *cqe)
 {
@@ -499,7 +490,7 @@ static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
 		dev_err(queue->ctrl->ctrl.device,
 			"got bad cqe.command_id %#x on queue %d\n",
 			cqe->command_id, nvme_tcp_queue_id(queue));
-		nvme_tcp_error_recovery(&queue->ctrl->ctrl);
+		nvmf_error_recovery(&queue->ctrl->ctrl);
 		return -EINVAL;
 	}
 
@@ -541,7 +532,7 @@ static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue,
 		dev_err(queue->ctrl->ctrl.device,
 			"queue %d tag %#x SUCCESS set but not last PDU\n",
 			nvme_tcp_queue_id(queue), rq->tag);
-		nvme_tcp_error_recovery(&queue->ctrl->ctrl);
+		nvmf_error_recovery(&queue->ctrl->ctrl);
 		return -EPROTO;
 	}
 
@@ -850,7 +841,7 @@ static int nvme_tcp_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 			dev_err(queue->ctrl->ctrl.device,
 				"receive failed:  %d\n", result);
 			queue->rd_enabled = false;
-			nvme_tcp_error_recovery(&queue->ctrl->ctrl);
+			nvmf_error_recovery(&queue->ctrl->ctrl);
 			return result;
 		}
 	}
@@ -898,7 +889,7 @@ static void nvme_tcp_state_change(struct sock *sk)
 	case TCP_LAST_ACK:
 	case TCP_FIN_WAIT1:
 	case TCP_FIN_WAIT2:
-		nvme_tcp_error_recovery(&queue->ctrl->ctrl);
+		nvmf_error_recovery(&queue->ctrl->ctrl);
 		break;
 	default:
 		dev_info(queue->ctrl->ctrl.device,
@@ -2252,7 +2243,7 @@ nvme_tcp_timeout(struct request *rq, bool reserved)
 	 * LIVE state should trigger the normal error recovery which will
 	 * handle completing this request.
 	 */
-	nvme_tcp_error_recovery(ctrl);
+	nvmf_error_recovery(ctrl);
 	return BLK_EH_RESET_TIMER;
 }
 
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 05/10] nvme/nvme-fabrics: introduce nvmf_error_recovery_work API
  2021-10-20 10:38 [PATCH v2 0/10] Centrelize common fabrics code to core drivers Max Gurtovoy
                   ` (3 preceding siblings ...)
  2021-10-20 10:38 ` [PATCH 04/10] nvme-fabrics: introduce nvmf_error_recovery API Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-03  0:04   ` James Smart
  2021-10-20 10:38 ` [PATCH 06/10] nvme/nvme-fabrics: introduce nvmf_reconnect_ctrl_work API Max Gurtovoy
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

Error recovery work is duplicated in the RDMA and TCP transports. Move this
logic to common code. For that, introduce two new ctrl ops to tear down the
IO and admin queues.

Also update the RDMA/TCP transport drivers to use this API and remove
the duplicated code.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fabrics.c | 23 +++++++++++
 drivers/nvme/host/fabrics.h |  1 +
 drivers/nvme/host/nvme.h    |  4 ++
 drivers/nvme/host/rdma.c    | 78 +++++++++++++++----------------------
 drivers/nvme/host/tcp.c     | 46 +++++++---------------
 5 files changed, 73 insertions(+), 79 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 2edd086fa922..5a770196eb60 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -493,6 +493,29 @@ void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
 
+void nvmf_error_recovery_work(struct work_struct *work)
+{
+	struct nvme_ctrl *ctrl = container_of(work,
+				struct nvme_ctrl, err_work);
+
+	nvme_stop_keep_alive(ctrl);
+	ctrl->ops->teardown_ctrl_io_queues(ctrl, false);
+	/* unquiesce to fail fast pending requests */
+	nvme_start_queues(ctrl);
+	ctrl->ops->teardown_ctrl_admin_queue(ctrl, false);
+	blk_mq_unquiesce_queue(ctrl->admin_q);
+
+	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+		/* state change failure is ok if we started ctrl delete */
+		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+			     ctrl->state != NVME_CTRL_DELETING_NOIO);
+		return;
+	}
+
+	nvmf_reconnect_or_remove(ctrl);
+}
+EXPORT_SYMBOL_GPL(nvmf_error_recovery_work);
+
 void nvmf_error_recovery(struct nvme_ctrl *ctrl)
 {
 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 3d8ec7133fc8..8655eff74ed0 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -190,6 +190,7 @@ int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
 void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl);
 void nvmf_error_recovery(struct nvme_ctrl *ctrl);
+void nvmf_error_recovery_work(struct work_struct *work);
 bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
 		struct nvmf_ctrl_options *opts);
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f9e1ce93d61d..5cdf2ec45e9a 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -493,6 +493,10 @@ struct nvme_ctrl_ops {
 	void (*submit_async_event)(struct nvme_ctrl *ctrl);
 	void (*delete_ctrl)(struct nvme_ctrl *ctrl);
 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+
+	/* Fabrics only */
+	void (*teardown_ctrl_io_queues)(struct nvme_ctrl *ctrl, bool remove);
+	void (*teardown_ctrl_admin_queue)(struct nvme_ctrl *ctrl, bool remove);
 };
 
 /*
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 1c57e371af61..4e42f1956181 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1019,29 +1019,33 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
 	return ret;
 }
 
-static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
+static void nvme_rdma_teardown_admin_queue(struct nvme_ctrl *nctrl,
 		bool remove)
 {
-	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
-	blk_sync_queue(ctrl->ctrl.admin_q);
+	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
+
+	blk_mq_quiesce_queue(nctrl->admin_q);
+	blk_sync_queue(nctrl->admin_q);
 	nvme_rdma_stop_queue(&ctrl->queues[0]);
-	nvme_cancel_admin_tagset(&ctrl->ctrl);
+	nvme_cancel_admin_tagset(nctrl);
 	if (remove)
-		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+		blk_mq_unquiesce_queue(nctrl->admin_q);
 	nvme_rdma_destroy_admin_queue(ctrl, remove);
 }
 
-static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+static void nvme_rdma_teardown_io_queues(struct nvme_ctrl *nctrl,
 		bool remove)
 {
-	if (ctrl->ctrl.queue_count > 1) {
-		nvme_start_freeze(&ctrl->ctrl);
-		nvme_stop_queues(&ctrl->ctrl);
-		nvme_sync_io_queues(&ctrl->ctrl);
+	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
+
+	if (nctrl->queue_count > 1) {
+		nvme_start_freeze(nctrl);
+		nvme_stop_queues(nctrl);
+		nvme_sync_io_queues(nctrl);
 		nvme_rdma_stop_io_queues(ctrl);
-		nvme_cancel_tagset(&ctrl->ctrl);
+		nvme_cancel_tagset(nctrl);
 		if (remove)
-			nvme_start_queues(&ctrl->ctrl);
+			nvme_start_queues(nctrl);
 		nvme_rdma_destroy_io_queues(ctrl, remove);
 	}
 }
@@ -1164,27 +1168,6 @@ static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
 	nvmf_reconnect_or_remove(&ctrl->ctrl);
 }
 
-static void nvme_rdma_error_recovery_work(struct work_struct *work)
-{
-	struct nvme_rdma_ctrl *ctrl = container_of(work,
-			struct nvme_rdma_ctrl, ctrl.err_work);
-
-	nvme_stop_keep_alive(&ctrl->ctrl);
-	nvme_rdma_teardown_io_queues(ctrl, false);
-	nvme_start_queues(&ctrl->ctrl);
-	nvme_rdma_teardown_admin_queue(ctrl, false);
-	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
-
-	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure is ok if we started ctrl delete */
-		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
-			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
-		return;
-	}
-
-	nvmf_reconnect_or_remove(&ctrl->ctrl);
-}
-
 static void nvme_rdma_end_request(struct nvme_rdma_request *req)
 {
 	struct request *rq = blk_mq_rq_from_pdu(req);
@@ -2201,13 +2184,13 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 	cancel_work_sync(&ctrl->ctrl.err_work);
 	cancel_delayed_work_sync(&ctrl->ctrl.connect_work);
 
-	nvme_rdma_teardown_io_queues(ctrl, shutdown);
+	nvme_rdma_teardown_io_queues(&ctrl->ctrl, shutdown);
 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	if (shutdown)
 		nvme_shutdown_ctrl(&ctrl->ctrl);
 	else
 		nvme_disable_ctrl(&ctrl->ctrl);
-	nvme_rdma_teardown_admin_queue(ctrl, shutdown);
+	nvme_rdma_teardown_admin_queue(&ctrl->ctrl, shutdown);
 }
 
 static void nvme_rdma_delete_ctrl(struct nvme_ctrl *ctrl)
@@ -2240,16 +2223,19 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
 }
 
 static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
-	.name			= "rdma",
-	.module			= THIS_MODULE,
-	.flags			= NVME_F_FABRICS | NVME_F_METADATA_SUPPORTED,
-	.reg_read32		= nvmf_reg_read32,
-	.reg_read64		= nvmf_reg_read64,
-	.reg_write32		= nvmf_reg_write32,
-	.free_ctrl		= nvme_rdma_free_ctrl,
-	.submit_async_event	= nvme_rdma_submit_async_event,
-	.delete_ctrl		= nvme_rdma_delete_ctrl,
-	.get_address		= nvmf_get_address,
+	.name				= "rdma",
+	.module				= THIS_MODULE,
+	.flags				= NVME_F_FABRICS |
+					  NVME_F_METADATA_SUPPORTED,
+	.reg_read32			= nvmf_reg_read32,
+	.reg_read64			= nvmf_reg_read64,
+	.reg_write32			= nvmf_reg_write32,
+	.free_ctrl			= nvme_rdma_free_ctrl,
+	.submit_async_event		= nvme_rdma_submit_async_event,
+	.delete_ctrl			= nvme_rdma_delete_ctrl,
+	.get_address			= nvmf_get_address,
+	.teardown_ctrl_io_queues	= nvme_rdma_teardown_io_queues,
+	.teardown_ctrl_admin_queue	= nvme_rdma_teardown_admin_queue,
 };
 
 /*
@@ -2329,7 +2315,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 
 	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
 			nvme_rdma_reconnect_ctrl_work);
-	INIT_WORK(&ctrl->ctrl.err_work, nvme_rdma_error_recovery_work);
+	INIT_WORK(&ctrl->ctrl.err_work, nvmf_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_rdma_reset_ctrl_work);
 
 	ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues +
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index fe1f2fec457b..679eb3c2b8fd 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2056,28 +2056,6 @@ static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
 	nvmf_reconnect_or_remove(ctrl);
 }
 
-static void nvme_tcp_error_recovery_work(struct work_struct *work)
-{
-	struct nvme_ctrl *ctrl = container_of(work,
-				struct nvme_ctrl, err_work);
-
-	nvme_stop_keep_alive(ctrl);
-	nvme_tcp_teardown_io_queues(ctrl, false);
-	/* unquiesce to fail fast pending requests */
-	nvme_start_queues(ctrl);
-	nvme_tcp_teardown_admin_queue(ctrl, false);
-	blk_mq_unquiesce_queue(ctrl->admin_q);
-
-	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure is ok if we started ctrl delete */
-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
-			     ctrl->state != NVME_CTRL_DELETING_NOIO);
-		return;
-	}
-
-	nvmf_reconnect_or_remove(ctrl);
-}
-
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 {
 	cancel_work_sync(&ctrl->err_work);
@@ -2435,16 +2413,18 @@ static const struct blk_mq_ops nvme_tcp_admin_mq_ops = {
 };
 
 static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = {
-	.name			= "tcp",
-	.module			= THIS_MODULE,
-	.flags			= NVME_F_FABRICS,
-	.reg_read32		= nvmf_reg_read32,
-	.reg_read64		= nvmf_reg_read64,
-	.reg_write32		= nvmf_reg_write32,
-	.free_ctrl		= nvme_tcp_free_ctrl,
-	.submit_async_event	= nvme_tcp_submit_async_event,
-	.delete_ctrl		= nvme_tcp_delete_ctrl,
-	.get_address		= nvmf_get_address,
+	.name				= "tcp",
+	.module				= THIS_MODULE,
+	.flags				= NVME_F_FABRICS,
+	.reg_read32			= nvmf_reg_read32,
+	.reg_read64			= nvmf_reg_read64,
+	.reg_write32			= nvmf_reg_write32,
+	.free_ctrl			= nvme_tcp_free_ctrl,
+	.submit_async_event		= nvme_tcp_submit_async_event,
+	.delete_ctrl			= nvme_tcp_delete_ctrl,
+	.get_address			= nvmf_get_address,
+	.teardown_ctrl_io_queues	= nvme_tcp_teardown_io_queues,
+	.teardown_ctrl_admin_queue	= nvme_tcp_teardown_admin_queue,
 };
 
 static bool
@@ -2483,7 +2463,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 
 	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
 			nvme_tcp_reconnect_ctrl_work);
-	INIT_WORK(&ctrl->ctrl.err_work, nvme_tcp_error_recovery_work);
+	INIT_WORK(&ctrl->ctrl.err_work, nvmf_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
 
 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
-- 
2.18.1


* [PATCH 06/10] nvme/nvme-fabrics: introduce nvmf_reconnect_ctrl_work API
  2021-10-20 10:38 [PATCH v2 0/10] Centrelize common fabrics code to core drivers Max Gurtovoy
                   ` (4 preceding siblings ...)
  2021-10-20 10:38 ` [PATCH 05/10] nvme/nvme-fabrics: introduce nvmf_error_recovery_work API Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-03  0:15   ` James Smart
  2021-10-20 10:38 ` [PATCH 07/10] nvme-fabrics: add nvmf_init_ctrl/nvmf_uninit_ctrl API Max Gurtovoy
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

Reconnect work is duplicated in the RDMA and TCP transports. Move this logic
to common code. For that, introduce a new ctrl op to set up a controller.

Also update the RDMA/TCP transport drivers to use this API and remove
the duplicated code.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fabrics.c | 24 +++++++++++
 drivers/nvme/host/fabrics.h |  1 +
 drivers/nvme/host/nvme.h    |  1 +
 drivers/nvme/host/rdma.c    | 82 ++++++++++++++-----------------------
 drivers/nvme/host/tcp.c     | 28 +------------
 5 files changed, 58 insertions(+), 78 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 5a770196eb60..6a2283e09164 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -526,6 +526,30 @@ void nvmf_error_recovery(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmf_error_recovery);
 
+void nvmf_reconnect_ctrl_work(struct work_struct *work)
+{
+	struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
+			struct nvme_ctrl, connect_work);
+
+	++ctrl->nr_reconnects;
+
+	if (ctrl->ops->setup_ctrl(ctrl, false))
+		goto requeue;
+
+	dev_info(ctrl->device, "Successfully reconnected (%d attempt)\n",
+			ctrl->nr_reconnects);
+
+	ctrl->nr_reconnects = 0;
+
+	return;
+
+requeue:
+	dev_info(ctrl->device, "Failed reconnect attempt %d\n",
+		 ctrl->nr_reconnects);
+	nvmf_reconnect_or_remove(ctrl);
+}
+EXPORT_SYMBOL_GPL(nvmf_reconnect_ctrl_work);
+
 /**
  * nvmf_register_transport() - NVMe Fabrics Library registration function.
  * @ops:	Transport ops instance to be registered to the
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 8655eff74ed0..49c98b69647f 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -191,6 +191,7 @@ bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
 void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl);
 void nvmf_error_recovery(struct nvme_ctrl *ctrl);
 void nvmf_error_recovery_work(struct work_struct *work);
+void nvmf_reconnect_ctrl_work(struct work_struct *work);
 bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
 		struct nvmf_ctrl_options *opts);
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 5cdf2ec45e9a..e137db2760d8 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -497,6 +497,7 @@ struct nvme_ctrl_ops {
 	/* Fabrics only */
 	void (*teardown_ctrl_io_queues)(struct nvme_ctrl *ctrl, bool remove);
 	void (*teardown_ctrl_admin_queue)(struct nvme_ctrl *ctrl, bool remove);
+	int (*setup_ctrl)(struct nvme_ctrl *ctrl, bool new);
 };
 
 /*
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4e42f1956181..9c62f3766f49 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1067,8 +1067,9 @@ static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
 	kfree(ctrl);
 }
 
-static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+static int nvme_rdma_setup_ctrl(struct nvme_ctrl *nctrl, bool new)
 {
+	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
 	int ret;
 	bool changed;
 
@@ -1076,98 +1077,75 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 	if (ret)
 		return ret;
 
-	if (ctrl->ctrl.icdoff) {
+	if (nctrl->icdoff) {
 		ret = -EOPNOTSUPP;
-		dev_err(ctrl->ctrl.device, "icdoff is not supported!\n");
+		dev_err(nctrl->device, "icdoff is not supported!\n");
 		goto destroy_admin;
 	}
 
-	if (!(ctrl->ctrl.sgls & (1 << 2))) {
+	if (!(nctrl->sgls & (1 << 2))) {
 		ret = -EOPNOTSUPP;
-		dev_err(ctrl->ctrl.device,
+		dev_err(nctrl->device,
 			"Mandatory keyed sgls are not supported!\n");
 		goto destroy_admin;
 	}
 
-	if (ctrl->ctrl.opts->queue_size > ctrl->ctrl.sqsize + 1) {
-		dev_warn(ctrl->ctrl.device,
+	if (nctrl->opts->queue_size > nctrl->sqsize + 1) {
+		dev_warn(nctrl->device,
 			"queue_size %zu > ctrl sqsize %u, clamping down\n",
-			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
+			nctrl->opts->queue_size, nctrl->sqsize + 1);
 	}
 
-	if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
-		dev_warn(ctrl->ctrl.device,
+	if (nctrl->sqsize + 1 > nctrl->maxcmd) {
+		dev_warn(nctrl->device,
 			"sqsize %u > ctrl maxcmd %u, clamping down\n",
-			ctrl->ctrl.sqsize + 1, ctrl->ctrl.maxcmd);
-		ctrl->ctrl.sqsize = ctrl->ctrl.maxcmd - 1;
+			nctrl->sqsize + 1, nctrl->maxcmd);
+		nctrl->sqsize = nctrl->maxcmd - 1;
 	}
 
-	if (ctrl->ctrl.sgls & (1 << 20))
+	if (nctrl->sgls & (1 << 20))
 		ctrl->use_inline_data = true;
 
-	if (ctrl->ctrl.queue_count > 1) {
+	if (nctrl->queue_count > 1) {
 		ret = nvme_rdma_configure_io_queues(ctrl, new);
 		if (ret)
 			goto destroy_admin;
 	}
 
-	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+	changed = nvme_change_ctrl_state(nctrl, NVME_CTRL_LIVE);
 	if (!changed) {
 		/*
 		 * state change failure is ok if we started ctrl delete,
 		 * unless we're during creation of a new controller to
 		 * avoid races with teardown flow.
 		 */
-		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
-			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
+		WARN_ON_ONCE(nctrl->state != NVME_CTRL_DELETING &&
+			     nctrl->state != NVME_CTRL_DELETING_NOIO);
 		WARN_ON_ONCE(new);
 		ret = -EINVAL;
 		goto destroy_io;
 	}
 
-	nvme_start_ctrl(&ctrl->ctrl);
+	nvme_start_ctrl(nctrl);
 	return 0;
 
 destroy_io:
-	if (ctrl->ctrl.queue_count > 1) {
-		nvme_stop_queues(&ctrl->ctrl);
-		nvme_sync_io_queues(&ctrl->ctrl);
+	if (nctrl->queue_count > 1) {
+		nvme_stop_queues(nctrl);
+		nvme_sync_io_queues(nctrl);
 		nvme_rdma_stop_io_queues(ctrl);
-		nvme_cancel_tagset(&ctrl->ctrl);
+		nvme_cancel_tagset(nctrl);
 		nvme_rdma_destroy_io_queues(ctrl, new);
 	}
 destroy_admin:
-	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
-	blk_sync_queue(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(nctrl->admin_q);
+	blk_sync_queue(nctrl->admin_q);
 	nvme_rdma_stop_queue(&ctrl->queues[0]);
-	nvme_cancel_admin_tagset(&ctrl->ctrl);
+	nvme_cancel_admin_tagset(nctrl);
 	nvme_rdma_destroy_admin_queue(ctrl, new);
 	return ret;
 }
 
-static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
-{
-	struct nvme_rdma_ctrl *ctrl = container_of(to_delayed_work(work),
-			struct nvme_rdma_ctrl, ctrl.connect_work);
-
-	++ctrl->ctrl.nr_reconnects;
-
-	if (nvme_rdma_setup_ctrl(ctrl, false))
-		goto requeue;
-
-	dev_info(ctrl->ctrl.device, "Successfully reconnected (%d attempts)\n",
-			ctrl->ctrl.nr_reconnects);
-
-	ctrl->ctrl.nr_reconnects = 0;
-
-	return;
-
-requeue:
-	dev_info(ctrl->ctrl.device, "Failed reconnect attempt %d\n",
-			ctrl->ctrl.nr_reconnects);
-	nvmf_reconnect_or_remove(&ctrl->ctrl);
-}
-
 static void nvme_rdma_end_request(struct nvme_rdma_request *req)
 {
 	struct request *rq = blk_mq_rq_from_pdu(req);
@@ -2212,7 +2190,7 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
 		return;
 	}
 
-	if (nvme_rdma_setup_ctrl(ctrl, false))
+	if (nvme_rdma_setup_ctrl(&ctrl->ctrl, false))
 		goto out_fail;
 
 	return;
@@ -2236,6 +2214,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
 	.get_address			= nvmf_get_address,
 	.teardown_ctrl_io_queues	= nvme_rdma_teardown_io_queues,
 	.teardown_ctrl_admin_queue	= nvme_rdma_teardown_admin_queue,
+	.setup_ctrl			= nvme_rdma_setup_ctrl,
 };
 
 /*
@@ -2313,8 +2292,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 		goto out_free_ctrl;
 	}
 
-	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
-			nvme_rdma_reconnect_ctrl_work);
+	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work, nvmf_reconnect_ctrl_work);
 	INIT_WORK(&ctrl->ctrl.err_work, nvmf_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_rdma_reset_ctrl_work);
 
@@ -2337,7 +2315,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING);
 	WARN_ON_ONCE(!changed);
 
-	ret = nvme_rdma_setup_ctrl(ctrl, true);
+	ret = nvme_rdma_setup_ctrl(&ctrl->ctrl, true);
 	if (ret)
 		goto out_uninit_ctrl;
 
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 679eb3c2b8fd..e6e8de2dcc8e 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2032,30 +2032,6 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
 	return ret;
 }
 
-static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
-{
-	struct nvme_tcp_ctrl *tcp_ctrl = container_of(to_delayed_work(work),
-			struct nvme_tcp_ctrl, ctrl.connect_work);
-	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
-
-	++ctrl->nr_reconnects;
-
-	if (nvme_tcp_setup_ctrl(ctrl, false))
-		goto requeue;
-
-	dev_info(ctrl->device, "Successfully reconnected (%d attempt)\n",
-			ctrl->nr_reconnects);
-
-	ctrl->nr_reconnects = 0;
-
-	return;
-
-requeue:
-	dev_info(ctrl->device, "Failed reconnect attempt %d\n",
-			ctrl->nr_reconnects);
-	nvmf_reconnect_or_remove(ctrl);
-}
-
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 {
 	cancel_work_sync(&ctrl->err_work);
@@ -2425,6 +2401,7 @@ static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = {
 	.get_address			= nvmf_get_address,
 	.teardown_ctrl_io_queues	= nvme_tcp_teardown_io_queues,
 	.teardown_ctrl_admin_queue	= nvme_tcp_teardown_admin_queue,
+	.setup_ctrl			= nvme_tcp_setup_ctrl,
 };
 
 static bool
@@ -2461,8 +2438,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 	ctrl->ctrl.sqsize = opts->queue_size - 1;
 	ctrl->ctrl.kato = opts->kato;
 
-	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work,
-			nvme_tcp_reconnect_ctrl_work);
+	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work, nvmf_reconnect_ctrl_work);
 	INIT_WORK(&ctrl->ctrl.err_work, nvmf_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
 
-- 
2.18.1


* [PATCH 07/10] nvme-fabrics: add nvmf_init_ctrl/nvmf_uninit_ctrl API
  2021-10-20 10:38 [PATCH v2 0/10] Centrelize common fabrics code to core drivers Max Gurtovoy
                   ` (5 preceding siblings ...)
  2021-10-20 10:38 ` [PATCH 06/10] nvme/nvme-fabrics: introduce nvmf_reconnect_ctrl_work API Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-03  0:19   ` James Smart
  2021-10-20 10:38 ` [PATCH 08/10] nvme-rdma: update WARN_ON condition during reset Max Gurtovoy
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

Centralize the initialization and un-initialization of fabrics-specific
settings. For now, this is only used by the RDMA and TCP fabric transports.

Also, make the reconnect_work and error_recovery_work handlers static
functions since they are no longer used outside the fabrics driver.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fabrics.c | 20 ++++++++++++++++----
 drivers/nvme/host/fabrics.h |  4 ++--
 drivers/nvme/host/rdma.c    |  7 ++-----
 drivers/nvme/host/tcp.c     |  7 ++-----
 4 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 6a2283e09164..e50f6b32a286 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -493,7 +493,7 @@ void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
 
-void nvmf_error_recovery_work(struct work_struct *work)
+static void nvmf_error_recovery_work(struct work_struct *work)
 {
 	struct nvme_ctrl *ctrl = container_of(work,
 				struct nvme_ctrl, err_work);
@@ -514,7 +514,6 @@ void nvmf_error_recovery_work(struct work_struct *work)
 
 	nvmf_reconnect_or_remove(ctrl);
 }
-EXPORT_SYMBOL_GPL(nvmf_error_recovery_work);
 
 void nvmf_error_recovery(struct nvme_ctrl *ctrl)
 {
@@ -526,7 +525,7 @@ void nvmf_error_recovery(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmf_error_recovery);
 
-void nvmf_reconnect_ctrl_work(struct work_struct *work)
+static void nvmf_reconnect_ctrl_work(struct work_struct *work)
 {
 	struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
 			struct nvme_ctrl, connect_work);
@@ -548,7 +547,20 @@ void nvmf_reconnect_ctrl_work(struct work_struct *work)
 		 ctrl->nr_reconnects);
 	nvmf_reconnect_or_remove(ctrl);
 }
-EXPORT_SYMBOL_GPL(nvmf_reconnect_ctrl_work);
+
+void nvmf_init_ctrl(struct nvme_ctrl *ctrl)
+{
+	INIT_DELAYED_WORK(&ctrl->connect_work, nvmf_reconnect_ctrl_work);
+	INIT_WORK(&ctrl->err_work, nvmf_error_recovery_work);
+}
+EXPORT_SYMBOL_GPL(nvmf_init_ctrl);
+
+void nvmf_uninit_ctrl(struct nvme_ctrl *ctrl)
+{
+	cancel_work_sync(&ctrl->err_work);
+	cancel_delayed_work_sync(&ctrl->connect_work);
+}
+EXPORT_SYMBOL_GPL(nvmf_uninit_ctrl);
 
 /**
  * nvmf_register_transport() - NVMe Fabrics Library registration function.
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 49c98b69647f..06933e7a4ff4 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -190,8 +190,8 @@ int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
 void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl);
 void nvmf_error_recovery(struct nvme_ctrl *ctrl);
-void nvmf_error_recovery_work(struct work_struct *work);
-void nvmf_reconnect_ctrl_work(struct work_struct *work);
+void nvmf_init_ctrl(struct nvme_ctrl *ctrl);
+void nvmf_uninit_ctrl(struct nvme_ctrl *ctrl);
 bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
 		struct nvmf_ctrl_options *opts);
 
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 9c62f3766f49..8e1e8c8c8a0d 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2159,9 +2159,7 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
 
 static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 {
-	cancel_work_sync(&ctrl->ctrl.err_work);
-	cancel_delayed_work_sync(&ctrl->ctrl.connect_work);
-
+	nvmf_uninit_ctrl(&ctrl->ctrl);
 	nvme_rdma_teardown_io_queues(&ctrl->ctrl, shutdown);
 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	if (shutdown)
@@ -2292,8 +2290,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 		goto out_free_ctrl;
 	}
 
-	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work, nvmf_reconnect_ctrl_work);
-	INIT_WORK(&ctrl->ctrl.err_work, nvmf_error_recovery_work);
+	nvmf_init_ctrl(&ctrl->ctrl);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_rdma_reset_ctrl_work);
 
 	ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues +
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index e6e8de2dcc8e..7f50b423388f 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2034,9 +2034,7 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
 
 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
 {
-	cancel_work_sync(&ctrl->err_work);
-	cancel_delayed_work_sync(&ctrl->connect_work);
-
+	nvmf_uninit_ctrl(ctrl);
 	nvme_tcp_teardown_io_queues(ctrl, shutdown);
 	blk_mq_quiesce_queue(ctrl->admin_q);
 	if (shutdown)
@@ -2438,8 +2436,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 	ctrl->ctrl.sqsize = opts->queue_size - 1;
 	ctrl->ctrl.kato = opts->kato;
 
-	INIT_DELAYED_WORK(&ctrl->ctrl.connect_work, nvmf_reconnect_ctrl_work);
-	INIT_WORK(&ctrl->ctrl.err_work, nvmf_error_recovery_work);
+	nvmf_init_ctrl(&ctrl->ctrl);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
 
 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
-- 
2.18.1


* [PATCH 08/10] nvme-rdma: update WARN_ON condition during reset
  2021-10-20 10:38 [PATCH v2 0/10] Centrelize common fabrics code to core drivers Max Gurtovoy
                   ` (6 preceding siblings ...)
  2021-10-20 10:38 ` [PATCH 07/10] nvme-fabrics: add nvmf_init_ctrl/nvmf_uninit_ctrl API Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-10-20 10:38 ` [PATCH 09/10] nvme/nvme-fabrics: move reset ctrl flow to common code Max Gurtovoy
  2021-10-20 10:38 ` [PATCH 10/10] nvme-fabrics: set common attributes during nvmf_init_ctrl Max Gurtovoy
  9 siblings, 0 replies; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

Changing the state to NVME_CTRL_CONNECTING might fail if the ctrl delete
procedure has already started. This is allowed in the NVMe/TCP transport, so
there is no reason to have a different WARN_ON_ONCE condition for NVMe/RDMA.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/rdma.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 8e1e8c8c8a0d..f4eeafee05e5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2183,8 +2183,9 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
 	nvme_rdma_shutdown_ctrl(ctrl, false);
 
 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure should never happen */
-		WARN_ON_ONCE(1);
+		/* state change failure is ok if we started ctrl delete */
+		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
+			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
 		return;
 	}
 
-- 
2.18.1


* [PATCH 09/10] nvme/nvme-fabrics: move reset ctrl flow to common code
  2021-10-20 10:38 [PATCH v2 0/10] Centrelize common fabrics code to core drivers Max Gurtovoy
                   ` (7 preceding siblings ...)
  2021-10-20 10:38 ` [PATCH 08/10] nvme-rdma: update WARN_ON condition during reset Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-03  0:27   ` James Smart
  2021-10-20 10:38 ` [PATCH 10/10] nvme-fabrics: set common attributes during nvmf_init_ctrl Max Gurtovoy
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

Reset work is duplicated in the RDMA and TCP transports. Move this logic
to common code. For that, introduce a new ctrl op to tear down a controller.

Also update the RDMA/TCP transport drivers to use this API and
remove the duplicated code.

Make the nvmf_reconnect_or_remove() function static since it is only used
inside the fabrics driver now.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fabrics.c | 29 +++++++++++++++++++++++--
 drivers/nvme/host/fabrics.h |  1 -
 drivers/nvme/host/nvme.h    |  1 +
 drivers/nvme/host/rdma.c    | 43 ++++++++-----------------------------
 drivers/nvme/host/tcp.c     | 27 +----------------------
 5 files changed, 38 insertions(+), 63 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index e50f6b32a286..e13891619de0 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -472,7 +472,7 @@ bool nvmf_should_reconnect(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmf_should_reconnect);
 
-void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
+static void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
 {
 	/* If we are resetting/deleting then do nothing */
 	if (ctrl->state != NVME_CTRL_CONNECTING) {
@@ -491,7 +491,6 @@ void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
 		nvme_delete_ctrl(ctrl);
 	}
 }
-EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
 
 static void nvmf_error_recovery_work(struct work_struct *work)
 {
@@ -548,10 +547,36 @@ static void nvmf_reconnect_ctrl_work(struct work_struct *work)
 	nvmf_reconnect_or_remove(ctrl);
 }
 
+void nvmf_reset_ctrl_work(struct work_struct *work)
+{
+	struct nvme_ctrl *ctrl =
+		container_of(work, struct nvme_ctrl, reset_work);
+
+	nvme_stop_ctrl(ctrl);
+	ctrl->ops->teardown_ctrl(ctrl, false);
+
+	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+		/* state change failure is ok if we started ctrl delete */
+		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+			     ctrl->state != NVME_CTRL_DELETING_NOIO);
+		return;
+	}
+
+	if (ctrl->ops->setup_ctrl(ctrl, false))
+		goto out_fail;
+
+	return;
+
+out_fail:
+	++ctrl->nr_reconnects;
+	nvmf_reconnect_or_remove(ctrl);
+}
+
 void nvmf_init_ctrl(struct nvme_ctrl *ctrl)
 {
 	INIT_DELAYED_WORK(&ctrl->connect_work, nvmf_reconnect_ctrl_work);
 	INIT_WORK(&ctrl->err_work, nvmf_error_recovery_work);
+	INIT_WORK(&ctrl->reset_work, nvmf_reset_ctrl_work);
 }
 EXPORT_SYMBOL_GPL(nvmf_init_ctrl);
 
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 06933e7a4ff4..ca405ecf90dc 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -188,7 +188,6 @@ void nvmf_unregister_transport(struct nvmf_transport_ops *ops);
 void nvmf_free_options(struct nvmf_ctrl_options *opts);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
-void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl);
 void nvmf_error_recovery(struct nvme_ctrl *ctrl);
 void nvmf_init_ctrl(struct nvme_ctrl *ctrl);
 void nvmf_uninit_ctrl(struct nvme_ctrl *ctrl);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index e137db2760d8..c654da7e45c4 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -498,6 +498,7 @@ struct nvme_ctrl_ops {
 	void (*teardown_ctrl_io_queues)(struct nvme_ctrl *ctrl, bool remove);
 	void (*teardown_ctrl_admin_queue)(struct nvme_ctrl *ctrl, bool remove);
 	int (*setup_ctrl)(struct nvme_ctrl *ctrl, bool new);
+	void (*teardown_ctrl)(struct nvme_ctrl *ctrl, bool shutdown);
 };
 
 /*
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index f4eeafee05e5..c49ad0b3ffdf 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2157,46 +2157,21 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
 	.timeout	= nvme_rdma_timeout,
 };
 
-static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
+static void nvme_rdma_teardown_ctrl(struct nvme_ctrl *nctrl, bool shutdown)
 {
-	nvmf_uninit_ctrl(&ctrl->ctrl);
-	nvme_rdma_teardown_io_queues(&ctrl->ctrl, shutdown);
-	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
+	nvmf_uninit_ctrl(nctrl);
+	nvme_rdma_teardown_io_queues(nctrl, shutdown);
+	blk_mq_quiesce_queue(nctrl->admin_q);
 	if (shutdown)
-		nvme_shutdown_ctrl(&ctrl->ctrl);
+		nvme_shutdown_ctrl(nctrl);
 	else
-		nvme_disable_ctrl(&ctrl->ctrl);
-	nvme_rdma_teardown_admin_queue(&ctrl->ctrl, shutdown);
+		nvme_disable_ctrl(nctrl);
+	nvme_rdma_teardown_admin_queue(nctrl, shutdown);
 }
 
 static void nvme_rdma_delete_ctrl(struct nvme_ctrl *ctrl)
 {
-	nvme_rdma_shutdown_ctrl(to_rdma_ctrl(ctrl), true);
-}
-
-static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
-{
-	struct nvme_rdma_ctrl *ctrl =
-		container_of(work, struct nvme_rdma_ctrl, ctrl.reset_work);
-
-	nvme_stop_ctrl(&ctrl->ctrl);
-	nvme_rdma_shutdown_ctrl(ctrl, false);
-
-	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure is ok if we started ctrl delete */
-		WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
-			     ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
-		return;
-	}
-
-	if (nvme_rdma_setup_ctrl(&ctrl->ctrl, false))
-		goto out_fail;
-
-	return;
-
-out_fail:
-	++ctrl->ctrl.nr_reconnects;
-	nvmf_reconnect_or_remove(&ctrl->ctrl);
+	nvme_rdma_teardown_ctrl(ctrl, true);
 }
 
 static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
@@ -2214,6 +2189,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
 	.teardown_ctrl_io_queues	= nvme_rdma_teardown_io_queues,
 	.teardown_ctrl_admin_queue	= nvme_rdma_teardown_admin_queue,
 	.setup_ctrl			= nvme_rdma_setup_ctrl,
+	.teardown_ctrl			= nvme_rdma_teardown_ctrl,
 };
 
 /*
@@ -2292,7 +2268,6 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 	}
 
 	nvmf_init_ctrl(&ctrl->ctrl);
-	INIT_WORK(&ctrl->ctrl.reset_work, nvme_rdma_reset_ctrl_work);
 
 	ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues +
 				opts->nr_poll_queues + 1;
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 7f50b423388f..e65bdd0db4d5 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2049,31 +2049,6 @@ static void nvme_tcp_delete_ctrl(struct nvme_ctrl *ctrl)
 	nvme_tcp_teardown_ctrl(ctrl, true);
 }
 
-static void nvme_reset_ctrl_work(struct work_struct *work)
-{
-	struct nvme_ctrl *ctrl =
-		container_of(work, struct nvme_ctrl, reset_work);
-
-	nvme_stop_ctrl(ctrl);
-	nvme_tcp_teardown_ctrl(ctrl, false);
-
-	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
-		/* state change failure is ok if we started ctrl delete */
-		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
-			     ctrl->state != NVME_CTRL_DELETING_NOIO);
-		return;
-	}
-
-	if (nvme_tcp_setup_ctrl(ctrl, false))
-		goto out_fail;
-
-	return;
-
-out_fail:
-	++ctrl->nr_reconnects;
-	nvmf_reconnect_or_remove(ctrl);
-}
-
 static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl)
 {
 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
@@ -2400,6 +2375,7 @@ static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = {
 	.teardown_ctrl_io_queues	= nvme_tcp_teardown_io_queues,
 	.teardown_ctrl_admin_queue	= nvme_tcp_teardown_admin_queue,
 	.setup_ctrl			= nvme_tcp_setup_ctrl,
+	.teardown_ctrl			= nvme_tcp_teardown_ctrl,
 };
 
 static bool
@@ -2437,7 +2413,6 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 	ctrl->ctrl.kato = opts->kato;
 
 	nvmf_init_ctrl(&ctrl->ctrl);
-	INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
 
 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
 		opts->trsvcid =
-- 
2.18.1




* [PATCH 10/10] nvme-fabrics: set common attributes during nvmf_init_ctrl
  2021-10-20 10:38 [PATCH v2 0/10] Centralize common fabrics code to core drivers Max Gurtovoy
                   ` (8 preceding siblings ...)
  2021-10-20 10:38 ` [PATCH 09/10] nvme/nvme-fabrics: move reset ctrl flow to common code Max Gurtovoy
@ 2021-10-20 10:38 ` Max Gurtovoy
  2021-11-03  0:30   ` James Smart
  9 siblings, 1 reply; 21+ messages in thread
From: Max Gurtovoy @ 2021-10-20 10:38 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare, jsmart2021, Max Gurtovoy

Move common logic for setting opts, queue_count, sqsize and kato from
RDMA/TCP drivers to the fabrics driver.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/fabrics.c |  8 +++++++-
 drivers/nvme/host/fabrics.h |  2 +-
 drivers/nvme/host/rdma.c    | 10 ++--------
 drivers/nvme/host/tcp.c     |  8 +-------
 4 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index e13891619de0..335fef227baf 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -572,8 +572,14 @@ void nvmf_reset_ctrl_work(struct work_struct *work)
 	nvmf_reconnect_or_remove(ctrl);
 }
 
-void nvmf_init_ctrl(struct nvme_ctrl *ctrl)
+void nvmf_init_ctrl(struct nvme_ctrl *ctrl, struct nvmf_ctrl_options *opts)
 {
+	ctrl->opts = opts;
+	ctrl->queue_count = opts->nr_io_queues + opts->nr_write_queues +
+			    opts->nr_poll_queues + 1;
+	ctrl->sqsize = opts->queue_size - 1;
+	ctrl->kato = opts->kato;
+
 	INIT_DELAYED_WORK(&ctrl->connect_work, nvmf_reconnect_ctrl_work);
 	INIT_WORK(&ctrl->err_work, nvmf_error_recovery_work);
 	INIT_WORK(&ctrl->reset_work, nvmf_reset_ctrl_work);
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index ca405ecf90dc..0a694fd3b01e 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -189,7 +189,7 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
 void nvmf_error_recovery(struct nvme_ctrl *ctrl);
-void nvmf_init_ctrl(struct nvme_ctrl *ctrl);
+void nvmf_init_ctrl(struct nvme_ctrl *ctrl, struct nvmf_ctrl_options *opts);
 void nvmf_uninit_ctrl(struct nvme_ctrl *ctrl);
 bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
 		struct nvmf_ctrl_options *opts);
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index c49ad0b3ffdf..04d741075376 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2231,8 +2231,9 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
 	if (!ctrl)
 		return ERR_PTR(-ENOMEM);
-	ctrl->ctrl.opts = opts;
+
 	INIT_LIST_HEAD(&ctrl->list);
+	nvmf_init_ctrl(&ctrl->ctrl, opts);
 
 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
 		opts->trsvcid =
@@ -2267,13 +2268,6 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 		goto out_free_ctrl;
 	}
 
-	nvmf_init_ctrl(&ctrl->ctrl);
-
-	ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues +
-				opts->nr_poll_queues + 1;
-	ctrl->ctrl.sqsize = opts->queue_size - 1;
-	ctrl->ctrl.kato = opts->kato;
-
 	ret = -ENOMEM;
 	ctrl->queues = kcalloc(ctrl->ctrl.queue_count, sizeof(*ctrl->queues),
 				GFP_KERNEL);
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index e65bdd0db4d5..5b9a876291f4 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2406,13 +2406,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 		return ERR_PTR(-ENOMEM);
 
 	INIT_LIST_HEAD(&ctrl->list);
-	ctrl->ctrl.opts = opts;
-	ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues +
-				opts->nr_poll_queues + 1;
-	ctrl->ctrl.sqsize = opts->queue_size - 1;
-	ctrl->ctrl.kato = opts->kato;
-
-	nvmf_init_ctrl(&ctrl->ctrl);
+	nvmf_init_ctrl(&ctrl->ctrl, opts);
 
 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
 		opts->trsvcid =
-- 
2.18.1




* Re: [PATCH 03/10] nvme: add err_work attribute to nvme ctrl
  2021-10-20 10:38 ` [PATCH 03/10] nvme: add err_work attribute to nvme ctrl Max Gurtovoy
@ 2021-10-20 11:05   ` Hannes Reinecke
  2021-11-02 23:53   ` James Smart
  1 sibling, 0 replies; 21+ messages in thread
From: Hannes Reinecke @ 2021-10-20 11:05 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, jsmart2021

On 10/20/21 12:38 PM, Max Gurtovoy wrote:
> This structure is duplicated for RDMA/TCP fabric controllers. Move it to
> common code.
> 
> FC controllers might use this attribute in the future instead of a local
> ioerr_work attribute.
> 
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/nvme.h |  1 +
>   drivers/nvme/host/rdma.c |  9 ++++-----
>   drivers/nvme/host/tcp.c  | 12 +++++-------
>   3 files changed, 10 insertions(+), 12 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



* Re: [PATCH 01/10] nvme: add connect_work attribute to nvme ctrl
  2021-10-20 10:38 ` [PATCH 01/10] nvme: add connect_work attribute to nvme ctrl Max Gurtovoy
@ 2021-11-02 22:59   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-02 22:59 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> This structure is duplicated for all the fabric controllers. Move it to
> common code.
> 
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fc.c   | 23 ++++++++++++-----------
>   drivers/nvme/host/nvme.h |  1 +
>   drivers/nvme/host/rdma.c | 10 ++++------
>   drivers/nvme/host/tcp.c  |  9 ++++-----
>   4 files changed, 21 insertions(+), 22 deletions(-)

Looks good

Reviewed-by: James Smart <jsmart2021@gmail.com>

-- james



* Re: [PATCH 02/10] nvme-fabrics: introduce nvmf_reconnect_or_remove API
  2021-10-20 10:38 ` [PATCH 02/10] nvme-fabrics: introduce nvmf_reconnect_or_remove API Max Gurtovoy
@ 2021-11-02 23:38   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-02 23:38 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> This logic is duplicated today for RDMA and TCP controllers. Move it to
> the fabrics driver and export it as a new API.
> 
> Also update the RDMA/TCP transport drivers to use this API and remove
> the duplicated code.
> 
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fabrics.c | 21 +++++++++++++++++++++
>   drivers/nvme/host/fabrics.h |  1 +
>   drivers/nvme/host/rdma.c    | 25 +++----------------------
>   drivers/nvme/host/tcp.c     | 26 +++-----------------------
>   4 files changed, 28 insertions(+), 45 deletions(-)
> 
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 668c6bb7a567..4a1ef67c6fb3 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -472,6 +472,27 @@ bool nvmf_should_reconnect(struct nvme_ctrl *ctrl)
>   }
>   EXPORT_SYMBOL_GPL(nvmf_should_reconnect);
>   
> +void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
> +{
> +	/* If we are resetting/deleting then do nothing */
> +	if (ctrl->state != NVME_CTRL_CONNECTING) {
> +		WARN_ON_ONCE(ctrl->state == NVME_CTRL_NEW ||
> +			ctrl->state == NVME_CTRL_LIVE);
> +		return;
> +	}
> +
> +	if (nvmf_should_reconnect(ctrl)) {
> +		dev_info(ctrl->device, "Reconnecting in %d seconds...\n",
> +			ctrl->opts->reconnect_delay);
> +		queue_delayed_work(nvme_wq, &ctrl->connect_work,
> +				ctrl->opts->reconnect_delay * HZ);
> +	} else {
> +		dev_info(ctrl->device, "Removing controller...\n");
> +		nvme_delete_ctrl(ctrl);
> +	}
> +}
> +EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
> +

This won't be sufficient for FC, so it can't use it. I'd have to think
about whether there's a way to restructure or wrap it. But it's not a
great fit.

I do think what FC is doing relative to NVME_SC_DNR should be done in 
rdma/tcp as well.

In other words, this should minimally be:

void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl, int status)
{
         /* If we are resetting/deleting then do nothing */
         if (ctrl->state != NVME_CTRL_CONNECTING) {
                 WARN_ON_ONCE(ctrl->state == NVME_CTRL_NEW ||
                         ctrl->state == NVME_CTRL_LIVE);
                 return;
         }

         if (!(status > 0 && status & NVME_SC_DNR) &&
             nvmf_should_reconnect(ctrl)) {
                 dev_info(ctrl->device, "Reconnecting in %d seconds...\n",
                         ctrl->opts->reconnect_delay);
                 queue_delayed_work(nvme_wq, &ctrl->connect_work,
                                 ctrl->opts->reconnect_delay * HZ);
         } else {
                 dev_info(ctrl->device, "Removing controller...\n");
                 nvme_delete_ctrl(ctrl);
         }
}
EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);

then change the callers to pass the status that caused the reschedule.
It'll either be set to a -Exxx value or to an NVMe status code returned
by one of the core routines during controller init. This allows an
uncorrectable failure during controller init to just fail w/o
rescheduling.


...
> @@ -1181,7 +1162,7 @@ static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
>   requeue:
>   	dev_info(ctrl->ctrl.device, "Failed reconnect attempt %d\n",
>   			ctrl->ctrl.nr_reconnects);
> -	nvme_rdma_reconnect_or_remove(ctrl);
> +	nvmf_reconnect_or_remove(&ctrl->ctrl);

This would become:

@@ -2,10 +2,12 @@ static void nvme_rdma_reconnect_ctrl_wor
  {
  	struct nvme_rdma_ctrl *ctrl = container_of(to_delayed_work(work),
  			struct nvme_rdma_ctrl, reconnect_work);
+	int ret;

  	++ctrl->ctrl.nr_reconnects;

-	if (nvme_rdma_setup_ctrl(ctrl, false))
+	ret = nvme_rdma_setup_ctrl(ctrl, false);
+	if (ret)
  		goto requeue;

  	dev_info(ctrl->ctrl.device, "Successfully reconnected (%d attempts)\n",
@@ -18,5 +20,5 @@ static void nvme_rdma_reconnect_ctrl_wor
  requeue:
  	dev_info(ctrl->ctrl.device, "Failed reconnect attempt %d\n",
  			ctrl->ctrl.nr_reconnects);
-	nvme_rdma_reconnect_or_remove(ctrl);
+	nvme_rdma_reconnect_or_remove(ctrl, ret);
  }


>   }
>   
>   static void nvme_rdma_error_recovery_work(struct work_struct *work)
> @@ -1202,7 +1183,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>   		return;
>   	}
>   
> -	nvme_rdma_reconnect_or_remove(ctrl);
> +	nvmf_reconnect_or_remove(&ctrl->ctrl);
>   }

@@ -16,5 +16,5 @@ static void nvme_rdma_error_recovery_wor
  		return;
  	}

-	nvme_rdma_reconnect_or_remove(ctrl);
+	nvme_rdma_reconnect_or_remove(ctrl, 0);
  }


>   
>   static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
> @@ -2265,7 +2246,7 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
>   
>   out_fail:
>   	++ctrl->ctrl.nr_reconnects;
> -	nvme_rdma_reconnect_or_remove(ctrl);
> +	nvmf_reconnect_or_remove(&ctrl->ctrl);
>   }

@@ -2,6 +2,7 @@ static void nvme_rdma_reset_ctrl_work(st
  {
  	struct nvme_rdma_ctrl *ctrl =
  		container_of(work, struct nvme_rdma_ctrl, ctrl.reset_work);
+	int ret;

  	nvme_stop_ctrl(&ctrl->ctrl);
  	nvme_rdma_shutdown_ctrl(ctrl, false);
@@ -12,12 +13,13 @@ static void nvme_rdma_reset_ctrl_work(st
  		return;
  	}

-	if (nvme_rdma_setup_ctrl(ctrl, false))
+	ret = nvme_rdma_setup_ctrl(ctrl, false);
+	if (ret)
  		goto out_fail;

  	return;

  out_fail:
  	++ctrl->ctrl.nr_reconnects;
-	nvme_rdma_reconnect_or_remove(ctrl);
+	nvme_rdma_reconnect_or_remove(ctrl, ret);
  }


And similar mods to tcp.

-- james





* Re: [PATCH 03/10] nvme: add err_work attribute to nvme ctrl
  2021-10-20 10:38 ` [PATCH 03/10] nvme: add err_work attribute to nvme ctrl Max Gurtovoy
  2021-10-20 11:05   ` Hannes Reinecke
@ 2021-11-02 23:53   ` James Smart
  1 sibling, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-02 23:53 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> This structure is duplicated for RDMA/TCP fabric controllers. Move it to
> common code.
> 
> FC controllers might use this attribute in the future instead of a local
> ioerr_work attribute.
> 
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/nvme.h |  1 +
>   drivers/nvme/host/rdma.c |  9 ++++-----
>   drivers/nvme/host/tcp.c  | 12 +++++-------
>   3 files changed, 10 insertions(+), 12 deletions(-)
> 

This looks fine for commonizing rdma and tcp.

However, when I look at the calling sequences, such as
rdma_error_recovery_work(), I do think rdma/tcp should look a lot like
fc, with error recovery resulting in a call to nvme_reset_ctrl(), as
that's what it really is. That gets rid of yet another independent copy
of stopping keep-alive, queue teardown, queue state changes, etc. Just
use the reset routine.

-- james





* Re: [PATCH 04/10] nvme-fabrics: introduce nvmf_error_recovery API
  2021-10-20 10:38 ` [PATCH 04/10] nvme-fabrics: introduce nvmf_error_recovery API Max Gurtovoy
@ 2021-11-02 23:59   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-02 23:59 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> Error recovery mechanism is duplicated in RDMA and TCP transports. Move
> this logic to common code.
> 
> Also update the RDMA/TCP transport drivers to use this API and remove
> the duplicated code.
> 
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fabrics.c | 10 ++++++++++
>   drivers/nvme/host/fabrics.h |  1 +
>   drivers/nvme/host/rdma.c    | 25 ++++++++-----------------
>   drivers/nvme/host/tcp.c     | 19 +++++--------------
>   4 files changed, 24 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 4a1ef67c6fb3..2edd086fa922 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -493,6 +493,16 @@ void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
>   }
>   EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
>   
> +void nvmf_error_recovery(struct nvme_ctrl *ctrl)
> +{
> +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> +		return;
> +
> +	dev_warn(ctrl->device, "starting error recovery\n");
> +	queue_work(nvme_reset_wq, &ctrl->err_work);
> +}
> +EXPORT_SYMBOL_GPL(nvmf_error_recovery);
> +

I don't disagree with this - but my comments on patch 3 relate to this.
It should be making a call to nvme_reset_ctrl(); there's no need to
have another place where the state is moved to RESETTING.

-- james




* Re: [PATCH 05/10] nvme/nvme-fabrics: introduce nvmf_error_recovery_work API
  2021-10-20 10:38 ` [PATCH 05/10] nvme/nvme-fabrics: introduce nvmf_error_recovery_work API Max Gurtovoy
@ 2021-11-03  0:04   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-03  0:04 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> Error recovery work is duplicated in RDMA and TCP transports. Move this
> logic to common code. For that, introduce 2 new ctrl ops to teardown IO
> and admin queue.
> 
> Also update the RDMA/TCP transport drivers to use this API and remove
> the duplicated code.
> 
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fabrics.c | 23 +++++++++++
>   drivers/nvme/host/fabrics.h |  1 +
>   drivers/nvme/host/nvme.h    |  4 ++
>   drivers/nvme/host/rdma.c    | 78 +++++++++++++++----------------------
>   drivers/nvme/host/tcp.c     | 46 +++++++---------------
>   5 files changed, 73 insertions(+), 79 deletions(-)
> 
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 2edd086fa922..5a770196eb60 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -493,6 +493,29 @@ void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
>   }
>   EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
>   
> +void nvmf_error_recovery_work(struct work_struct *work)
> +{
> +	struct nvme_ctrl *ctrl = container_of(work,
> +				struct nvme_ctrl, err_work);
> +
> +	nvme_stop_keep_alive(ctrl);
> +	ctrl->ops->teardown_ctrl_io_queues(ctrl, false);
> +	/* unquiesce to fail fast pending requests */
> +	nvme_start_queues(ctrl);
> +	ctrl->ops->teardown_ctrl_admin_queue(ctrl, false);
> +	blk_mq_unquiesce_queue(ctrl->admin_q);
> +
> +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
> +		/* state change failure is ok if we started ctrl delete */
> +		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
> +			     ctrl->state != NVME_CTRL_DELETING_NOIO);
> +		return;
> +	}
> +
> +	nvmf_reconnect_or_remove(ctrl);
> +}
> +EXPORT_SYMBOL_GPL(nvmf_error_recovery_work);
> +

Same comments as before: just use nvme_reset_ctrl().

-- james




* Re: [PATCH 06/10] nvme/nvme-fabrics: introduce nvmf_reconnect_ctrl_work API
  2021-10-20 10:38 ` [PATCH 06/10] nvme/nvme-fabrics: introduce nvmf_reconnect_ctrl_work API Max Gurtovoy
@ 2021-11-03  0:15   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-03  0:15 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> Reconnect work is duplicated in RDMA and TCP transports. Move this logic
> to common code. For that, introduce a new ctrl op to setup a ctrl.
> 
> Also update the RDMA/TCP transport drivers to use this API and remove
> the duplicated code.
> 
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fabrics.c | 24 +++++++++++
>   drivers/nvme/host/fabrics.h |  1 +
>   drivers/nvme/host/nvme.h    |  1 +
>   drivers/nvme/host/rdma.c    | 82 ++++++++++++++-----------------------
>   drivers/nvme/host/tcp.c     | 28 +------------
>   5 files changed, 58 insertions(+), 78 deletions(-)
> 
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 5a770196eb60..6a2283e09164 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -526,6 +526,30 @@ void nvmf_error_recovery(struct nvme_ctrl *ctrl)
>   }
>   EXPORT_SYMBOL_GPL(nvmf_error_recovery);
>   
> +void nvmf_reconnect_ctrl_work(struct work_struct *work)
> +{
> +	struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
> +			struct nvme_ctrl, connect_work);
> +
> +	++ctrl->nr_reconnects;
> +
> +	if (ctrl->ops->setup_ctrl(ctrl, false))
> +		goto requeue;
> +
> +	dev_info(ctrl->device, "Successfully reconnected (%d attempt)\n",
> +			ctrl->nr_reconnects);
> +
> +	ctrl->nr_reconnects = 0;
> +
> +	return;
> +
> +requeue:
> +	dev_info(ctrl->device, "Failed reconnect attempt %d\n",
> +		 ctrl->nr_reconnects);
> +	nvmf_reconnect_or_remove(ctrl);
> +}
> +EXPORT_SYMBOL_GPL(nvmf_reconnect_ctrl_work);
> +
>   /**
>    * nvmf_register_transport() - NVMe Fabrics Library registration function.
>    * @ops:	Transport ops instance to be registered to the
> diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
> index 8655eff74ed0..49c98b69647f 100644
> --- a/drivers/nvme/host/fabrics.h
> +++ b/drivers/nvme/host/fabrics.h
> @@ -191,6 +191,7 @@ bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
>   void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl);
>   void nvmf_error_recovery(struct nvme_ctrl *ctrl);
>   void nvmf_error_recovery_work(struct work_struct *work);
> +void nvmf_reconnect_ctrl_work(struct work_struct *work);
>   bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
>   		struct nvmf_ctrl_options *opts);
>   
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 5cdf2ec45e9a..e137db2760d8 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -497,6 +497,7 @@ struct nvme_ctrl_ops {
>   	/* Fabrics only */
>   	void (*teardown_ctrl_io_queues)(struct nvme_ctrl *ctrl, bool remove);
>   	void (*teardown_ctrl_admin_queue)(struct nvme_ctrl *ctrl, bool remove);
> +	int (*setup_ctrl)(struct nvme_ctrl *ctrl, bool new);
>   };
>   
>   /*


Add a routine to call to check for connectivity and we're pretty close
to something FC could adapt to.

But I'd really like to get rid of the "new" argument that gets passed
around. We should be able to have some kind of state on the fabric
controller that lets us derive whether the controller is new or not
(fc did this - see ioq_live).

-- james





* Re: [PATCH 07/10] nvme-fabrics: add nvmf_init_ctrl/nvmf_uninit_ctrl API
  2021-10-20 10:38 ` [PATCH 07/10] nvme-fabrics: add nvmf_init_ctrl/nvmf_uninit_ctrl API Max Gurtovoy
@ 2021-11-03  0:19   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-03  0:19 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> Centralize the initialization and un-initialization of fabrics-specific
> settings. For now, only used by RDMA and TCP fabric transports.
> 
> Also, convert the reconnect_work and error_recovery_work to be static
> functions since they are not used outside the fabrics driver anymore.
> 
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fabrics.c | 20 ++++++++++++++++----
>   drivers/nvme/host/fabrics.h |  4 ++--
>   drivers/nvme/host/rdma.c    |  7 ++-----
>   drivers/nvme/host/tcp.c     |  7 ++-----
>   4 files changed, 22 insertions(+), 16 deletions(-)
> 

Looks fine...

Reviewed-by: James Smart <jsmart2021@gmail.com>

-- james




* Re: [PATCH 09/10] nvme/nvme-fabrics: move reset ctrl flow to common code
  2021-10-20 10:38 ` [PATCH 09/10] nvme/nvme-fabrics: move reset ctrl flow to common code Max Gurtovoy
@ 2021-11-03  0:27   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-03  0:27 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> Reset work is duplicated in RDMA and TCP transports. Move this logic
> to common code. For that, introduce a new ctrl op to teardown a ctrl.
> 
> Also update the RDMA/TCP transport drivers to use this API and
> remove the duplicated code.
> 
> Make nvmf_reconnect_or_remove function static since it's only used
> inside the fabrics driver now.
> 
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fabrics.c | 29 +++++++++++++++++++++++--
>   drivers/nvme/host/fabrics.h |  1 -
>   drivers/nvme/host/nvme.h    |  1 +
>   drivers/nvme/host/rdma.c    | 43 ++++++++-----------------------------
>   drivers/nvme/host/tcp.c     | 27 +----------------------
>   5 files changed, 38 insertions(+), 63 deletions(-)
> 
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index e50f6b32a286..e13891619de0 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -472,7 +472,7 @@ bool nvmf_should_reconnect(struct nvme_ctrl *ctrl)
>   }
>   EXPORT_SYMBOL_GPL(nvmf_should_reconnect);
>   
> -void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
> +static void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
>   {
>   	/* If we are resetting/deleting then do nothing */
>   	if (ctrl->state != NVME_CTRL_CONNECTING) {
> @@ -491,7 +491,6 @@ void nvmf_reconnect_or_remove(struct nvme_ctrl *ctrl)
>   		nvme_delete_ctrl(ctrl);
>   	}
>   }
> -EXPORT_SYMBOL_GPL(nvmf_reconnect_or_remove);
>   
>   static void nvmf_error_recovery_work(struct work_struct *work)
>   {
> @@ -548,10 +547,36 @@ static void nvmf_reconnect_ctrl_work(struct work_struct *work)
>   	nvmf_reconnect_or_remove(ctrl);
>   }
>   
> +void nvmf_reset_ctrl_work(struct work_struct *work)
> +{
> +	struct nvme_ctrl *ctrl =
> +		container_of(work, struct nvme_ctrl, reset_work);
> +
> +	nvme_stop_ctrl(ctrl);
> +	ctrl->ops->teardown_ctrl(ctrl, false);
> +
> +	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
> +		/* state change failure is ok if we started ctrl delete */
> +		WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
> +			     ctrl->state != NVME_CTRL_DELETING_NOIO);
> +		return;
> +	}
> +
> +	if (ctrl->ops->setup_ctrl(ctrl, false))
> +		goto out_fail;
> +
> +	return;
> +
> +out_fail:
> +	++ctrl->nr_reconnects;
> +	nvmf_reconnect_or_remove(ctrl);
> +}
> +

This is close to what FC could use - it just needs a callout to check for 
connectivity and schedule a connect, rather than attempting to connect to 
something that isn't there.

The rest is fine for rdma/tcp. We would need something different to 
commonize this path for all 3 transports - something to do later.
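
For illustration, the connectivity callout described above could slot into
the common reset flow roughly like this. This is a self-contained userspace
model, not kernel code; the `connectivity_check` hook and all names here are
hypothetical stand-ins for what an FC-capable variant of
`nvmf_reset_ctrl_work()` might look like:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative ctrl states, echoing the nvme_ctrl state names */
enum ctrl_state { LIVE, RESETTING, CONNECTING, DELETING };

struct model_ctrl {
	enum ctrl_state state;
	int nr_reconnects;
	/*
	 * Hypothetical transport callout: FC would verify remote-port
	 * connectivity here; RDMA/TCP could leave it NULL and keep the
	 * current behavior of attempting the connect immediately.
	 */
	bool (*connectivity_check)(struct model_ctrl *ctrl);
	bool (*setup_ctrl)(struct model_ctrl *ctrl);
	void (*teardown_ctrl)(struct model_ctrl *ctrl);
};

/* Stand-in for nvmf_reconnect_or_remove(): just count the failure;
 * the real code would schedule connect_work or delete the ctrl. */
static void model_reconnect_or_remove(struct model_ctrl *ctrl)
{
	ctrl->nr_reconnects++;
}

/* Sketch of the reset work with the extra connectivity callout */
static void model_reset_ctrl_work(struct model_ctrl *ctrl)
{
	ctrl->teardown_ctrl(ctrl);
	ctrl->state = CONNECTING;

	/* FC-style hook: if the target is unreachable, defer the
	 * connect instead of attempting it and failing */
	if (ctrl->connectivity_check && !ctrl->connectivity_check(ctrl)) {
		model_reconnect_or_remove(ctrl);
		return;
	}
	if (!ctrl->setup_ctrl(ctrl)) {
		model_reconnect_or_remove(ctrl);
		return;
	}
	ctrl->state = LIVE;
}

/* Trivial demo callbacks */
static void demo_teardown(struct model_ctrl *ctrl) { (void)ctrl; }
static bool demo_setup(struct model_ctrl *ctrl) { (void)ctrl; return true; }
static bool demo_link_down(struct model_ctrl *ctrl) { (void)ctrl; return false; }
```

An RDMA/TCP-style ctrl (no callout) goes LIVE on a successful setup, while
an FC-style ctrl whose `connectivity_check` reports the port as gone skips
the connect attempt and falls straight into the reconnect/remove path.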

-- james

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 10/10] nvme-fabrics: set common attributes during nvmf_init_ctrl
  2021-10-20 10:38 ` [PATCH 10/10] nvme-fabrics: set common attributes during nvmf_init_ctrl Max Gurtovoy
@ 2021-11-03  0:30   ` James Smart
  0 siblings, 0 replies; 21+ messages in thread
From: James Smart @ 2021-11-03  0:30 UTC (permalink / raw)
  To: Max Gurtovoy, linux-nvme, hch, kbusch, sagi
  Cc: chaitanyak, israelr, oren, hare

On 10/20/2021 3:38 AM, Max Gurtovoy wrote:
> Move common logic for setting opts, queue_count, sqsize and kato from
> RDMA/TCP drivers to the fabrics driver.
> 
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/host/fabrics.c |  8 +++++++-
>   drivers/nvme/host/fabrics.h |  2 +-
>   drivers/nvme/host/rdma.c    | 10 ++--------
>   drivers/nvme/host/tcp.c     |  8 +-------
>   4 files changed, 11 insertions(+), 17 deletions(-)
> 

Looks reasonable and something for FC to consider.
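
As a rough model of what that common setup looks like after the move -
the field names follow the commit message (opts, queue_count, sqsize,
kato), but the structs and the exact arithmetic here are illustrative,
not the kernel's:

```c
/* Illustrative stand-ins for nvmf_ctrl_options and nvme_ctrl */
struct model_opts {
	int nr_io_queues;
	int queue_size;
	unsigned int kato;	/* keep-alive timeout */
};

struct fabrics_ctrl {
	struct model_opts *opts;
	int queue_count;
	int sqsize;
	unsigned int kato;
};

/*
 * Sketch of the common initialization hoisted out of the RDMA/TCP
 * drivers into nvmf_init_ctrl(). The +1 accounts for the admin
 * queue, and sqsize is zero-based as in the NVMe spec.
 */
static void model_init_ctrl(struct fabrics_ctrl *ctrl,
			    struct model_opts *opts)
{
	ctrl->opts = opts;
	ctrl->queue_count = opts->nr_io_queues + 1;
	ctrl->sqsize = opts->queue_size - 1;
	ctrl->kato = opts->kato;
}
```

With this in the fabrics driver, each transport's create_ctrl path only
has to fill in its own transport-specific state.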

Reviewed-by: James Smart <jsmart2021@gmail.com>

-- james


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2021-11-03  0:30 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-20 10:38 [PATCH v2 0/10] Centralize common fabrics code to core drivers Max Gurtovoy
2021-10-20 10:38 ` [PATCH 01/10] nvme: add connect_work attribute to nvme ctrl Max Gurtovoy
2021-11-02 22:59   ` James Smart
2021-10-20 10:38 ` [PATCH 02/10] nvme-fabrics: introduce nvmf_reconnect_or_remove API Max Gurtovoy
2021-11-02 23:38   ` James Smart
2021-10-20 10:38 ` [PATCH 03/10] nvme: add err_work attribute to nvme ctrl Max Gurtovoy
2021-10-20 11:05   ` Hannes Reinecke
2021-11-02 23:53   ` James Smart
2021-10-20 10:38 ` [PATCH 04/10] nvme-fabrics: introduce nvmf_error_recovery API Max Gurtovoy
2021-11-02 23:59   ` James Smart
2021-10-20 10:38 ` [PATCH 05/10] nvme/nvme-fabrics: introduce nvmf_error_recovery_work API Max Gurtovoy
2021-11-03  0:04   ` James Smart
2021-10-20 10:38 ` [PATCH 06/10] nvme/nvme-fabrics: introduce nvmf_reconnect_ctrl_work API Max Gurtovoy
2021-11-03  0:15   ` James Smart
2021-10-20 10:38 ` [PATCH 07/10] nvme-fabrics: add nvmf_init_ctrl/nvmf_uninit_ctrl API Max Gurtovoy
2021-11-03  0:19   ` James Smart
2021-10-20 10:38 ` [PATCH 08/10] nvme-rdma: update WARN_ON condition during reset Max Gurtovoy
2021-10-20 10:38 ` [PATCH 09/10] nvme/nvme-fabrics: move reset ctrl flow to common code Max Gurtovoy
2021-11-03  0:27   ` James Smart
2021-10-20 10:38 ` [PATCH 10/10] nvme-fabrics: set common attributes during nvmf_init_ctrl Max Gurtovoy
2021-11-03  0:30   ` James Smart
