* [RFC 0/7] nvme_fc: add dev_loss_tmo support
@ 2017-05-04 18:07 jsmart2021
  2017-05-04 18:07 ` [RFC 1/7] nvme_fc: change ctlr state assignments during reset/reconnect jsmart2021
                   ` (7 more replies)
  0 siblings, 8 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

FC, on the SCSI side, has long had a device loss timeout that governs
how long connectivity loss to a remote target is hidden. The SCSI FC
transport maintains this value, and SCSI LLDDs apply it (and admins are
used to configuring it) to FC targets that may support FC-NVME or both
SCSI and FC-NVME. Eventually, the SCSI FC transport will be moved into
something independent from, and above, SCSI so that the SCSI and NVME
protocols can be peers. That is fairly distant. In the meantime, to add
the functionality now and stay in sync with the SCSI FC transport, the
LLDD will be used as the conduit. The LLDD can specify the initial value
at nvme_fc_register_remoteport(), and if the value later changes via the
SCSI FC transport (or by LLDD desire), the LLDD can call an nvme_fc
transport routine to update it.

The more interesting part, and why this is an RFC, is that there are
conflicting items in the existing nvme fabrics implementation.
Fabrics currently specifies a similar window to devloss for the
controller at connect time, via options parameters. Should the
transport have the port setting override the connect parameters, or
should the connect parameters for the individual controller override
the port setting?  Also, a reconnect window only makes sense if
the target stays alive for the duration of the window. Currently,
there is no attempt to set KATO so that it is aligned with the window.
In fact, in most cases, KATO is set far smaller than reconnect windows.
There's also no way currently, other than snooping, for the transport
to know the KATO value set in the connect command.

What is proposed for FC is to use the smallest window of the three
values. Please refer to the third patch, "add dev_loss_tmo to
controller", for more details.
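
As a rough sketch (illustrative only; see the third patch for the actual
logic and constants), the selection amounts to:

    /* names follow the fields used in patch 3; "grace" is NVME_KATO_GRACE */
    window = opts->reconnect_delay * opts->max_reconnects;
    if (opts->kato)
        window = min(window, opts->kato + grace);
    ctrl->dev_loss_tmo = min(rport->dev_loss_tmo, window);
    /* reconnect_delay/max_reconnects are then re-derived from dev_loss_tmo */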

Comments are appreciated.

-- james


James Smart (7):
  nvme_fc: change ctlr state assignments during reset/reconnect
  nvme_fc: add a dev_loss_tmo field to the remoteport
  nvme_fc: add dev_loss_tmo to controller
  nvme_fc: check connectivity before initiating reconnects
  nvme_fc: change failure code on remoteport connectivity loss
  nvme_fc: move remote port get/put/free location
  nvme_fc: add dev_loss_tmo timeout and remoteport resume support

 drivers/nvme/host/fc.c         | 487 +++++++++++++++++++++++++++++++++++------
 include/linux/nvme-fc-driver.h |  11 +-
 2 files changed, 432 insertions(+), 66 deletions(-)

-- 
2.11.0

* [RFC 1/7] nvme_fc: change ctlr state assignments during reset/reconnect
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
@ 2017-05-04 18:07 ` jsmart2021
  2017-05-04 18:07 ` [RFC 2/7] nvme_fc: add a dev_loss_tmo field to the remoteport jsmart2021
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

Existing code:
Sets NVME_CTRL_RESETTING upon entry from the core reset_ctrl
callback and leaves it set that way until reconnected.
Sets NVME_CTRL_RECONNECTING after a transport-detected error
and leaves it set that way until reconnected.

Revise the code so that NVME_CTRL_RESETTING is always set when
tearing down the association, regardless of why or how, and so that,
after the association is torn down, the controller transitions to
NVME_CTRL_RECONNECTING while it attempts to establish a new
association with the target.

The RESETTING->RECONNECTING state transition is dependent upon the
patch that enables that transition.
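
A plain-text sketch of the before/after state flow (illustrative only):

    old:  reset_ctrl callback -> RESETTING    -> teardown + reconnect -> LIVE
          transport error     -> RECONNECTING -> teardown + reconnect -> LIVE
    new:  either cause        -> RESETTING    -> teardown
                              -> RECONNECTING -> reconnect            -> LIVE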

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/host/fc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 9993ff8d5656..ac7e8145e5ec 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -1759,7 +1759,7 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
 	if (ctrl->queue_count > 1)
 		nvme_stop_queues(&ctrl->ctrl);
 
-	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RECONNECTING)) {
+	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING)) {
 		dev_err(ctrl->ctrl.device,
 			"NVME-FC{%d}: error_recovery: Couldn't change state "
 			"to RECONNECTING\n", ctrl->cnum);
@@ -2628,6 +2628,13 @@ nvme_fc_reset_ctrl_work(struct work_struct *work)
 	/* will block will waiting for io to terminate */
 	nvme_fc_delete_association(ctrl);
 
+	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RECONNECTING)) {
+		dev_err(ctrl->ctrl.device,
+			"NVME-FC{%d}: controller reset: Couldn't change "
+			"state to RECONNECTING\n", ctrl->cnum);
+		return;
+	}
+
 	ret = nvme_fc_create_association(ctrl);
 	if (ret)
 		nvme_fc_reconnect_or_delete(ctrl, ret);
-- 
2.11.0

* [RFC 2/7] nvme_fc: add a dev_loss_tmo field to the remoteport
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
  2017-05-04 18:07 ` [RFC 1/7] nvme_fc: change ctlr state assignments during reset/reconnect jsmart2021
@ 2017-05-04 18:07 ` jsmart2021
  2017-05-04 18:07 ` [RFC 3/7] nvme_fc: add dev_loss_tmo to controller jsmart2021
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

Add a dev_loss_tmo value, paralleling the SCSI FC transport, for device
connectivity loss. The LLDD can initialize it in the
nvme_fc_register_remoteport() call; otherwise the transport falls back
to a default of 60 seconds.
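
A minimal sketch of how an LLDD might supply the value at registration
time (hypothetical driver code, not part of this patch; wwnn, wwpn,
nport_id and localport are assumed to come from the LLDD's discovery
path):

    struct nvme_fc_port_info pinfo = {
        .node_name    = wwnn,
        .port_name    = wwpn,
        .port_role    = FC_PORT_ROLE_NVME_TARGET,
        .port_id      = nport_id,
        .dev_loss_tmo = 30,    /* seconds; 0 selects the 60s default */
    };
    struct nvme_fc_remote_port *rport;
    int ret;

    ret = nvme_fc_register_remoteport(localport, &pinfo, &rport);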

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/host/fc.c         | 14 ++++++++++++++
 include/linux/nvme-fc-driver.h |  9 +++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index ac7e8145e5ec..89a5aa3c8cd9 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -45,6 +45,10 @@ enum nvme_fc_queue_flags {
 
 #define NVMEFC_QUEUE_DELAY	3		/* ms units */
 
+#define NVME_FC_DEFAULT_DEV_LOSS_TMO	60	/* seconds */
+#define NVME_FC_EXPECTED_RECONNECT_TM	2	/* seconds - E_D_TOV */
+#define NVME_FC_MIN_DEV_LOSS_TMO	(2 * NVME_FC_EXPECTED_RECONNECT_TM)
+
 struct nvme_fc_queue {
 	struct nvme_fc_ctrl	*ctrl;
 	struct device		*dev;
@@ -415,6 +419,12 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 	unsigned long flags;
 	int ret, idx;
 
+	if (pinfo->dev_loss_tmo &&
+			pinfo->dev_loss_tmo < NVME_FC_MIN_DEV_LOSS_TMO) {
+		ret = -EINVAL;
+		goto out_reghost_failed;
+	}
+
 	newrec = kmalloc((sizeof(*newrec) + lport->ops->remote_priv_sz),
 			 GFP_KERNEL);
 	if (!newrec) {
@@ -448,6 +458,10 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 	newrec->remoteport.port_id = pinfo->port_id;
 	newrec->remoteport.port_state = FC_OBJSTATE_ONLINE;
 	newrec->remoteport.port_num = idx;
+	if (pinfo->dev_loss_tmo)
+		newrec->remoteport.dev_loss_tmo = pinfo->dev_loss_tmo;
+	else
+		newrec->remoteport.dev_loss_tmo = NVME_FC_DEFAULT_DEV_LOSS_TMO;
 
 	spin_lock_irqsave(&nvme_fc_lock, flags);
 	list_add_tail(&newrec->endp_list, &lport->endp_list);
diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index abcefb708008..7df9faef5af0 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -40,6 +40,8 @@
  * @node_name: FC WWNN for the port
  * @port_name: FC WWPN for the port
  * @port_role: What NVME roles are supported (see FC_PORT_ROLE_xxx)
+ * @dev_loss_tmo: maximum delay for reconnects to an association on
+ *             this device. Used only on a remoteport.
  *
  * Initialization values for dynamic port fields:
  * @port_id:      FC N_Port_ID currently assigned the port. Upper 8 bits must
@@ -50,6 +52,7 @@ struct nvme_fc_port_info {
 	u64			port_name;
 	u32			port_role;
 	u32			port_id;
+	u32			dev_loss_tmo;
 };
 
 
@@ -244,6 +247,9 @@ struct nvme_fc_local_port {
  *             The length of the buffer corresponds to the remote_priv_sz
  *             value specified in the nvme_fc_port_template supplied by
  *             the LLDD.
+ * @dev_loss_tmo: maximum delay for reconnects to an association on
+ *             this device. To modify, lldd must call
+ *             nvme_fc_set_remoteport_devloss().
  *
  * Fields with dynamic values. Values may change base on link or login
  * state. LLDD may reference fields directly to change them. Initialized by
@@ -259,10 +265,9 @@ struct nvme_fc_remote_port {
 	u32 port_role;
 	u64 node_name;
 	u64 port_name;
-
 	struct nvme_fc_local_port *localport;
-
 	void *private;
+	u32 dev_loss_tmo;
 
 	/* dynamic fields */
 	u32 port_id;
-- 
2.11.0

* [RFC 3/7] nvme_fc: add dev_loss_tmo to controller
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
  2017-05-04 18:07 ` [RFC 1/7] nvme_fc: change ctlr state assignments during reset/reconnect jsmart2021
  2017-05-04 18:07 ` [RFC 2/7] nvme_fc: add a dev_loss_tmo field to the remoteport jsmart2021
@ 2017-05-04 18:07 ` jsmart2021
  2017-05-04 18:07 ` [RFC 4/7] nvme_fc: check connectivity before initiating reconnects jsmart2021
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

This patch adds a dev_loss_tmo value to the controller. The value
is initialized from the remoteport.

The patch also adds an LLDD-callable routine,
nvme_fc_set_remoteport_devloss(), to change the value on the remoteport
and apply the new value to the controllers on the remote port.

The dev_loss_tmo value set on the controller will ultimately be the
maximum window allowed for reconnection, whether it was started due
to controller reset, transport error, or loss of connectivity to the
target.
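
A sketch of the intended LLDD usage when the admin changes the SCSI FC
transport dev_loss_tmo (hypothetical glue code; only the
nvme_fc_set_remoteport_devloss() call and its error codes are from this
patch):

    /* called by the LLDD when the fc_remote_port dev_loss_tmo changes */
    ret = nvme_fc_set_remoteport_devloss(nvme_rport, new_dev_loss_tmo);
    if (ret)    /* -ERANGE if below the minimum, -EINVAL if port not online */
        dev_warn(dev, "failed to update NVME dev_loss_tmo: %d\n", ret);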

When setting the value on the controller, the following is considered:
1. (max_reconnects * reconnect_delay), which may have been set by the
  request for a connection, is the user-specified connectivity window.

2. kato is how long the target can survive a connectivity loss before
it fails, so it should be greater than or equal to the dev_loss_tmo
value selected. (Note: the transport currently has to guess what the
fabrics layer will set kato to, which is a hack.)

3. The remoteport dev_loss_tmo is the value the FC layer is trying to
apply.

The transport selects the smallest of the three values.  No change is
ever made to kato. After selecting the smallest value, if the
user-specified values exceed it, they are recalculated to correspond
to dev_loss_tmo. A worked example follows.
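
A worked example, assuming a kato of 5 seconds and a keep-alive grace of
10 seconds (the NVME_KATO_GRACE value at the time of writing):

    connect options: reconnect_delay = 10s, max_reconnects = 60
        -> requested window = 600s
    kato bound: kato + grace = 15s, so the window is clamped to 15s
    remoteport dev_loss_tmo = 60s -> ctrl dev_loss_tmo = min(60, 15) = 15s
    resync: reconnect_delay stays 10s (already <= 15 - 2),
        max_reconnects becomes DIV_ROUND_UP(15, 10) = 2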

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/host/fc.c         | 116 +++++++++++++++++++++++++++++++++++++++++
 include/linux/nvme-fc-driver.h |   2 +
 2 files changed, 118 insertions(+)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 89a5aa3c8cd9..a975a48f00a5 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -167,6 +167,7 @@ struct nvme_fc_ctrl {
 	struct work_struct	delete_work;
 	struct work_struct	reset_work;
 	struct delayed_work	connect_work;
+	u32			dev_loss_tmo;
 
 	struct kref		ref;
 	u32			flags;
@@ -2725,6 +2726,119 @@ static const struct blk_mq_ops nvme_fc_admin_mq_ops = {
 };
 
 
+static void
+nvme_fc_set_ctrl_devloss(struct nvme_fc_ctrl *ctrl,
+		struct nvmf_ctrl_options *opts)
+{
+	u32 dev_loss_tmo;
+
+	/*
+	 * dev_loss_tmo will be the max amount of time after an association
+	 * failure that will be allowed for a new association to be
+	 * established. It doesn't matter why the original association
+	 * failed (FC connectivity loss, transport error, admin-request).
+	 * The new association must be established before dev_loss_tmo
+	 * expires or the controller will be torn down.
+	 *
+	 * If the connect parameters are less than the FC port dev_loss_tmo
+	 * parameter, scale dev_loss_tmo to the connect parameters.
+	 *
+	 * If the connect parameters are larger than the FC port
+	 * dev_loss_tmo parameter, adjust the connect parameters so that
+	 * there is at least 1 attempt at a reconnect attempt before failing.
+	 * Note: reconnects will be attempted only if there is FC connectivity.
+	 */
+
+	if (opts->max_reconnects < 1)
+		opts->max_reconnects = 1;
+	dev_loss_tmo = opts->reconnect_delay * opts->max_reconnects;
+
+	if (opts->kato && dev_loss_tmo > (opts->kato + NVME_KATO_GRACE)) {
+		dev_warn(ctrl->ctrl.device,
+			"NVME-FC{%d}: scaling reconnect window to "
+			"keep alive timeout (%d)\n",
+			ctrl->cnum, opts->kato + NVME_KATO_GRACE);
+		dev_loss_tmo = opts->kato + NVME_KATO_GRACE;
+	}
+
+	ctrl->dev_loss_tmo =
+		min_t(u32, ctrl->rport->remoteport.dev_loss_tmo, dev_loss_tmo);
+	if (ctrl->dev_loss_tmo < ctrl->rport->remoteport.dev_loss_tmo)
+		dev_warn(ctrl->ctrl.device,
+			"NVME-FC{%d}: scaling dev_loss_tmo to reconnect "
+			"window (%d)\n",
+			ctrl->cnum, ctrl->dev_loss_tmo);
+
+	/* resync dev_loss_tmo with the reconnect window */
+	if (ctrl->dev_loss_tmo < opts->reconnect_delay * opts->max_reconnects) {
+		if (!ctrl->dev_loss_tmo)
+			opts->max_reconnects = 0;
+		else {
+			opts->reconnect_delay =
+				min_t(u32, opts->reconnect_delay,
+					ctrl->dev_loss_tmo -
+						NVME_FC_EXPECTED_RECONNECT_TM);
+			opts->max_reconnects = DIV_ROUND_UP(ctrl->dev_loss_tmo,
+						opts->reconnect_delay);
+			dev_warn(ctrl->ctrl.device,
+				"NVME-FC{%d}: dev_loss_tmo %d: scaling "
+				"reconnect delay %d max reconnects %d\n",
+				ctrl->cnum, ctrl->dev_loss_tmo,
+				opts->reconnect_delay, opts->max_reconnects);
+		}
+	}
+}
+
+int
+nvme_fc_set_remoteport_devloss(struct nvme_fc_remote_port *portptr,
+			u32 dev_loss_tmo)
+{
+	struct nvme_fc_rport *rport = remoteport_to_rport(portptr);
+	struct nvme_fc_ctrl *ctrl;
+	unsigned long flags;
+
+	/*
+	 * Allow dev_loss_tmo set to 0. This will allow
+	 * nvme_fc_unregister_remoteport() to immediately delete
+	 * controllers without waiting a dev_loss_tmo timeout.
+	 */
+	if (dev_loss_tmo && dev_loss_tmo < NVME_FC_MIN_DEV_LOSS_TMO)
+		return -ERANGE;
+
+	spin_lock_irqsave(&rport->lock, flags);
+
+	if (portptr->port_state != FC_OBJSTATE_ONLINE) {
+		spin_unlock_irqrestore(&rport->lock, flags);
+		return -EINVAL;
+	}
+
+	rport->remoteport.dev_loss_tmo = dev_loss_tmo;
+
+	list_for_each_entry(ctrl, &rport->ctrl_list, ctrl_list) {
+		/* Apply values for use in next reconnect cycle */
+		nvme_fc_set_ctrl_devloss(ctrl, ctrl->ctrl.opts);
+
+		/*
+		 * if kato is smaller than device loss, if connectivity
+		 * is lost, the controller could fail before we give up.
+		 * Accept the value, but warn as it is a bad idea
+		 */
+		if (dev_loss_tmo > ctrl->ctrl.opts->kato)
+			dev_warn(ctrl->ctrl.device,
+				"NVME-FC{%d}: controller may fail prior "
+				"to dev_loss_tmo (%d) due to keep alive "
+				"timeout (%d)\n",
+				ctrl->cnum, ctrl->dev_loss_tmo,
+				ctrl->ctrl.opts->kato);
+
+	}
+
+	spin_unlock_irqrestore(&rport->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_fc_set_remoteport_devloss);
+
 static struct nvme_ctrl *
 nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	struct nvme_fc_lport *lport, struct nvme_fc_rport *rport)
@@ -2816,6 +2930,8 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list);
 	spin_unlock_irqrestore(&rport->lock, flags);
 
+	nvme_fc_set_ctrl_devloss(ctrl, opts);
+
 	ret = nvme_fc_create_association(ctrl);
 	if (ret) {
 		ctrl->ctrl.opts = NULL;
diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index 7df9faef5af0..ab5f66f4c0f8 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -451,6 +451,8 @@ int nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 
 int nvme_fc_unregister_remoteport(struct nvme_fc_remote_port *remoteport);
 
+int nvme_fc_set_remoteport_devloss(struct nvme_fc_remote_port *remoteport,
+			u32 dev_loss_tmo);
 
 
 /*
-- 
2.11.0

* [RFC 4/7] nvme_fc: check connectivity before initiating reconnects
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
                   ` (2 preceding siblings ...)
  2017-05-04 18:07 ` [RFC 3/7] nvme_fc: add dev_loss_tmo to controller jsmart2021
@ 2017-05-04 18:07 ` jsmart2021
  2017-05-04 18:07 ` [RFC 5/7] nvme_fc: change failure code on remoteport connectivity loss jsmart2021
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

Check remoteport connectivity before initiating reconnects.

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/host/fc.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index a975a48f00a5..419d4a85218d 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -586,6 +586,19 @@ nvme_fc_unregister_remoteport(struct nvme_fc_remote_port *portptr)
 }
 EXPORT_SYMBOL_GPL(nvme_fc_unregister_remoteport);
 
+static inline bool
+nvme_fc_rport_is_online(struct nvme_fc_rport *rport)
+{
+	unsigned long flags;
+	bool online;
+
+	spin_lock_irqsave(&rport->lock, flags);
+	online = (rport->remoteport.port_state == FC_OBJSTATE_ONLINE);
+	spin_unlock_irqrestore(&rport->lock, flags);
+
+	return online;
+}
+
 
 /* *********************** FC-NVME DMA Handling **************************** */
 
@@ -2318,6 +2331,9 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 
 	++ctrl->ctrl.opts->nr_reconnects;
 
+	if (!nvme_fc_rport_is_online(ctrl->rport))
+		return -ENODEV;
+
 	/*
 	 * Create the admin queue
 	 */
@@ -2619,6 +2635,12 @@ nvme_fc_reconnect_or_delete(struct nvme_fc_ctrl *ctrl, int status)
 		ctrl->cnum, status);
 
 	if (nvmf_should_reconnect(&ctrl->ctrl)) {
+		/*
+		 * Only schedule the reconnect if the remote port is online
+		 */
+		if (!nvme_fc_rport_is_online(ctrl->rport))
+			return;
+
 		dev_info(ctrl->ctrl.device,
 			"NVME-FC{%d}: Reconnect attempt in %d seconds.\n",
 			ctrl->cnum, ctrl->ctrl.opts->reconnect_delay);
@@ -2650,12 +2672,15 @@ nvme_fc_reset_ctrl_work(struct work_struct *work)
 		return;
 	}
 
-	ret = nvme_fc_create_association(ctrl);
-	if (ret)
-		nvme_fc_reconnect_or_delete(ctrl, ret);
-	else
-		dev_info(ctrl->ctrl.device,
-			"NVME-FC{%d}: controller reset complete\n", ctrl->cnum);
+	if (nvme_fc_rport_is_online(ctrl->rport)) {
+		ret = nvme_fc_create_association(ctrl);
+		if (ret)
+			nvme_fc_reconnect_or_delete(ctrl, ret);
+		else
+			dev_info(ctrl->ctrl.device,
+				"NVME-FC{%d}: controller reset complete\n",
+				ctrl->cnum);
+	}
 }
 
 /*
-- 
2.11.0

* [RFC 5/7] nvme_fc: change failure code on remoteport connectivity loss
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
                   ` (3 preceding siblings ...)
  2017-05-04 18:07 ` [RFC 4/7] nvme_fc: check connectivity before initiating reconnects jsmart2021
@ 2017-05-04 18:07 ` jsmart2021
  2017-05-04 18:07 ` [RFC 6/7] nvme_fc: move remote port get/put/free location jsmart2021
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

Rather than returning BLK_MQ_RQ_QUEUE_ERROR when connectivity to the
remoteport has been lost, which gets reflected all the way back to the
user, move the failure point so that the LLDD bounces the io and it
falls back into the busy logic, which requeues the io.

This addresses io failures that occur on ios issued right at the time
of connectivity loss.

Note: the connectivity check is not done under a lock, to avoid a
fast-path performance penalty. The expectation is therefore that the
LLDD will validate connectivity as well.

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/host/fc.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 419d4a85218d..a666f87e2437 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -1916,13 +1916,6 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
 	u32 csn;
 	int ret;
 
-	/*
-	 * before attempting to send the io, check to see if we believe
-	 * the target device is present
-	 */
-	if (ctrl->rport->remoteport.port_state != FC_OBJSTATE_ONLINE)
-		return BLK_MQ_RQ_QUEUE_ERROR;
-
 	if (!nvme_fc_ctrl_get(ctrl))
 		return BLK_MQ_RQ_QUEUE_ERROR;
 
@@ -1998,7 +1991,8 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
 
 		nvme_fc_ctrl_put(ctrl);
 
-		if (ret != -EBUSY)
+		if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE &&
+				ret != -EBUSY)
 			return BLK_MQ_RQ_QUEUE_ERROR;
 
 		if (op->rq) {
-- 
2.11.0

* [RFC 6/7] nvme_fc: move remote port get/put/free location
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
                   ` (4 preceding siblings ...)
  2017-05-04 18:07 ` [RFC 5/7] nvme_fc: change failure code on remoteport connectivity loss jsmart2021
@ 2017-05-04 18:07 ` jsmart2021
  2017-05-04 18:07 ` [RFC 7/7] nvme_fc: add dev_loss_tmo timeout and remoteport resume support jsmart2021
  2017-05-04 19:24 ` [RFC 0/7] nvme_fc: add dev_loss_tmo support James Smart
  7 siblings, 0 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

Move nvme_fc_rport_get/put and the rport free routine higher in the
file to avoid adding prototypes to resolve references in upcoming code
additions.

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/host/fc.c | 78 +++++++++++++++++++++++++-------------------------
 1 file changed, 39 insertions(+), 39 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index a666f87e2437..484b7d55676c 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -394,6 +394,45 @@ nvme_fc_unregister_localport(struct nvme_fc_local_port *portptr)
 }
 EXPORT_SYMBOL_GPL(nvme_fc_unregister_localport);
 
+static void
+nvme_fc_free_rport(struct kref *ref)
+{
+	struct nvme_fc_rport *rport =
+		container_of(ref, struct nvme_fc_rport, ref);
+	struct nvme_fc_lport *lport =
+			localport_to_lport(rport->remoteport.localport);
+	unsigned long flags;
+
+	WARN_ON(rport->remoteport.port_state != FC_OBJSTATE_DELETED);
+	WARN_ON(!list_empty(&rport->ctrl_list));
+
+	/* remove from lport list */
+	spin_lock_irqsave(&nvme_fc_lock, flags);
+	list_del(&rport->endp_list);
+	spin_unlock_irqrestore(&nvme_fc_lock, flags);
+
+	/* let the LLDD know we've finished tearing it down */
+	lport->ops->remoteport_delete(&rport->remoteport);
+
+	ida_simple_remove(&lport->endp_cnt, rport->remoteport.port_num);
+
+	kfree(rport);
+
+	nvme_fc_lport_put(lport);
+}
+
+static void
+nvme_fc_rport_put(struct nvme_fc_rport *rport)
+{
+	kref_put(&rport->ref, nvme_fc_free_rport);
+}
+
+static int
+nvme_fc_rport_get(struct nvme_fc_rport *rport)
+{
+	return kref_get_unless_zero(&rport->ref);
+}
+
 /**
  * nvme_fc_register_remoteport - transport entry point called by an
  *                              LLDD to register the existence of a NVME
@@ -481,45 +520,6 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 }
 EXPORT_SYMBOL_GPL(nvme_fc_register_remoteport);
 
-static void
-nvme_fc_free_rport(struct kref *ref)
-{
-	struct nvme_fc_rport *rport =
-		container_of(ref, struct nvme_fc_rport, ref);
-	struct nvme_fc_lport *lport =
-			localport_to_lport(rport->remoteport.localport);
-	unsigned long flags;
-
-	WARN_ON(rport->remoteport.port_state != FC_OBJSTATE_DELETED);
-	WARN_ON(!list_empty(&rport->ctrl_list));
-
-	/* remove from lport list */
-	spin_lock_irqsave(&nvme_fc_lock, flags);
-	list_del(&rport->endp_list);
-	spin_unlock_irqrestore(&nvme_fc_lock, flags);
-
-	/* let the LLDD know we've finished tearing it down */
-	lport->ops->remoteport_delete(&rport->remoteport);
-
-	ida_simple_remove(&lport->endp_cnt, rport->remoteport.port_num);
-
-	kfree(rport);
-
-	nvme_fc_lport_put(lport);
-}
-
-static void
-nvme_fc_rport_put(struct nvme_fc_rport *rport)
-{
-	kref_put(&rport->ref, nvme_fc_free_rport);
-}
-
-static int
-nvme_fc_rport_get(struct nvme_fc_rport *rport)
-{
-	return kref_get_unless_zero(&rport->ref);
-}
-
 static int
 nvme_fc_abort_lsops(struct nvme_fc_rport *rport)
 {
-- 
2.11.0

* [RFC 7/7] nvme_fc: add dev_loss_tmo timeout and remoteport resume support
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
                   ` (5 preceding siblings ...)
  2017-05-04 18:07 ` [RFC 6/7] nvme_fc: move remote port get/put/free location jsmart2021
@ 2017-05-04 18:07 ` jsmart2021
  2017-05-04 19:24 ` [RFC 0/7] nvme_fc: add dev_loss_tmo support James Smart
  7 siblings, 0 replies; 11+ messages in thread
From: jsmart2021 @ 2017-05-04 18:07 UTC (permalink / raw)


From: James Smart <jsmart2021@gmail.com>

This patch adds the dev_loss_tmo functionality to the transport.

When a remoteport is unregistered (connectivity lost), it is marked
DELETED and the following is performed on all the controllers on the
remoteport: the controller is reset to delete the current association.
Once the association is terminated, the dev_loss_tmo timer is started.
A reconnect is not scheduled, as there is no connectivity. Note: the
start of the dev_loss_tmo timer is in the generic
delete-association/create-new-association path. Thus it will be started
regardless of whether the reset was due to remote port connectivity
loss, a controller reset, or a transport run-time error.

When a remoteport is registered (connectivity established), the
transport searches the list of remoteport structures that have pending
deletions (controllers waiting for dev_loss_tmo to fire, thus
preventing remoteport deletion) for a matching WWNN/WWPN. If one is
found, the remoteport is transitioned back to ONLINE, and the
following occurs on all controllers on the remoteport:
any controllers in a RECONNECTING state have reconnection attempts
kicked off; a controller still RESETTING will start a reconnect attempt
through its natural transition to RECONNECTING.

Once a controller successfully reconnects to a new association, any
dev_loss_tmo timer for it is terminated.

If a dev_loss_tmo timer for a controller fires, the controller is
unconditionally deleted.
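
A sketch of the LLDD-side sequence this enables (hypothetical driver
code; only the two transport entry points are real):

    /* connectivity lost (e.g. cable pull, RSCN showing the port gone) */
    nvme_fc_unregister_remoteport(rport);
    /* controllers tear down their associations and start dev_loss_tmo */

    /* ... later, the same WWNN/WWPN is rediscovered within dev_loss_tmo */
    ret = nvme_fc_register_remoteport(localport, &pinfo, &rport);
    /* the suspended remoteport is found by WWNs, set back to ONLINE,
     * and reconnect attempts are kicked off on its controllers */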

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/nvme/host/fc.c | 225 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 214 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 484b7d55676c..a3d4b061fe39 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -167,6 +167,7 @@ struct nvme_fc_ctrl {
 	struct work_struct	delete_work;
 	struct work_struct	reset_work;
 	struct delayed_work	connect_work;
+	struct delayed_work	dev_loss_work;
 	u32			dev_loss_tmo;
 
 	struct kref		ref;
@@ -433,6 +434,86 @@ nvme_fc_rport_get(struct nvme_fc_rport *rport)
 	return kref_get_unless_zero(&rport->ref);
 }
 
+static void
+nvme_fc_resume_controller(struct nvme_fc_ctrl *ctrl)
+{
+	switch (ctrl->ctrl.state) {
+	case NVME_CTRL_RECONNECTING:
+		/*
+		 * As all reconnects were suppressed, schedule a
+		 * connect.
+		 */
+		queue_delayed_work(nvme_fc_wq, &ctrl->connect_work, 0);
+		break;
+
+	case NVME_CTRL_RESETTING:
+		/*
+		 * Controller is already in the process of terminating the
+		 * association. No need to do anything further. The reconnect
+		 * step will naturally occur after the reset completes.
+		 */
+		break;
+
+	default:
+		/* no action to take - let it delete */
+		break;
+	}
+}
+
+static struct nvme_fc_rport *
+nvme_fc_attach_to_suspended_rport(struct nvme_fc_lport *lport,
+				struct nvme_fc_port_info *pinfo)
+{
+	struct nvme_fc_rport *rport;
+	struct nvme_fc_ctrl *ctrl;
+	unsigned long flags;
+
+	spin_lock_irqsave(&nvme_fc_lock, flags);
+
+	list_for_each_entry(rport, &lport->endp_list, endp_list) {
+		if (rport->remoteport.node_name != pinfo->node_name ||
+		    rport->remoteport.port_name != pinfo->port_name)
+			continue;
+
+		if (!nvme_fc_rport_get(rport)) {
+			rport = ERR_PTR(-ENOLCK);
+			goto out_done;
+		}
+
+		spin_unlock_irqrestore(&nvme_fc_lock, flags);
+
+		spin_lock_irqsave(&rport->lock, flags);
+
+		/* has it been unregistered */
+		if (rport->remoteport.port_state != FC_OBJSTATE_DELETED) {
+			/* means lldd called us twice */
+			spin_unlock_irqrestore(&rport->lock, flags);
+			nvme_fc_rport_put(rport);
+			return ERR_PTR(-ESTALE);
+		}
+
+		rport->remoteport.port_state = FC_OBJSTATE_ONLINE;
+
+		/*
+		 * kick off a reconnect attempt on all associations to the
+		 * remote port. A successful reconnects will resume i/o.
+		 */
+		list_for_each_entry(ctrl, &rport->ctrl_list, ctrl_list)
+			nvme_fc_resume_controller(ctrl);
+
+		spin_unlock_irqrestore(&rport->lock, flags);
+
+		return rport;
+	}
+
+	rport = NULL;
+
+out_done:
+	spin_unlock_irqrestore(&nvme_fc_lock, flags);
+
+	return rport;
+}
+
 /**
  * nvme_fc_register_remoteport - transport entry point called by an
  *                              LLDD to register the existence of a NVME
@@ -465,22 +546,45 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 		goto out_reghost_failed;
 	}
 
+	if (!nvme_fc_lport_get(lport)) {
+		ret = -ESHUTDOWN;
+		goto out_reghost_failed;
+	}
+
+	/*
+	 * look to see if there is already a remoteport that is waiting
+	 * for a reconnect (within dev_loss_tmo) with the same WWN's.
+	 * If so, transition to it and reconnect.
+	 */
+	newrec = nvme_fc_attach_to_suspended_rport(lport, pinfo);
+
+	/* found an rport, but something about its state is bad */
+	if (IS_ERR(newrec)) {
+		ret = PTR_ERR(newrec);
+		goto out_lport_put;
+
+	/* found existing rport, which was resumed */
+	} else if (newrec) {
+		/* Ignore pinfo->dev_loss_tmo. Leave rport and ctlr's as is */
+
+		nvme_fc_lport_put(lport);
+		*portptr = &newrec->remoteport;
+		return 0;
+	}
+
+	/* nothing found - allocate a new remoteport struct */
+
 	newrec = kmalloc((sizeof(*newrec) + lport->ops->remote_priv_sz),
 			 GFP_KERNEL);
 	if (!newrec) {
 		ret = -ENOMEM;
-		goto out_reghost_failed;
-	}
-
-	if (!nvme_fc_lport_get(lport)) {
-		ret = -ESHUTDOWN;
-		goto out_kfree_rport;
+		goto out_lport_put;
 	}
 
 	idx = ida_simple_get(&lport->endp_cnt, 0, 0, GFP_KERNEL);
 	if (idx < 0) {
 		ret = -ENOSPC;
-		goto out_lport_put;
+		goto out_kfree_rport;
 	}
 
 	INIT_LIST_HEAD(&newrec->endp_list);
@@ -510,10 +614,10 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 	*portptr = &newrec->remoteport;
 	return 0;
 
-out_lport_put:
-	nvme_fc_lport_put(lport);
 out_kfree_rport:
 	kfree(newrec);
+out_lport_put:
+	nvme_fc_lport_put(lport);
 out_reghost_failed:
 	*portptr = NULL;
 	return ret;
@@ -544,6 +648,74 @@ nvme_fc_abort_lsops(struct nvme_fc_rport *rport)
 	return 0;
 }
 
+static void
+nvmet_fc_start_dev_loss_tmo(struct nvme_fc_ctrl *ctrl, u32 dev_loss_tmo)
+{
+	/* if dev_loss_tmo==0, dev loss is immediate */
+	if (!dev_loss_tmo) {
+		dev_info(ctrl->ctrl.device,
+			"NVME-FC{%d}: controller connectivity lost. "
+			"Deleting controller.\n",
+			ctrl->cnum);
+		__nvme_fc_del_ctrl(ctrl);
+		return;
+	}
+
+	dev_info(ctrl->ctrl.device,
+		"NVME-FC{%d}: controller connectivity lost. Awaiting reconnect",
+		ctrl->cnum);
+
+	switch (ctrl->ctrl.state) {
+	case NVME_CTRL_LIVE:
+		/*
+		 * Schedule a controller reset. The reset will terminate
+		 * the association and schedule the dev_loss_tmo timer.
+		 * The reconnect after terminating the association will
+		 * note the rport state and will not be scheduled.
+		 * The controller will sit in that state, with io
+		 * suspended at the block layer, until either dev_loss_tmo
+		 * expires or the remoteport is re-registered. If
+		 * re-registered, an immediate connect attempt will be
+		 * made.
+		 */
+		if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING) ||
+		    !queue_work(nvme_fc_wq, &ctrl->reset_work))
+			__nvme_fc_del_ctrl(ctrl);
+		break;
+
+	case NVME_CTRL_RECONNECTING:
+		/*
+		 * The association has already been terminated and
+		 * dev_loss_tmo scheduled. The controller is either in
+		 * the process of connecting or has scheduled a
+		 * reconnect attempt.
+		 * If in the process of connecting, it will fail due
+		 * to loss of connectivity to the remoteport, and the
+		 * reconnect will not be scheduled as there is no
+		 * connectivity.
+		 * If awaiting the reconnect, terminate it as it'll only
+		 * fail.
+		 */
+		cancel_delayed_work_sync(&ctrl->connect_work);
+		break;
+
+	case NVME_CTRL_RESETTING:
+		/*
+		 * Controller is already in the process of terminating the
+		 * association. No need to do anything further. The reconnect
+		 * step will kick in naturally after the association is
+		 * terminated, detecting the lack of connectivity, and not
+		 * attempt a reconnect or schedule one.
+		 */
+		break;
+
+	case NVME_CTRL_DELETING:
+	default:
+		/* no action to take - let it delete */
+		break;
+	}
+}
+
 /**
  * nvme_fc_unregister_remoteport - transport entry point called by an
  *                              LLDD to deregister/remove a previously
@@ -573,15 +745,20 @@ nvme_fc_unregister_remoteport(struct nvme_fc_remote_port *portptr)
 	}
 	portptr->port_state = FC_OBJSTATE_DELETED;
 
-	/* tear down all associations to the remote port */
 	list_for_each_entry(ctrl, &rport->ctrl_list, ctrl_list)
-		__nvme_fc_del_ctrl(ctrl);
+		nvmet_fc_start_dev_loss_tmo(ctrl, portptr->dev_loss_tmo);
 
 	spin_unlock_irqrestore(&rport->lock, flags);
 
 	nvme_fc_abort_lsops(rport);
 
+	/*
+	 * release the reference, which will allow, if all controllers
+	 * go away, which should only occur after dev_loss_tmo occurs,
+	 * for the rport to be torn down.
+	 */
 	nvme_fc_rport_put(rport);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(nvme_fc_unregister_remoteport);
@@ -2434,6 +2611,8 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 		nvme_queue_async_events(&ctrl->ctrl);
 	}
 
+	cancel_delayed_work_sync(&ctrl->dev_loss_work);
+
 	return 0;	/* Success */
 
 out_term_aen_ops:
@@ -2552,6 +2731,7 @@ nvme_fc_delete_ctrl_work(struct work_struct *work)
 
 	cancel_work_sync(&ctrl->reset_work);
 	cancel_delayed_work_sync(&ctrl->connect_work);
+	cancel_delayed_work_sync(&ctrl->dev_loss_work);
 
 	/*
 	 * kill the association on the link side.  this will block
@@ -2666,6 +2846,9 @@ nvme_fc_reset_ctrl_work(struct work_struct *work)
 		return;
 	}
 
+	queue_delayed_work(nvme_fc_wq, &ctrl->dev_loss_work,
+			ctrl->dev_loss_tmo * HZ);
+
 	if (nvme_fc_rport_is_online(ctrl->rport)) {
 		ret = nvme_fc_create_association(ctrl);
 		if (ret)
@@ -2733,6 +2916,25 @@ nvme_fc_connect_ctrl_work(struct work_struct *work)
 			ctrl->cnum);
 }
 
+static void
+nvme_fc_dev_loss_ctrl_work(struct work_struct *work)
+{
+	struct nvme_fc_ctrl *ctrl =
+			container_of(to_delayed_work(work),
+				struct nvme_fc_ctrl, dev_loss_work);
+
+	if (ctrl->ctrl.state != NVME_CTRL_DELETING) {
+		dev_warn(ctrl->ctrl.device,
+			"NVME-FC{%d}: Device failed to reconnect within "
+			"dev_loss_tmo (%d seconds). Deleting controller\n",
+			ctrl->cnum, ctrl->dev_loss_tmo);
+		if (__nvme_fc_del_ctrl(ctrl))
+			dev_warn(ctrl->ctrl.device,
+				"NVME-FC{%d}: delete request failed\n",
+				ctrl->cnum);
+	}
+}
+
 
 static const struct blk_mq_ops nvme_fc_admin_mq_ops = {
 	.queue_rq	= nvme_fc_queue_rq,
@@ -2891,6 +3093,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	INIT_WORK(&ctrl->delete_work, nvme_fc_delete_ctrl_work);
 	INIT_WORK(&ctrl->reset_work, nvme_fc_reset_ctrl_work);
 	INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work);
+	INIT_DELAYED_WORK(&ctrl->dev_loss_work, nvme_fc_dev_loss_ctrl_work);
 	spin_lock_init(&ctrl->lock);
 
 	/* io queue count */
-- 
2.11.0

* [RFC 0/7] nvme_fc: add dev_loss_tmo support
  2017-05-04 18:07 [RFC 0/7] nvme_fc: add dev_loss_tmo support jsmart2021
                   ` (6 preceding siblings ...)
  2017-05-04 18:07 ` [RFC 7/7] nvme_fc: add dev_loss_tmo timeout and remoteport resume support jsmart2021
@ 2017-05-04 19:24 ` James Smart
  2017-05-04 21:07   ` Christoph Hellwig
  7 siblings, 1 reply; 11+ messages in thread
From: James Smart @ 2017-05-04 19:24 UTC (permalink / raw)


On 5/4/2017 11:07 AM, jsmart2021@gmail.com wrote:
> Also, a reconnect window only makes sense if
> the target stays alive for the duration of the window. Currently,
> there is no attempt to set KATO so that it is aligned with the window.
> In fact, in most cases, KATO is set far smaller than reconnect windows.
> There's also no way currently, other than snooping, for the transport
> to know the KATO value set in the connect command.

I should give some additional context.

Currently, if there is an error or connectivity loss, the existing 
association is torn down immediately. Therefore, it is independent of 
KATO and KATO doesn't really matter as a new association will always 
replace the old one.

But, in the future, FC-NVME will support retransmission. Meaning we 
would not immediately delete the existing association and terminate all 
i/o. Therefore, it would be preferred if the association stays active at 
the target at least as long as our connectivity window (if possible), 
and i/o resumes, perhaps with retransmission to recover from errors that 
occurred during the loss of connectivity.

So, the first solution could ignore KATO, but I'd like to discuss it for 
the future.

-- james

* [RFC 0/7] nvme_fc: add dev_loss_tmo support
  2017-05-04 19:24 ` [RFC 0/7] nvme_fc: add dev_loss_tmo support James Smart
@ 2017-05-04 21:07   ` Christoph Hellwig
  2017-05-04 23:17     ` James Smart
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2017-05-04 21:07 UTC (permalink / raw)


On Thu, May 04, 2017@12:24:34PM -0700, James Smart wrote:
> But, in the future, FC-NVME will support retransmission. Meaning we would
> not immediately delete the existing association and terminate all i/o.
> Therefore, it would be preferred if the association stays active at the
> target at least as long as our connectivity window (if possible), and i/o
> resumes, perhaps with retransmission to recover from errors that occurred
> during the loss of connectivity.

This is not the model the NVMe over Fabrics spec assumes.  If you want
this model please take it to the NVMe working group first, which so far
is left pretty much in the dark of a lot of these weird NVMe details
anyway.

* [RFC 0/7] nvme_fc: add dev_loss_tmo support
  2017-05-04 21:07   ` Christoph Hellwig
@ 2017-05-04 23:17     ` James Smart
  0 siblings, 0 replies; 11+ messages in thread
From: James Smart @ 2017-05-04 23:17 UTC (permalink / raw)


On 5/4/2017 2:07 PM, Christoph Hellwig wrote:
> On Thu, May 04, 2017@12:24:34PM -0700, James Smart wrote:
>> But, in the future, FC-NVME will support retransmission. Meaning we would
>> not immediately delete the existing association and terminate all i/o.
>> Therefore, it would be preferred if the association stays active at the
>> target at least as long as our connectivity window (if possible), and i/o
>> resumes, perhaps with retransmission to recover from errors that occurred
>> during the loss of connectivity.
>
> This is not the model the NVMe over Fabrics spec assumes.  If you want
> this model please take it to the NVMe working group first, which so far
> is left pretty much in the dark of a lot of these weird NVMe details
> anyway.
>

Controller resets (CC.EN=0) would always fully reset. If the controller
isn't reset, then the rest is simply in the retransmission model for the
transport.

Regardless, please ignore the KATO statements and provide feedback on
how you think dev_loss_tmo from the transport should contend with the
connect option ctrl_loss_tmo.

-- james
