* [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support
@ 2020-02-05 18:37 James Smart
  2020-02-05 18:37 ` [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
                   ` (30 more replies)
  0 siblings, 31 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

At the tail end of FC-NVME (1) standardization, the process for
terminating an association was changed to require an interlock, using
FC-NVME Disconnect Association LS's, between the host port and the
target port. This was immediately relaxed by an amendment to FC-NVME (1)
and by wording put into FC-NVME-2. The interlock was removed, but the
standard still requires both the host port and the target port to
initiate Disconnect Association LS's and to respond to received LS's.

The Linux nvme-fc and nvmet-fc implementations were interoperable with
the standards, but the Linux LLDD api did not support all of the
functionality needed. It was missing:
- nvme-fc: no support for receiving NVME LS's or for transmitting
  responses to a received LS.
- nvmet-fc: no support for sending an NVME LS request. It also lacked a
  method for the transport to specify the remote port for an LS.

This patch set adds the missing functionality. Specifically, the patch set:
- Updates the header to the FC-NVME-2 standard that is out for final
  approval.
- Refactors data structure names that used to be dependent on role (ls
  requests were specific to nvme; ls responses were specific to nvmet)
  to generic names that can be used by both nvme-fc and nvmet-fc.
- Modifies the nvme-fc transport template with interfaces for receiving
  NVME LS's and for the transport to then request that LS responses be
  sent (a sketch of the new host-side LLDD calls follows this list).
- Modifies the nvmet-fc transport template with:
  - The current NVME LS receive interface was modified to supply a
    handle identifying the remote port the LS was received from. If
    the LS creates an association, the handle may be used to initiate
    NVME LS requests to that remote port. An interface was put in place
    to invalidate the handle on connectivity loss.
  - Interfaces for the transport to request that an NVME LS request be
    performed, as well as to abort that LS in cases of error/teardown.
- The nvme-fc transport was modified to follow the standard:
  - Disconnect association logic was revised to send the Disconnect LS
    as soon as all ABTS's have been transmitted, rather than waiting for
    the ABTS process to fully complete.
  - Disconnect LS reception is supported, with reception initiating a
    controller reset and reconnect.
  - Disconnect LS responses will not be transmitted until association
    termination has transmitted its Disconnect LS.
- The nvmet-fc transport was modified to follow the standard:
  - Disconnect association logic was revised to transmit a Disconnect LS
    request as soon as all ABTS's have been transmitted. In the past, no
    Disconnect LS had been transmitted at all.
  - Disconnect LS responses will not be sent until the Disconnect LS
    request has been transmitted.
- nvme-fcloop was updated with interfaces to allow testing of the
  transports.
- Along the way, cleanups and slight corrections were made to the
  transports.
- The lpfc driver was modified to support the new transport interfaces
  for both the nvme and nvmet transports. Much of the functionality was
  already present, but specific to one side of the transport, so the
  existing code was refactored into common routines. With those common
  routines in place, the new interfaces slotted in rather easily.
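
For illustration, the following is a minimal sketch of how an
initiator-side LLDD might plumb the new host-transport calls. The
example_* names and the exchange structure are hypothetical and not
part of this series; error handling, locking and DMA setup are omitted.

/*
 * Hypothetical initiator-side LLDD glue for the new LS receive path.
 */
#include <linux/kernel.h>
#include <linux/dma-mapping.h>
#include <linux/nvme-fc-driver.h>

struct example_xchg {
        struct nvme_fc_remote_port      *remoteport;
        struct nvmefc_ls_rsp            ls_rsp; /* handed to the transport */
        /* ... LLDD exchange/DMA state ... */
};

/* hypothetical LLDD helpers */
void example_abts_exchange(struct example_xchg *xchg);
int example_send_ls_rsp_frame(struct example_xchg *xchg,
                              dma_addr_t rspdma, u16 rsplen);

/*
 * Called from the LLDD's unsolicited-receive path when an NVME LS
 * arrives on an exchange tied to a registered remoteport.
 */
static void example_recv_nvme_ls(struct example_xchg *xchg,
                                 void *lsbuf, u32 lslen)
{
        /* hand the LS to nvme-fc; the exchange context rides in ls_rsp */
        if (nvme_fc_rcv_ls_req(xchg->remoteport, &xchg->ls_rsp,
                               lsbuf, lslen))
                example_abts_exchange(xchg);    /* transport did not accept it */
}

/*
 * New nvme_fc_port_template->xmt_ls_rsp entrypoint: transmit the
 * response the transport built; lsrsp->done() is then called from the
 * LLDD's transmit-completion path (not shown).
 */
static int example_xmt_ls_rsp(struct nvme_fc_local_port *localport,
                              struct nvme_fc_remote_port *remoteport,
                              struct nvmefc_ls_rsp *lsrsp)
{
        struct example_xchg *xchg =
                container_of(lsrsp, struct example_xchg, ls_rsp);

        return example_send_ls_rsp_frame(xchg, lsrsp->rspdma, lsrsp->rsplen);
}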

This code was cut against the for-5.6 branch.

I'll work with Martin to minimize any work to merge these lpfc mods 
with lpfc changes in the scsi tree.

-- james



James Smart (29):
  nvme-fc: Sync header to FC-NVME-2 rev 1.08
  nvmet-fc: fix typo in comment
  nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request
  nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api
    header
  lpfc: adapt code to changed names in api header
  nvme-fcloop: Fix deallocation of working context
  nvme-fc nvmet-fc: refactor for common LS definitions
  nvmet-fc: Better size LS buffers
  nvme-fc: Ensure private pointers are NULL if no data
  nvmefc: Use common definitions for LS names, formatting, and
    validation
  nvme-fc: convert assoc_active flag to atomic
  nvme-fc: Add Disconnect Association Rcv support
  nvmet-fc: add LS failure messages
  nvmet-fc: perform small cleanups on unneeded checks
  nvmet-fc: track hostport handle for associations
  nvmet-fc: rename ls_list to ls_rcv_list
  nvmet-fc: Add Disconnect Association Xmt support
  nvme-fcloop: refactor to enable target to host LS
  nvme-fcloop: add target to host LS request support
  lpfc: Refactor lpfc nvme headers
  lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx
  lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions
  lpfc: Refactor NVME LS receive handling
  lpfc: Refactor Send LS Request support
  lpfc: Refactor Send LS Abort support
  lpfc: Refactor Send LS Response support
  lpfc: nvme: Add Receive LS Request and Send LS Response support to
    nvme
  lpfc: nvmet: Add support for NVME LS request hosthandle
  lpfc: nvmet: Add Send LS Request and Abort LS Request support

 drivers/nvme/host/fc.c             | 555 +++++++++++++++++++-----
 drivers/nvme/host/fc.h             | 227 ++++++++++
 drivers/nvme/target/fc.c           | 800 +++++++++++++++++++++++++----------
 drivers/nvme/target/fcloop.c       | 228 ++++++++--
 drivers/scsi/lpfc/lpfc.h           |   2 +-
 drivers/scsi/lpfc/lpfc_attr.c      |   3 -
 drivers/scsi/lpfc/lpfc_crtn.h      |   9 +-
 drivers/scsi/lpfc/lpfc_ct.c        |   1 -
 drivers/scsi/lpfc/lpfc_debugfs.c   |   5 +-
 drivers/scsi/lpfc/lpfc_hbadisc.c   |   8 +-
 drivers/scsi/lpfc/lpfc_init.c      |   7 +-
 drivers/scsi/lpfc/lpfc_mem.c       |   4 -
 drivers/scsi/lpfc/lpfc_nportdisc.c |  13 +-
 drivers/scsi/lpfc/lpfc_nvme.c      | 550 ++++++++++++++++--------
 drivers/scsi/lpfc/lpfc_nvme.h      | 198 +++++++++
 drivers/scsi/lpfc/lpfc_nvmet.c     | 837 +++++++++++++++++++++++--------------
 drivers/scsi/lpfc/lpfc_nvmet.h     | 158 -------
 drivers/scsi/lpfc/lpfc_sli.c       | 126 +++++-
 include/linux/nvme-fc-driver.h     | 368 +++++++++++-----
 include/linux/nvme-fc.h            |   9 +-
 20 files changed, 2970 insertions(+), 1138 deletions(-)
 create mode 100644 drivers/nvme/host/fc.h
 delete mode 100644 drivers/scsi/lpfc/lpfc_nvmet.h

-- 
2.13.7



* [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 20:36   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 02/29] nvmet-fc: fix typo in comment James Smart
                   ` (29 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

A couple of minor changes occurred between 1.06 and 1.08:
- Addition of NVME_SR_RSP opcode
- Change of SR_RSP status code 1 to Reserved

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 include/linux/nvme-fc.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/nvme-fc.h b/include/linux/nvme-fc.h
index e8c30b39bb27..840fa9ac733f 100644
--- a/include/linux/nvme-fc.h
+++ b/include/linux/nvme-fc.h
@@ -4,8 +4,8 @@
  */
 
 /*
- * This file contains definitions relative to FC-NVME-2 r1.06
- * (T11-2019-00210-v001).
+ * This file contains definitions relative to FC-NVME-2 r1.08
+ * (T11-2019-00210-v004).
  */
 
 #ifndef _NVME_FC_H
@@ -81,7 +81,8 @@ struct nvme_fc_ersp_iu {
 };
 
 
-#define FCNVME_NVME_SR_OPCODE	0x01
+#define FCNVME_NVME_SR_OPCODE		0x01
+#define FCNVME_NVME_SR_RSP_OPCODE	0x02
 
 struct nvme_fc_nvme_sr_iu {
 	__u8			fc_id;
@@ -94,7 +95,7 @@ struct nvme_fc_nvme_sr_iu {
 
 enum {
 	FCNVME_SRSTAT_ACC		= 0x0,
-	FCNVME_SRSTAT_INV_FCID		= 0x1,
+	/* reserved			  0x1 */
 	/* reserved			  0x2 */
 	FCNVME_SRSTAT_LOGICAL_ERR	= 0x3,
 	FCNVME_SRSTAT_INV_QUALIF	= 0x4,
-- 
2.13.7



* [PATCH 02/29] nvmet-fc: fix typo in comment
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
  2020-02-05 18:37 ` [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 20:36   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request James Smart
                   ` (28 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Fix typo in comment: "about" should be "abort".

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index a0db6371b43e..a8ceb7721640 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -684,7 +684,7 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
 	disconnect = atomic_xchg(&queue->connected, 0);
 
 	spin_lock_irqsave(&queue->qlock, flags);
-	/* about outstanding io's */
+	/* abort outstanding io's */
 	for (i = 0; i < queue->sqsize; fod++, i++) {
 		if (fod->active) {
 			spin_lock(&fod->flock);
-- 
2.13.7



* [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
  2020-02-05 18:37 ` [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
  2020-02-05 18:37 ` [PATCH 02/29] nvmet-fc: fix typo in comment James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 20:38   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header James Smart
                   ` (27 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

The current LLDD api has:
  nvme-fc: an api for the transport to issue LS requests (and abort
    them). However, there is no interface for receiving LS's and sending
    responses for them.
  nvmet-fc: an api for the transport to receive LS's and send responses
    for them. However, there is no interface for issuing LS requests.

Revise the api's so that both nvme-fc and nvmet-fc can send LS's, as well
as receive LS's and send their responses.

Change the name of the rcv_ls_req struct to better reflect its generic use
as a context used to send an ls rsp.

Change the nvmet_fc_rcv_ls_req() calling sequence to provide a handle that
can be used by the transport in later LS request sequences for an
association.

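As an illustration of the revised target-side calling sequence, here is
a minimal sketch. The example_tgt_* names are hypothetical and not part
of this patch; error handling is omitted.

#include <linux/nvme-fc-driver.h>

struct example_tgt_xchg {
        struct nvmet_fc_target_port     *tgtport;
        void                            *hosthandle;    /* LLDD's remote host port object */
        struct nvmefc_ls_rsp            ls_rsp;
};

void example_tgt_abts_exchange(struct example_tgt_xchg *xchg);  /* hypothetical */

/*
 * LLDD unsolicited LS receive: the new hosthandle argument identifies
 * the sending host port so the transport can later issue its own LS's
 * (e.g. Disconnect Association) toward it.
 */
static void example_tgt_recv_nvme_ls(struct example_tgt_xchg *xchg,
                                     void *lsbuf, u32 lslen)
{
        if (nvmet_fc_rcv_ls_req(xchg->tgtport, xchg->hosthandle,
                                &xchg->ls_rsp, lsbuf, lslen))
                example_tgt_abts_exchange(xchg);
}

/*
 * On loss of connectivity to the host port, the LLDD invalidates the
 * handle; the transport calls ops->host_release() once the last
 * association referencing it is gone.
 */
static void example_tgt_host_gone(struct example_tgt_xchg *xchg)
{
        nvmet_fc_invalidate_host(xchg->tgtport, xchg->hosthandle);
}
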
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 include/linux/nvme-fc-driver.h | 368 ++++++++++++++++++++++++++++++-----------
 1 file changed, 270 insertions(+), 98 deletions(-)

diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index 6d0d70f3219c..8b97c899517d 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -10,47 +10,26 @@
 
 
 /*
- * **********************  LLDD FC-NVME Host API ********************
+ * **********************  FC-NVME LS API ********************
  *
- *  For FC LLDD's that are the NVME Host role.
+ *  Data structures used by both FC-NVME hosts and FC-NVME
+ *  targets to perform FC-NVME LS requests or transmit
+ *  responses.
  *
- * ******************************************************************
+ * ***********************************************************
  */
 
-
-
 /**
- * struct nvme_fc_port_info - port-specific ids and FC connection-specific
- *                            data element used during NVME Host role
- *                            registrations
- *
- * Static fields describing the port being registered:
- * @node_name: FC WWNN for the port
- * @port_name: FC WWPN for the port
- * @port_role: What NVME roles are supported (see FC_PORT_ROLE_xxx)
- * @dev_loss_tmo: maximum delay for reconnects to an association on
- *             this device. Used only on a remoteport.
+ * struct nvmefc_ls_req - Request structure passed from the transport
+ *            to the LLDD to perform a NVME-FC LS request and obtain
+ *            a response.
+ *            Used by nvme-fc transport (host) to send LS's such as
+ *              Create Association, Create Connection and Disconnect
+ *              Association.
+ *            Used by the nvmet-fc transport (controller) to send
+ *              LS's such as Disconnect Association.
  *
- * Initialization values for dynamic port fields:
- * @port_id:      FC N_Port_ID currently assigned the port. Upper 8 bits must
- *                be set to 0.
- */
-struct nvme_fc_port_info {
-	u64			node_name;
-	u64			port_name;
-	u32			port_role;
-	u32			port_id;
-	u32			dev_loss_tmo;
-};
-
-
-/**
- * struct nvmefc_ls_req - Request structure passed from NVME-FC transport
- *                        to LLDD in order to perform a NVME FC-4 LS
- *                        request and obtain a response.
- *
- * Values set by the NVME-FC layer prior to calling the LLDD ls_req
- * entrypoint.
+ * Values set by the requestor prior to calling the LLDD ls_req entrypoint:
  * @rqstaddr: pointer to request buffer
  * @rqstdma:  PCI DMA address of request buffer
  * @rqstlen:  Length, in bytes, of request buffer
@@ -63,8 +42,8 @@ struct nvme_fc_port_info {
  * @private:  pointer to memory allocated alongside the ls request structure
  *            that is specifically for the LLDD to use while processing the
  *            request. The length of the buffer corresponds to the
- *            lsrqst_priv_sz value specified in the nvme_fc_port_template
- *            supplied by the LLDD.
+ *            lsrqst_priv_sz value specified in the xxx_template supplied
+ *            by the LLDD.
  * @done:     The callback routine the LLDD is to invoke upon completion of
  *            the LS request. req argument is the pointer to the original LS
  *            request structure. Status argument must be 0 upon success, a
@@ -86,6 +65,101 @@ struct nvmefc_ls_req {
 } __aligned(sizeof(u64));	/* alignment for other things alloc'd with */
 
 
+/**
+ * struct nvmefc_ls_rsp - Structure passed from the transport to the LLDD
+ *            to request the transmit the NVME-FC LS response to a
+ *            NVME-FC LS request.   The structure originates in the LLDD
+ *            and is given to the transport via the xxx_rcv_ls_req()
+ *            transport routine. As such, the structure represents the
+ *            FC exchange context for the NVME-FC LS request that was
+ *            received and which the response is to be sent for.
+ *            Used by the LLDD to pass the nvmet-fc transport (controller)
+ *              received LS's such as Create Association, Create Connection
+ *              and Disconnect Association.
+ *            Used by the LLDD to pass the nvme-fc transport (host)
+ *              received LS's such as Disconnect Association or Disconnect
+ *              Connection.
+ *
+ * The structure is allocated by the LLDD whenever a LS Request is received
+ * from the FC link. The address of the structure is passed to the nvmet-fc
+ * or nvme-fc layer via the xxx_rcv_ls_req() transport routines.
+ *
+ * The address of the structure is to be passed back to the LLDD
+ * when the response is to be transmit. The LLDD will use the address to
+ * map back to the LLDD exchange structure which maintains information such
+ * the remote N_Port that sent the LS as well as any FC exchange context.
+ * Upon completion of the LS response transmit, the LLDD will pass the
+ * address of the structure back to the transport LS rsp done() routine,
+ * allowing the transport release dma resources. Upon completion of
+ * the done() routine, no further access to the structure will be made by
+ * the transport and the LLDD can de-allocate the structure.
+ *
+ * Field initialization:
+ *   At the time of the xxx_rcv_ls_req() call, there is no content that
+ *     is valid in the structure.
+ *
+ *   When the structure is used for the LLDD->xmt_ls_rsp() call, the
+ *     transport layer will fully set the fields in order to specify the
+ *     response payload buffer and its length as well as the done routine
+ *     to be called upon completion of the transmit.  The transport layer
+ *     will also set a private pointer for its own use in the done routine.
+ *
+ * Values set by the transport layer prior to calling the LLDD xmt_ls_rsp
+ * entrypoint:
+ * @rspbuf:   pointer to the LS response buffer
+ * @rspdma:   PCI DMA address of the LS response buffer
+ * @rsplen:   Length, in bytes, of the LS response buffer
+ * @done:     The callback routine the LLDD is to invoke upon completion of
+ *            transmitting the LS response. req argument is the pointer to
+ *            the original ls request.
+ * @nvme_fc_private:  pointer to an internal transport-specific structure
+ *            used as part of the transport done() processing. The LLDD is
+ *            not to access this pointer.
+ */
+struct nvmefc_ls_rsp {
+	void		*rspbuf;
+	dma_addr_t	rspdma;
+	u16		rsplen;
+
+	void (*done)(struct nvmefc_ls_rsp *rsp);
+	void		*nvme_fc_private;	/* LLDD is not to access !! */
+};
+
+
+
+/*
+ * **********************  LLDD FC-NVME Host API ********************
+ *
+ *  For FC LLDD's that are the NVME Host role.
+ *
+ * ******************************************************************
+ */
+
+
+/**
+ * struct nvme_fc_port_info - port-specific ids and FC connection-specific
+ *                            data element used during NVME Host role
+ *                            registrations
+ *
+ * Static fields describing the port being registered:
+ * @node_name: FC WWNN for the port
+ * @port_name: FC WWPN for the port
+ * @port_role: What NVME roles are supported (see FC_PORT_ROLE_xxx)
+ * @dev_loss_tmo: maximum delay for reconnects to an association on
+ *             this device. Used only on a remoteport.
+ *
+ * Initialization values for dynamic port fields:
+ * @port_id:      FC N_Port_ID currently assigned the port. Upper 8 bits must
+ *                be set to 0.
+ */
+struct nvme_fc_port_info {
+	u64			node_name;
+	u64			port_name;
+	u32			port_role;
+	u32			port_id;
+	u32			dev_loss_tmo;
+};
+
 enum nvmefc_fcp_datadir {
 	NVMEFC_FCP_NODATA,	/* payload_length and sg_cnt will be zero */
 	NVMEFC_FCP_WRITE,
@@ -339,6 +413,21 @@ struct nvme_fc_remote_port {
  *       indicating an FC transport Aborted status.
  *       Entrypoint is Mandatory.
  *
+ * @xmt_ls_rsp:  Called to transmit the response to a FC-NVME FC-4 LS service.
+ *       The nvmefc_ls_rsp structure is the same LLDD-supplied exchange
+ *       structure specified in the nvme_fc_rcv_ls_req() call made when
+ *       the LS request was received. The structure will fully describe
+ *       the buffers for the response payload and the dma address of the
+ *       payload. The LLDD is to transmit the response (or return a
+ *       non-zero errno status), and upon completion of the transmit, call
+ *       the "done" routine specified in the nvmefc_ls_rsp structure
+ *       (argument to done is the address of the nvmefc_ls_rsp structure
+ *       itself). Upon the completion of the done routine, the LLDD shall
+ *       consider the LS handling complete and the nvmefc_ls_rsp structure
+ *       may be freed/released.
+ *       Entrypoint is mandatory if the LLDD calls the nvme_fc_rcv_ls_req()
+ *       entrypoint.
+ *
  * @max_hw_queues:  indicates the maximum number of hw queues the LLDD
  *       supports for cpu affinitization.
  *       Value is Mandatory. Must be at least 1.
@@ -373,7 +462,7 @@ struct nvme_fc_remote_port {
  * @lsrqst_priv_sz: The LLDD sets this field to the amount of additional
  *       memory that it would like fc nvme layer to allocate on the LLDD's
  *       behalf whenever a ls request structure is allocated. The additional
- *       memory area solely for the of the LLDD and its location is
+ *       memory area is solely for use by the LLDD and its location is
  *       specified by the ls_request->private pointer.
  *       Value is Mandatory. Allowed to be zero.
  *
@@ -409,6 +498,9 @@ struct nvme_fc_port_template {
 				struct nvme_fc_remote_port *,
 				void *hw_queue_handle,
 				struct nvmefc_fcp_req *);
+	int	(*xmt_ls_rsp)(struct nvme_fc_local_port *localport,
+				struct nvme_fc_remote_port *rport,
+				struct nvmefc_ls_rsp *ls_rsp);
 
 	u32	max_hw_queues;
 	u16	max_sgl_segments;
@@ -445,6 +537,34 @@ void nvme_fc_rescan_remoteport(struct nvme_fc_remote_port *remoteport);
 int nvme_fc_set_remoteport_devloss(struct nvme_fc_remote_port *remoteport,
 			u32 dev_loss_tmo);
 
+/*
+ * Routine called to pass a NVME-FC LS request, received by the lldd,
+ * to the nvme-fc transport.
+ *
+ * If the return value is zero: the LS was successfully accepted by the
+ *   transport.
+ * If the return value is non-zero: the transport has not accepted the
+ *   LS. The lldd should ABTS-LS the LS.
+ *
+ * Note: if the LLDD receives an ABTS for the LS prior to the transport
+ * calling the ops->xmt_ls_rsp() routine to transmit a response, the LLDD
+ * shall mark the LS as aborted, and when the xmt_ls_rsp() is called: the
+ * response shall not be transmitted and the struct nvmefc_ls_rsp() done
+ * routine shall be called.  The LLDD may transmit the ABTS response as
+ * soon as the LS was marked or can delay until the xmt_ls_rsp() call is
+ * made.
+ * Note: if an RCV LS was successfully posted to the transport and the
+ * remoteport is then unregistered before xmt_ls_rsp() was called for
+ * the lsrsp structure, the transport will still call xmt_ls_rsp()
+ * afterward to cleanup the outstanding lsrsp structure. The LLDD should
+ * noop the transmission of the rsp and call the lsrsp->done() routine
+ * to allow the lsrsp structure to be released.
+ */
+int nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *remoteport,
+			struct nvmefc_ls_rsp *lsrsp,
+			void *lsreqbuf, u32 lsreqbuf_len);
+
+
 
 /*
  * ***************  LLDD FC-NVME Target/Subsystem API ***************
@@ -474,55 +594,6 @@ struct nvmet_fc_port_info {
 };
 
 
-/**
- * struct nvmefc_tgt_ls_req - Structure used between LLDD and NVMET-FC
- *                            layer to represent the exchange context for
- *                            a FC-NVME Link Service (LS).
- *
- * The structure is allocated by the LLDD whenever a LS Request is received
- * from the FC link. The address of the structure is passed to the nvmet-fc
- * layer via the nvmet_fc_rcv_ls_req() call. The address of the structure
- * will be passed back to the LLDD when the response is to be transmit.
- * The LLDD is to use the address to map back to the LLDD exchange structure
- * which maintains information such as the targetport the LS was received
- * on, the remote FC NVME initiator that sent the LS, and any FC exchange
- * context.  Upon completion of the LS response transmit, the address of the
- * structure will be passed back to the LS rsp done() routine, allowing the
- * nvmet-fc layer to release dma resources. Upon completion of the done()
- * routine, no further access will be made by the nvmet-fc layer and the
- * LLDD can de-allocate the structure.
- *
- * Field initialization:
- *   At the time of the nvmet_fc_rcv_ls_req() call, there is no content that
- *     is valid in the structure.
- *
- *   When the structure is used for the LLDD->xmt_ls_rsp() call, the nvmet-fc
- *     layer will fully set the fields in order to specify the response
- *     payload buffer and its length as well as the done routine to be called
- *     upon compeletion of the transmit.  The nvmet-fc layer will also set a
- *     private pointer for its own use in the done routine.
- *
- * Values set by the NVMET-FC layer prior to calling the LLDD xmt_ls_rsp
- * entrypoint.
- * @rspbuf:   pointer to the LS response buffer
- * @rspdma:   PCI DMA address of the LS response buffer
- * @rsplen:   Length, in bytes, of the LS response buffer
- * @done:     The callback routine the LLDD is to invoke upon completion of
- *            transmitting the LS response. req argument is the pointer to
- *            the original ls request.
- * @nvmet_fc_private:  pointer to an internal NVMET-FC layer structure used
- *            as part of the NVMET-FC processing. The LLDD is not to access
- *            this pointer.
- */
-struct nvmefc_tgt_ls_req {
-	void		*rspbuf;
-	dma_addr_t	rspdma;
-	u16		rsplen;
-
-	void (*done)(struct nvmefc_tgt_ls_req *req);
-	void *nvmet_fc_private;		/* LLDD is not to access !! */
-};
-
 /* Operations that NVME-FC layer may request the LLDD to perform for FCP */
 enum {
 	NVMET_FCOP_READDATA	= 1,	/* xmt data to initiator */
@@ -697,17 +768,19 @@ struct nvmet_fc_target_port {
  *       Entrypoint is Mandatory.
  *
  * @xmt_ls_rsp:  Called to transmit the response to a FC-NVME FC-4 LS service.
- *       The nvmefc_tgt_ls_req structure is the same LLDD-supplied exchange
+ *       The nvmefc_ls_rsp structure is the same LLDD-supplied exchange
  *       structure specified in the nvmet_fc_rcv_ls_req() call made when
- *       the LS request was received.  The structure will fully describe
+ *       the LS request was received. The structure will fully describe
  *       the buffers for the response payload and the dma address of the
- *       payload. The LLDD is to transmit the response (or return a non-zero
- *       errno status), and upon completion of the transmit, call the
- *       "done" routine specified in the nvmefc_tgt_ls_req structure
- *       (argument to done is the ls reqwuest structure itself).
- *       After calling the done routine, the LLDD shall consider the
- *       LS handling complete and the nvmefc_tgt_ls_req structure may
- *       be freed/released.
+ *       payload. The LLDD is to transmit the response (or return a
+ *       non-zero errno status), and upon completion of the transmit, call
+ *       the "done" routine specified in the nvmefc_ls_rsp structure
+ *       (argument to done is the address of the nvmefc_ls_rsp structure
+ *       itself). Upon the completion of the done() routine, the LLDD shall
+ *       consider the LS handling complete and the nvmefc_ls_rsp structure
+ *       may be freed/released.
+ *       The transport will always call the xmt_ls_rsp() routine for any
+ *       LS received.
  *       Entrypoint is Mandatory.
  *
  * @fcp_op:  Called to perform a data transfer or transmit a response.
@@ -802,6 +875,39 @@ struct nvmet_fc_target_port {
  *       should cause the initiator to rescan the discovery controller
  *       on the targetport.
  *
+ * @ls_req:  Called to issue a FC-NVME FC-4 LS service request.
+ *       The nvme_fc_ls_req structure will fully describe the buffers for
+ *       the request payload and where to place the response payload.
+ *       The targetport that is to issue the LS request is identified by
+ *       the targetport argument.  The remote port that is to receive the
+ *       LS request is identified by the hosthandle argument. The nvmet-fc
+ *       transport is only allowed to issue FC-NVME LS's on behalf of an
+ *       association that was created prior by a Create Association LS.
+ *       The hosthandle will originate from the LLDD in the struct
+ *       nvmefc_ls_rsp structure for the Create Association LS that
+ *       was delivered to the transport. The transport will save the
+ *       hosthandle as an attribute of the association.  If the LLDD
+ *       loses connectivity with the remote port, it must call the
+ *       nvmet_fc_invalidate_host() routine to remove any references to
+ *       the remote port in the transport.
+ *       The LLDD is to allocate an exchange, issue the LS request, obtain
+ *       the LS response, and call the "done" routine specified in the
+ *       request structure (argument to done is the ls request structure
+ *       itself).
+ *       Entrypoint is Optional - but highly recommended.
+ *
+ * @ls_abort: called to request the LLDD to abort the indicated ls request.
+ *       The call may return before the abort has completed. After aborting
+ *       the request, the LLDD must still call the ls request done routine
+ *       indicating an FC transport Aborted status.
+ *       Entrypoint is Mandatory if the ls_req entry point is specified.
+ *
+ * @host_release: called to inform the LLDD that the request to invalidate
+ *       the host port indicated by the hosthandle has been fully completed.
+ *       No associations exist with the host port and there will be no
+ *       further references to hosthandle.
+ *       Entrypoint is Mandatory if the lldd calls nvmet_fc_invalidate_host().
+ *
  * @max_hw_queues:  indicates the maximum number of hw queues the LLDD
  *       supports for cpu affinitization.
  *       Value is Mandatory. Must be at least 1.
@@ -830,11 +936,19 @@ struct nvmet_fc_target_port {
  *       area solely for the of the LLDD and its location is specified by
  *       the targetport->private pointer.
  *       Value is Mandatory. Allowed to be zero.
+ *
+ * @lsrqst_priv_sz: The LLDD sets this field to the amount of additional
+ *       memory that it would like nvmet-fc layer to allocate on the LLDD's
+ *       behalf whenever a ls request structure is allocated. The additional
+ *       memory area is solely for use by the LLDD and its location is
+ *       specified by the ls_request->private pointer.
+ *       Value is Mandatory. Allowed to be zero.
+ *
  */
 struct nvmet_fc_target_template {
 	void (*targetport_delete)(struct nvmet_fc_target_port *tgtport);
 	int (*xmt_ls_rsp)(struct nvmet_fc_target_port *tgtport,
-				struct nvmefc_tgt_ls_req *tls_req);
+				struct nvmefc_ls_rsp *ls_rsp);
 	int (*fcp_op)(struct nvmet_fc_target_port *tgtport,
 				struct nvmefc_tgt_fcp_req *fcpreq);
 	void (*fcp_abort)(struct nvmet_fc_target_port *tgtport,
@@ -844,6 +958,11 @@ struct nvmet_fc_target_template {
 	void (*defer_rcv)(struct nvmet_fc_target_port *tgtport,
 				struct nvmefc_tgt_fcp_req *fcpreq);
 	void (*discovery_event)(struct nvmet_fc_target_port *tgtport);
+	int  (*ls_req)(struct nvmet_fc_target_port *targetport,
+				void *hosthandle, struct nvmefc_ls_req *lsreq);
+	void (*ls_abort)(struct nvmet_fc_target_port *targetport,
+				void *hosthandle, struct nvmefc_ls_req *lsreq);
+	void (*host_release)(void *hosthandle);
 
 	u32	max_hw_queues;
 	u16	max_sgl_segments;
@@ -852,7 +971,9 @@ struct nvmet_fc_target_template {
 
 	u32	target_features;
 
+	/* sizes of additional private data for data structures */
 	u32	target_priv_sz;
+	u32	lsrqst_priv_sz;
 };
 
 
@@ -863,10 +984,61 @@ int nvmet_fc_register_targetport(struct nvmet_fc_port_info *portinfo,
 
 int nvmet_fc_unregister_targetport(struct nvmet_fc_target_port *tgtport);
 
+/*
+ * Routine called to pass a NVME-FC LS request, received by the lldd,
+ * to the nvmet-fc transport.
+ *
+ * If the return value is zero: the LS was successfully accepted by the
+ *   transport.
+ * If the return value is non-zero: the transport has not accepted the
+ *   LS. The lldd should ABTS-LS the LS.
+ *
+ * Note: if the LLDD receives an ABTS for the LS prior to the transport
+ * calling the ops->xmt_ls_rsp() routine to transmit a response, the LLDD
+ * shall mark the LS as aborted, and when the xmt_ls_rsp() is called: the
+ * response shall not be transmitted and the struct nvmefc_ls_rsp() done
+ * routine shall be called.  The LLDD may transmit the ABTS response as
+ * soon as the LS was marked or can delay until the xmt_ls_rsp() call is
+ * made.
+ * Note: if an RCV LS was successfully posted to the transport and the
+ * targetport is then unregistered before xmt_ls_rsp() was called for
+ * the lsrsp structure, the transport will still call xmt_ls_rsp()
+ * afterward to cleanup the outstanding lsrsp structure. The LLDD should
+ * noop the transmission of the rsp and call the lsrsp->done() routine
+ * to allow the lsrsp structure to be released.
+ */
 int nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *tgtport,
-			struct nvmefc_tgt_ls_req *lsreq,
+			void *hosthandle,
+			struct nvmefc_ls_rsp *rsp,
 			void *lsreqbuf, u32 lsreqbuf_len);
 
+/*
+ * Routine called by the LLDD whenever it has a logout or loss of
+ * connectivity to a NVME-FC host port for which there had been
+ * active NVMe controllers.  The host port is indicated by the
+ * hosthandle. The hosthandle is given to the nvmet-fc transport
+ * when a NVME LS was received, typically to create a new association.
+ * The nvmet-fc transport will cache the hostport value with the
+ * association for use in LS requests for the association.
+ * When the LLDD calls this routine, the nvmet-fc transport will
+ * immediately terminate all associations that were created with
+ * the hosthandle host port.
+ * The LLDD, after calling this routine and having control returned,
+ * must assume the transport may subsequently utilize hosthandle as
+ * part of sending LS's to terminate the association.  The LLDD
+ * should reject the LS's if they are attempted.
+ * Once the last association has terminated for the hosthandle host
+ * port, the nvmet-fc transport will call the ops->host_release()
+ * callback. As of the callback, the nvmet-fc transport will no
+ * longer reference hosthandle.
+ */
+void nvmet_fc_invalidate_host(struct nvmet_fc_target_port *tgtport,
+			void *hosthandle);
+
+/*
+ * If nvmet_fc_rcv_fcp_req returns non-zero, the transport has not accepted
+ * the FCP cmd. The lldd should ABTS-LS the cmd.
+ */
 int nvmet_fc_rcv_fcp_req(struct nvmet_fc_target_port *tgtport,
 			struct nvmefc_tgt_fcp_req *fcpreq,
 			void *cmdiubuf, u32 cmdiubuf_len);
-- 
2.13.7



* [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (2 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 20:40   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 05/29] lpfc: " James Smart
                   ` (26 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Deal with the following naming changes in the header:
  nvmefc_tgt_ls_req -> nvmefc_ls_rsp
  nvmefc_tgt_ls_req.nvmet_fc_private -> nvmefc_ls_rsp.nvme_fc_private

Change the calling sequence of nvmet_fc_rcv_ls_req() to pass the new
hosthandle argument.

Add stubs for new interfaces:
host/fc.c: nvme_fc_rcv_ls_req()
target/fc.c: nvmet_fc_invalidate_host()

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/host/fc.c       | 35 ++++++++++++++++++++
 drivers/nvme/target/fc.c     | 77 ++++++++++++++++++++++++++++++++------------
 drivers/nvme/target/fcloop.c | 20 ++++++------
 3 files changed, 102 insertions(+), 30 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 5a70ac395d53..f8f79cd88769 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -1465,6 +1465,41 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
 		kfree(lsop);
 }
 
+/**
+ * nvme_fc_rcv_ls_req - transport entry point called by an LLDD
+ *                       upon the reception of a NVME LS request.
+ *
+ * The nvme-fc layer will copy payload to an internal structure for
+ * processing.  As such, upon completion of the routine, the LLDD may
+ * immediately free/reuse the LS request buffer passed in the call.
+ *
+ * If this routine returns error, the LLDD should abort the exchange.
+ *
+ * @remoteport: pointer to the (registered) remote port that the LS
+ *              was received from. The remoteport is associated with
+ *              a specific localport.
+ * @lsrsp:      pointer to a nvmefc_ls_rsp response structure to be
+ *              used to reference the exchange corresponding to the LS
+ *              when issuing an ls response.
+ * @lsreqbuf:   pointer to the buffer containing the LS Request
+ * @lsreqbuf_len: length, in bytes, of the received LS request
+ */
+int
+nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
+			struct nvmefc_ls_rsp *lsrsp,
+			void *lsreqbuf, u32 lsreqbuf_len)
+{
+	struct nvme_fc_rport *rport = remoteport_to_rport(portptr);
+	struct nvme_fc_lport *lport = rport->lport;
+
+	/* validate there's a routine to transmit a response */
+	if (!lport->ops->xmt_ls_rsp)
+		return(-EINVAL);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_fc_rcv_ls_req);
+
 
 /* *********************** NVME Ctrl Routines **************************** */
 
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index a8ceb7721640..aac7869a70bb 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -28,7 +28,7 @@ struct nvmet_fc_tgtport;
 struct nvmet_fc_tgt_assoc;
 
 struct nvmet_fc_ls_iod {
-	struct nvmefc_tgt_ls_req	*lsreq;
+	struct nvmefc_ls_rsp		*lsrsp;
 	struct nvmefc_tgt_fcp_req	*fcpreq;	/* only if RS */
 
 	struct list_head		ls_list;	/* tgtport->ls_list */
@@ -1146,6 +1146,42 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 }
 
+/**
+ * nvmet_fc_invalidate_host - transport entry point called by an LLDD
+ *                       to remove references to a hosthandle for LS's.
+ *
+ * The nvmet-fc layer ensures that any references to the hosthandle
+ * on the targetport are forgotten (set to NULL).  The LLDD will
+ * typically call this when a login with a remote host port has been
+ * lost, thus LS's for the remote host port are no longer possible.
+ *
+ * If an LS request is outstanding to the targetport/hosthandle (or
+ * issued concurrently with the call to invalidate the host), the
+ * LLDD is responsible for terminating/aborting the LS and completing
+ * the LS request. It is recommended that these terminations/aborts
+ * occur after calling to invalidate the host handle to avoid additional
+ * retries by the nvmet-fc transport. The nvmet-fc transport may
+ * continue to reference host handle while it cleans up outstanding
+ * NVME associations. The nvmet-fc transport will call the
+ * ops->host_release() callback to notify the LLDD that all references
+ * are complete and the related host handle can be recovered.
+ * Note: if there are no references, the callback may be called before
+ * the invalidate host call returns.
+ *
+ * @target_port: pointer to the (registered) target port that a prior
+ *              LS was received on and which supplied the transport the
+ *              hosthandle.
+ * @hosthandle: the handle (pointer) that represents the host port
+ *              that no longer has connectivity and that LS's should
+ *              no longer be directed to.
+ */
+void
+nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
+			void *hosthandle)
+{
+}
+EXPORT_SYMBOL_GPL(nvmet_fc_invalidate_host);
+
 /*
  * nvmet layer has called to terminate an association
  */
@@ -1371,7 +1407,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 		dev_err(tgtport->dev,
 			"Create Association LS failed: %s\n",
 			validation_errors[ret]);
-		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
+		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
 				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
 				FCNVME_RJT_RC_LOGIC,
 				FCNVME_RJT_EXP_NONE, 0);
@@ -1384,7 +1420,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 
 	/* format a response */
 
-	iod->lsreq->rsplen = sizeof(*acc);
+	iod->lsrsp->rsplen = sizeof(*acc);
 
 	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
 			fcnvme_lsdesc_len(
@@ -1462,7 +1498,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
 		dev_err(tgtport->dev,
 			"Create Connection LS failed: %s\n",
 			validation_errors[ret]);
-		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
+		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
 				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
 				(ret == VERR_NO_ASSOC) ?
 					FCNVME_RJT_RC_INV_ASSOC :
@@ -1477,7 +1513,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
 
 	/* format a response */
 
-	iod->lsreq->rsplen = sizeof(*acc);
+	iod->lsrsp->rsplen = sizeof(*acc);
 
 	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
 			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_cr_conn_acc)),
@@ -1542,7 +1578,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 		dev_err(tgtport->dev,
 			"Disconnect LS failed: %s\n",
 			validation_errors[ret]);
-		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
+		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
 				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
 				(ret == VERR_NO_ASSOC) ?
 					FCNVME_RJT_RC_INV_ASSOC :
@@ -1555,7 +1591,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 
 	/* format a response */
 
-	iod->lsreq->rsplen = sizeof(*acc);
+	iod->lsrsp->rsplen = sizeof(*acc);
 
 	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
 			fcnvme_lsdesc_len(
@@ -1577,9 +1613,9 @@ static void nvmet_fc_fcp_nvme_cmd_done(struct nvmet_req *nvme_req);
 static const struct nvmet_fabrics_ops nvmet_fc_tgt_fcp_ops;
 
 static void
-nvmet_fc_xmt_ls_rsp_done(struct nvmefc_tgt_ls_req *lsreq)
+nvmet_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
 {
-	struct nvmet_fc_ls_iod *iod = lsreq->nvmet_fc_private;
+	struct nvmet_fc_ls_iod *iod = lsrsp->nvme_fc_private;
 	struct nvmet_fc_tgtport *tgtport = iod->tgtport;
 
 	fc_dma_sync_single_for_cpu(tgtport->dev, iod->rspdma,
@@ -1597,9 +1633,9 @@ nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 	fc_dma_sync_single_for_device(tgtport->dev, iod->rspdma,
 				  NVME_FC_MAX_LS_BUFFER_SIZE, DMA_TO_DEVICE);
 
-	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsreq);
+	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsrsp);
 	if (ret)
-		nvmet_fc_xmt_ls_rsp_done(iod->lsreq);
+		nvmet_fc_xmt_ls_rsp_done(iod->lsrsp);
 }
 
 /*
@@ -1612,12 +1648,12 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 	struct fcnvme_ls_rqst_w0 *w0 =
 			(struct fcnvme_ls_rqst_w0 *)iod->rqstbuf;
 
-	iod->lsreq->nvmet_fc_private = iod;
-	iod->lsreq->rspbuf = iod->rspbuf;
-	iod->lsreq->rspdma = iod->rspdma;
-	iod->lsreq->done = nvmet_fc_xmt_ls_rsp_done;
+	iod->lsrsp->nvme_fc_private = iod;
+	iod->lsrsp->rspbuf = iod->rspbuf;
+	iod->lsrsp->rspdma = iod->rspdma;
+	iod->lsrsp->done = nvmet_fc_xmt_ls_rsp_done;
 	/* Be preventative. handlers will later set to valid length */
-	iod->lsreq->rsplen = 0;
+	iod->lsrsp->rsplen = 0;
 
 	iod->assoc = NULL;
 
@@ -1640,7 +1676,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 		nvmet_fc_ls_disconnect(tgtport, iod);
 		break;
 	default:
-		iod->lsreq->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
+		iod->lsrsp->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
 				NVME_FC_MAX_LS_BUFFER_SIZE, w0->ls_cmd,
 				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
 	}
@@ -1674,14 +1710,15 @@ nvmet_fc_handle_ls_rqst_work(struct work_struct *work)
  *
  * @target_port: pointer to the (registered) target port the LS was
  *              received on.
- * @lsreq:      pointer to a lsreq request structure to be used to reference
+ * @lsrsp:      pointer to a lsrsp structure to be used to reference
  *              the exchange corresponding to the LS.
  * @lsreqbuf:   pointer to the buffer containing the LS Request
  * @lsreqbuf_len: length, in bytes, of the received LS request
  */
 int
 nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
-			struct nvmefc_tgt_ls_req *lsreq,
+			void *hosthandle,
+			struct nvmefc_ls_rsp *lsrsp,
 			void *lsreqbuf, u32 lsreqbuf_len)
 {
 	struct nvmet_fc_tgtport *tgtport = targetport_to_tgtport(target_port);
@@ -1699,7 +1736,7 @@ nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
 		return -ENOENT;
 	}
 
-	iod->lsreq = lsreq;
+	iod->lsrsp = lsrsp;
 	iod->fcpreq = NULL;
 	memcpy(iod->rqstbuf, lsreqbuf, lsreqbuf_len);
 	iod->rqstdatalen = lsreqbuf_len;
diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 1c50af6219f3..130932a5db0c 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -227,7 +227,7 @@ struct fcloop_lsreq {
 	struct fcloop_tport		*tport;
 	struct nvmefc_ls_req		*lsreq;
 	struct work_struct		work;
-	struct nvmefc_tgt_ls_req	tgt_ls_req;
+	struct nvmefc_ls_rsp		ls_rsp;
 	int				status;
 };
 
@@ -265,9 +265,9 @@ struct fcloop_ini_fcpreq {
 };
 
 static inline struct fcloop_lsreq *
-tgt_ls_req_to_lsreq(struct nvmefc_tgt_ls_req *tgt_lsreq)
+ls_rsp_to_lsreq(struct nvmefc_ls_rsp *lsrsp)
 {
-	return container_of(tgt_lsreq, struct fcloop_lsreq, tgt_ls_req);
+	return container_of(lsrsp, struct fcloop_lsreq, ls_rsp);
 }
 
 static inline struct fcloop_fcpreq *
@@ -330,7 +330,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
 
 	tls_req->status = 0;
 	tls_req->tport = rport->targetport->private;
-	ret = nvmet_fc_rcv_ls_req(rport->targetport, &tls_req->tgt_ls_req,
+	ret = nvmet_fc_rcv_ls_req(rport->targetport, NULL, &tls_req->ls_rsp,
 				 lsreq->rqstaddr, lsreq->rqstlen);
 
 	return ret;
@@ -338,15 +338,15 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
 
 static int
 fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *tport,
-			struct nvmefc_tgt_ls_req *tgt_lsreq)
+			struct nvmefc_ls_rsp *lsrsp)
 {
-	struct fcloop_lsreq *tls_req = tgt_ls_req_to_lsreq(tgt_lsreq);
+	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
 	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
 
-	memcpy(lsreq->rspaddr, tgt_lsreq->rspbuf,
-		((lsreq->rsplen < tgt_lsreq->rsplen) ?
-				lsreq->rsplen : tgt_lsreq->rsplen));
-	tgt_lsreq->done(tgt_lsreq);
+	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
+		((lsreq->rsplen < lsrsp->rsplen) ?
+				lsreq->rsplen : lsrsp->rsplen));
+	lsrsp->done(lsrsp);
 
 	schedule_work(&tls_req->work);
 
-- 
2.13.7



* [PATCH 05/29] lpfc: adapt code to changed names in api header
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (3 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 20:40   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 06/29] nvme-fcloop: Fix deallocation of working context James Smart
                   ` (25 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Deal with the following naming changes in the header:
  nvmefc_tgt_ls_req -> nvmefc_ls_rsp
  nvmefc_tgt_ls_req.nvmet_fc_private -> nvmefc_ls_rsp.nvme_fc_private

Change the calling sequence of nvmet_fc_rcv_ls_req() to pass the new
hosthandle argument.

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_nvmet.c | 10 +++++-----
 drivers/scsi/lpfc/lpfc_nvmet.h |  2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 9dc9afe1c255..47b983eddbb2 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -302,7 +302,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 			  struct lpfc_wcqe_complete *wcqe)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
-	struct nvmefc_tgt_ls_req *rsp;
+	struct nvmefc_ls_rsp *rsp;
 	struct lpfc_nvmet_rcv_ctx *ctxp;
 	uint32_t status, result;
 
@@ -335,7 +335,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	}
 
 out:
-	rsp = &ctxp->ctx.ls_req;
+	rsp = &ctxp->ctx.ls_rsp;
 
 	lpfc_nvmeio_data(phba, "NVMET LS  CMPL: xri x%x stat x%x result x%x\n",
 			 ctxp->oxid, status, result);
@@ -830,10 +830,10 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 
 static int
 lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
-		      struct nvmefc_tgt_ls_req *rsp)
+		      struct nvmefc_ls_rsp *rsp)
 {
 	struct lpfc_nvmet_rcv_ctx *ctxp =
-		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.ls_req);
+		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.ls_rsp);
 	struct lpfc_hba *phba = ctxp->phba;
 	struct hbq_dmabuf *nvmebuf =
 		(struct hbq_dmabuf *)ctxp->rqb_buffer;
@@ -2000,7 +2000,7 @@ lpfc_nvmet_unsol_ls_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	 * lpfc_nvmet_xmt_ls_rsp_cmp should free the allocated ctxp.
 	 */
 	atomic_inc(&tgtp->rcv_ls_req_in);
-	rc = nvmet_fc_rcv_ls_req(phba->targetport, &ctxp->ctx.ls_req,
+	rc = nvmet_fc_rcv_ls_req(phba->targetport, NULL, &ctxp->ctx.ls_rsp,
 				 payload, size);
 
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
index b80b1639b9a7..f0196f3ef90d 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ b/drivers/scsi/lpfc/lpfc_nvmet.h
@@ -105,7 +105,7 @@ struct lpfc_nvmet_ctx_info {
 
 struct lpfc_nvmet_rcv_ctx {
 	union {
-		struct nvmefc_tgt_ls_req ls_req;
+		struct nvmefc_ls_rsp ls_rsp;
 		struct nvmefc_tgt_fcp_req fcp_req;
 	} ctx;
 	struct list_head list;
-- 
2.13.7



* [PATCH 06/29] nvme-fcloop: Fix deallocation of working context
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (4 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 05/29] lpfc: " James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 20:43   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions James Smart
                   ` (24 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

There has been a longstanding bug where LS completions free the ls
op's, particularly the disconnect LS, while still executing on a work
context that lives in the memory being freed. Not a good thing to do.

Rework LS handling to make the callbacks from work in the rport
context rather than in the ls_request context.
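
The general shape of the fix, sketched below with hypothetical names:
the work item lives in the long-lived parent (the rport), never in the
object that the completion callback may free.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct parent {                         /* e.g. the rport: outlives its LS ops */
        spinlock_t              lock;
        struct list_head        compl_list;
        struct work_struct      compl_work;
};

struct child {                          /* e.g. an LS op: may be freed by done() */
        struct list_head        compl_node;
        void (*done)(struct child *c);
};

static void parent_compl_work(struct work_struct *work)
{
        struct parent *p = container_of(work, struct parent, compl_work);
        struct child *c;

        spin_lock(&p->lock);
        while ((c = list_first_entry_or_null(&p->compl_list,
                                             struct child, compl_node))) {
                list_del(&c->compl_node);
                spin_unlock(&p->lock);

                c->done(c);             /* may free 'c'; never touch it again */

                spin_lock(&p->lock);
        }
        spin_unlock(&p->lock);
}

/* completion side: queue the child and kick the parent's work */
static void child_complete(struct parent *p, struct child *c)
{
        spin_lock(&p->lock);
        list_add_tail(&c->compl_node, &p->compl_list);
        spin_unlock(&p->lock);
        schedule_work(&p->compl_work);
}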

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fcloop.c | 76 ++++++++++++++++++++++++++++++--------------
 1 file changed, 52 insertions(+), 24 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 130932a5db0c..6533f4196005 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -198,10 +198,13 @@ struct fcloop_lport_priv {
 };
 
 struct fcloop_rport {
-	struct nvme_fc_remote_port *remoteport;
-	struct nvmet_fc_target_port *targetport;
-	struct fcloop_nport *nport;
-	struct fcloop_lport *lport;
+	struct nvme_fc_remote_port	*remoteport;
+	struct nvmet_fc_target_port	*targetport;
+	struct fcloop_nport		*nport;
+	struct fcloop_lport		*lport;
+	spinlock_t			lock;
+	struct list_head		ls_list;
+	struct work_struct		ls_work;
 };
 
 struct fcloop_tport {
@@ -224,11 +227,10 @@ struct fcloop_nport {
 };
 
 struct fcloop_lsreq {
-	struct fcloop_tport		*tport;
 	struct nvmefc_ls_req		*lsreq;
-	struct work_struct		work;
 	struct nvmefc_ls_rsp		ls_rsp;
 	int				status;
+	struct list_head		ls_list; /* fcloop_rport->ls_list */
 };
 
 struct fcloop_rscn {
@@ -292,21 +294,32 @@ fcloop_delete_queue(struct nvme_fc_local_port *localport,
 {
 }
 
-
-/*
- * Transmit of LS RSP done (e.g. buffers all set). call back up
- * initiator "done" flows.
- */
 static void
-fcloop_tgt_lsrqst_done_work(struct work_struct *work)
+fcloop_rport_lsrqst_work(struct work_struct *work)
 {
-	struct fcloop_lsreq *tls_req =
-		container_of(work, struct fcloop_lsreq, work);
-	struct fcloop_tport *tport = tls_req->tport;
-	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
+	struct fcloop_rport *rport =
+		container_of(work, struct fcloop_rport, ls_work);
+	struct fcloop_lsreq *tls_req;
 
-	if (!tport || tport->remoteport)
-		lsreq->done(lsreq, tls_req->status);
+	spin_lock(&rport->lock);
+	for (;;) {
+		tls_req = list_first_entry_or_null(&rport->ls_list,
+				struct fcloop_lsreq, ls_list);
+		if (!tls_req)
+			break;
+
+		list_del(&tls_req->ls_list);
+		spin_unlock(&rport->lock);
+
+		tls_req->lsreq->done(tls_req->lsreq, tls_req->status);
+		/*
+		 * callee may free memory containing tls_req.
+		 * do not reference lsreq after this.
+		 */
+
+		spin_lock(&rport->lock);
+	}
+	spin_unlock(&rport->lock);
 }
 
 static int
@@ -319,17 +332,18 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
 	int ret = 0;
 
 	tls_req->lsreq = lsreq;
-	INIT_WORK(&tls_req->work, fcloop_tgt_lsrqst_done_work);
+	INIT_LIST_HEAD(&tls_req->ls_list);
 
 	if (!rport->targetport) {
 		tls_req->status = -ECONNREFUSED;
-		tls_req->tport = NULL;
-		schedule_work(&tls_req->work);
+		spin_lock(&rport->lock);
+		list_add_tail(&rport->ls_list, &tls_req->ls_list);
+		spin_unlock(&rport->lock);
+		schedule_work(&rport->ls_work);
 		return ret;
 	}
 
 	tls_req->status = 0;
-	tls_req->tport = rport->targetport->private;
 	ret = nvmet_fc_rcv_ls_req(rport->targetport, NULL, &tls_req->ls_rsp,
 				 lsreq->rqstaddr, lsreq->rqstlen);
 
@@ -337,18 +351,28 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
 }
 
 static int
-fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *tport,
+fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
 			struct nvmefc_ls_rsp *lsrsp)
 {
 	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
 	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
+	struct fcloop_tport *tport = targetport->private;
+	struct nvme_fc_remote_port *remoteport = tport->remoteport;
+	struct fcloop_rport *rport;
 
 	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
 		((lsreq->rsplen < lsrsp->rsplen) ?
 				lsreq->rsplen : lsrsp->rsplen));
+
 	lsrsp->done(lsrsp);
 
-	schedule_work(&tls_req->work);
+	if (remoteport) {
+		rport = remoteport->private;
+		spin_lock(&rport->lock);
+		list_add_tail(&rport->ls_list, &tls_req->ls_list);
+		spin_unlock(&rport->lock);
+		schedule_work(&rport->ls_work);
+	}
 
 	return 0;
 }
@@ -834,6 +858,7 @@ fcloop_remoteport_delete(struct nvme_fc_remote_port *remoteport)
 {
 	struct fcloop_rport *rport = remoteport->private;
 
+	flush_work(&rport->ls_work);
 	fcloop_nport_put(rport->nport);
 }
 
@@ -1136,6 +1161,9 @@ fcloop_create_remote_port(struct device *dev, struct device_attribute *attr,
 	rport->nport = nport;
 	rport->lport = nport->lport;
 	nport->rport = rport;
+	spin_lock_init(&rport->lock);
+	INIT_WORK(&rport->ls_work, fcloop_rport_lsrqst_work);
+	INIT_LIST_HEAD(&rport->ls_list);
 
 	return count;
 }
-- 
2.13.7



* [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (5 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 06/29] nvme-fcloop: Fix deallocation of working context James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  8:35   ` Hannes Reinecke
  2020-03-26 16:36   ` Himanshu Madhani
  2020-02-05 18:37 ` [PATCH 08/29] nvmet-fc: Better size LS buffers James Smart
                   ` (23 subsequent siblings)
  30 siblings, 2 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Routines currently in the target transport will also be needed by the
host transport. Error definitions should now be shared, as both sides
will process requests and responses to requests.

Move the common declarations to a new fc.h header kept in the host
subdirectory.
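
As a rough, self-contained sketch of the approach (the header guard and
helper names below are made up, not the actual fc.h contents): shared
static inline helpers live in one header that both the host and target
source files include.

#include <stdio.h>

/* what would live in the shared header (e.g. a common fc.h) */
#ifndef EXAMPLE_SHARED_FC_H
#define EXAMPLE_SHARED_FC_H

/* one static inline definition, visible to every file that includes it */
static inline int example_fmt_rsp_hdr(char *buf, int buflen, char ls_cmd)
{
        if (buflen < 1)
                return -1;
        buf[0] = ls_cmd;        /* stand-in for the real header formatting */
        return 1;
}

#endif /* EXAMPLE_SHARED_FC_H */

/* in the kernel, both host/fc.c and target/fc.c would include the header
 * and call the helper; here a single main() stands in for both callers
 */
int main(void)
{
        char rsp[16];

        return example_fmt_rsp_hdr(rsp, sizeof(rsp), 0x02) == 1 ? 0 : 1;
}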

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/host/fc.c   |  36 +------------
 drivers/nvme/host/fc.h   | 133 +++++++++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/fc.c | 115 ++++------------------------------------
 3 files changed, 143 insertions(+), 141 deletions(-)
 create mode 100644 drivers/nvme/host/fc.h

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index f8f79cd88769..2e5163600f63 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -14,6 +14,7 @@
 #include "fabrics.h"
 #include <linux/nvme-fc-driver.h>
 #include <linux/nvme-fc.h>
+#include "fc.h"
 #include <scsi/scsi_transport_fc.h>
 
 /* *************************** Data Structures/Defines ****************** */
@@ -1141,41 +1142,6 @@ nvme_fc_send_ls_req_async(struct nvme_fc_rport *rport,
 	return __nvme_fc_send_ls_req(rport, lsop, done);
 }
 
-/* Validation Error indexes into the string table below */
-enum {
-	VERR_NO_ERROR		= 0,
-	VERR_LSACC		= 1,
-	VERR_LSDESC_RQST	= 2,
-	VERR_LSDESC_RQST_LEN	= 3,
-	VERR_ASSOC_ID		= 4,
-	VERR_ASSOC_ID_LEN	= 5,
-	VERR_CONN_ID		= 6,
-	VERR_CONN_ID_LEN	= 7,
-	VERR_CR_ASSOC		= 8,
-	VERR_CR_ASSOC_ACC_LEN	= 9,
-	VERR_CR_CONN		= 10,
-	VERR_CR_CONN_ACC_LEN	= 11,
-	VERR_DISCONN		= 12,
-	VERR_DISCONN_ACC_LEN	= 13,
-};
-
-static char *validation_errors[] = {
-	"OK",
-	"Not LS_ACC",
-	"Not LSDESC_RQST",
-	"Bad LSDESC_RQST Length",
-	"Not Association ID",
-	"Bad Association ID Length",
-	"Not Connection ID",
-	"Bad Connection ID Length",
-	"Not CR_ASSOC Rqst",
-	"Bad CR_ASSOC ACC Length",
-	"Not CR_CONN Rqst",
-	"Bad CR_CONN ACC Length",
-	"Not Disconnect Rqst",
-	"Bad Disconnect ACC Length",
-};
-
 static int
 nvme_fc_connect_admin_queue(struct nvme_fc_ctrl *ctrl,
 	struct nvme_fc_queue *queue, u16 qsize, u16 ersp_ratio)
diff --git a/drivers/nvme/host/fc.h b/drivers/nvme/host/fc.h
new file mode 100644
index 000000000000..d2861cdd58ee
--- /dev/null
+++ b/drivers/nvme/host/fc.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016, Avago Technologies
+ */
+
+#ifndef _NVME_FC_TRANSPORT_H
+#define _NVME_FC_TRANSPORT_H 1
+
+
+/*
+ * Common definitions between the nvme_fc (host) transport and
+ * nvmet_fc (target) transport implementation.
+ */
+
+/*
+ * ******************  FC-NVME LS HANDLING ******************
+ */
+
+static inline void
+nvme_fc_format_rsp_hdr(void *buf, u8 ls_cmd, __be32 desc_len, u8 rqst_ls_cmd)
+{
+	struct fcnvme_ls_acc_hdr *acc = buf;
+
+	acc->w0.ls_cmd = ls_cmd;
+	acc->desc_list_len = desc_len;
+	acc->rqst.desc_tag = cpu_to_be32(FCNVME_LSDESC_RQST);
+	acc->rqst.desc_len =
+			fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rqst));
+	acc->rqst.w0.ls_cmd = rqst_ls_cmd;
+}
+
+static inline int
+nvme_fc_format_rjt(void *buf, u16 buflen, u8 ls_cmd,
+			u8 reason, u8 explanation, u8 vendor)
+{
+	struct fcnvme_ls_rjt *rjt = buf;
+
+	nvme_fc_format_rsp_hdr(buf, FCNVME_LSDESC_RQST,
+			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_rjt)),
+			ls_cmd);
+	rjt->rjt.desc_tag = cpu_to_be32(FCNVME_LSDESC_RJT);
+	rjt->rjt.desc_len = fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rjt));
+	rjt->rjt.reason_code = reason;
+	rjt->rjt.reason_explanation = explanation;
+	rjt->rjt.vendor = vendor;
+
+	return sizeof(struct fcnvme_ls_rjt);
+}
+
+/* Validation Error indexes into the string table below */
+enum {
+	VERR_NO_ERROR		= 0,
+	VERR_CR_ASSOC_LEN	= 1,
+	VERR_CR_ASSOC_RQST_LEN	= 2,
+	VERR_CR_ASSOC_CMD	= 3,
+	VERR_CR_ASSOC_CMD_LEN	= 4,
+	VERR_ERSP_RATIO		= 5,
+	VERR_ASSOC_ALLOC_FAIL	= 6,
+	VERR_QUEUE_ALLOC_FAIL	= 7,
+	VERR_CR_CONN_LEN	= 8,
+	VERR_CR_CONN_RQST_LEN	= 9,
+	VERR_ASSOC_ID		= 10,
+	VERR_ASSOC_ID_LEN	= 11,
+	VERR_NO_ASSOC		= 12,
+	VERR_CONN_ID		= 13,
+	VERR_CONN_ID_LEN	= 14,
+	VERR_INVAL_CONN		= 15,
+	VERR_CR_CONN_CMD	= 16,
+	VERR_CR_CONN_CMD_LEN	= 17,
+	VERR_DISCONN_LEN	= 18,
+	VERR_DISCONN_RQST_LEN	= 19,
+	VERR_DISCONN_CMD	= 20,
+	VERR_DISCONN_CMD_LEN	= 21,
+	VERR_DISCONN_SCOPE	= 22,
+	VERR_RS_LEN		= 23,
+	VERR_RS_RQST_LEN	= 24,
+	VERR_RS_CMD		= 25,
+	VERR_RS_CMD_LEN		= 26,
+	VERR_RS_RCTL		= 27,
+	VERR_RS_RO		= 28,
+	VERR_LSACC		= 29,
+	VERR_LSDESC_RQST	= 30,
+	VERR_LSDESC_RQST_LEN	= 31,
+	VERR_CR_ASSOC		= 32,
+	VERR_CR_ASSOC_ACC_LEN	= 33,
+	VERR_CR_CONN		= 34,
+	VERR_CR_CONN_ACC_LEN	= 35,
+	VERR_DISCONN		= 36,
+	VERR_DISCONN_ACC_LEN	= 37,
+};
+
+static char *validation_errors[] = {
+	"OK",
+	"Bad CR_ASSOC Length",
+	"Bad CR_ASSOC Rqst Length",
+	"Not CR_ASSOC Cmd",
+	"Bad CR_ASSOC Cmd Length",
+	"Bad Ersp Ratio",
+	"Association Allocation Failed",
+	"Queue Allocation Failed",
+	"Bad CR_CONN Length",
+	"Bad CR_CONN Rqst Length",
+	"Not Association ID",
+	"Bad Association ID Length",
+	"No Association",
+	"Not Connection ID",
+	"Bad Connection ID Length",
+	"Invalid Connection ID",
+	"Not CR_CONN Cmd",
+	"Bad CR_CONN Cmd Length",
+	"Bad DISCONN Length",
+	"Bad DISCONN Rqst Length",
+	"Not DISCONN Cmd",
+	"Bad DISCONN Cmd Length",
+	"Bad Disconnect Scope",
+	"Bad RS Length",
+	"Bad RS Rqst Length",
+	"Not RS Cmd",
+	"Bad RS Cmd Length",
+	"Bad RS R_CTL",
+	"Bad RS Relative Offset",
+	"Not LS_ACC",
+	"Not LSDESC_RQST",
+	"Bad LSDESC_RQST Length",
+	"Not CR_ASSOC Rqst",
+	"Bad CR_ASSOC ACC Length",
+	"Not CR_CONN Rqst",
+	"Bad CR_CONN ACC Length",
+	"Not Disconnect Rqst",
+	"Bad Disconnect ACC Length",
+};
+
+#endif /* _NVME_FC_TRANSPORT_H */
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index aac7869a70bb..1f3118a3b0a3 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -14,6 +14,7 @@
 #include "nvmet.h"
 #include <linux/nvme-fc-driver.h>
 #include <linux/nvme-fc.h>
+#include "../host/fc.h"
 
 
 /* *************************** Data Structures/Defines ****************** */
@@ -1258,102 +1259,6 @@ EXPORT_SYMBOL_GPL(nvmet_fc_unregister_targetport);
 
 
 static void
-nvmet_fc_format_rsp_hdr(void *buf, u8 ls_cmd, __be32 desc_len, u8 rqst_ls_cmd)
-{
-	struct fcnvme_ls_acc_hdr *acc = buf;
-
-	acc->w0.ls_cmd = ls_cmd;
-	acc->desc_list_len = desc_len;
-	acc->rqst.desc_tag = cpu_to_be32(FCNVME_LSDESC_RQST);
-	acc->rqst.desc_len =
-			fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rqst));
-	acc->rqst.w0.ls_cmd = rqst_ls_cmd;
-}
-
-static int
-nvmet_fc_format_rjt(void *buf, u16 buflen, u8 ls_cmd,
-			u8 reason, u8 explanation, u8 vendor)
-{
-	struct fcnvme_ls_rjt *rjt = buf;
-
-	nvmet_fc_format_rsp_hdr(buf, FCNVME_LSDESC_RQST,
-			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_rjt)),
-			ls_cmd);
-	rjt->rjt.desc_tag = cpu_to_be32(FCNVME_LSDESC_RJT);
-	rjt->rjt.desc_len = fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rjt));
-	rjt->rjt.reason_code = reason;
-	rjt->rjt.reason_explanation = explanation;
-	rjt->rjt.vendor = vendor;
-
-	return sizeof(struct fcnvme_ls_rjt);
-}
-
-/* Validation Error indexes into the string table below */
-enum {
-	VERR_NO_ERROR		= 0,
-	VERR_CR_ASSOC_LEN	= 1,
-	VERR_CR_ASSOC_RQST_LEN	= 2,
-	VERR_CR_ASSOC_CMD	= 3,
-	VERR_CR_ASSOC_CMD_LEN	= 4,
-	VERR_ERSP_RATIO		= 5,
-	VERR_ASSOC_ALLOC_FAIL	= 6,
-	VERR_QUEUE_ALLOC_FAIL	= 7,
-	VERR_CR_CONN_LEN	= 8,
-	VERR_CR_CONN_RQST_LEN	= 9,
-	VERR_ASSOC_ID		= 10,
-	VERR_ASSOC_ID_LEN	= 11,
-	VERR_NO_ASSOC		= 12,
-	VERR_CONN_ID		= 13,
-	VERR_CONN_ID_LEN	= 14,
-	VERR_NO_CONN		= 15,
-	VERR_CR_CONN_CMD	= 16,
-	VERR_CR_CONN_CMD_LEN	= 17,
-	VERR_DISCONN_LEN	= 18,
-	VERR_DISCONN_RQST_LEN	= 19,
-	VERR_DISCONN_CMD	= 20,
-	VERR_DISCONN_CMD_LEN	= 21,
-	VERR_DISCONN_SCOPE	= 22,
-	VERR_RS_LEN		= 23,
-	VERR_RS_RQST_LEN	= 24,
-	VERR_RS_CMD		= 25,
-	VERR_RS_CMD_LEN		= 26,
-	VERR_RS_RCTL		= 27,
-	VERR_RS_RO		= 28,
-};
-
-static char *validation_errors[] = {
-	"OK",
-	"Bad CR_ASSOC Length",
-	"Bad CR_ASSOC Rqst Length",
-	"Not CR_ASSOC Cmd",
-	"Bad CR_ASSOC Cmd Length",
-	"Bad Ersp Ratio",
-	"Association Allocation Failed",
-	"Queue Allocation Failed",
-	"Bad CR_CONN Length",
-	"Bad CR_CONN Rqst Length",
-	"Not Association ID",
-	"Bad Association ID Length",
-	"No Association",
-	"Not Connection ID",
-	"Bad Connection ID Length",
-	"No Connection",
-	"Not CR_CONN Cmd",
-	"Bad CR_CONN Cmd Length",
-	"Bad DISCONN Length",
-	"Bad DISCONN Rqst Length",
-	"Not DISCONN Cmd",
-	"Bad DISCONN Cmd Length",
-	"Bad Disconnect Scope",
-	"Bad RS Length",
-	"Bad RS Rqst Length",
-	"Not RS Cmd",
-	"Bad RS Cmd Length",
-	"Bad RS R_CTL",
-	"Bad RS Relative Offset",
-};
-
-static void
 nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_ls_iod *iod)
 {
@@ -1407,7 +1312,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 		dev_err(tgtport->dev,
 			"Create Association LS failed: %s\n",
 			validation_errors[ret]);
-		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
+		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
 				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
 				FCNVME_RJT_RC_LOGIC,
 				FCNVME_RJT_EXP_NONE, 0);
@@ -1422,7 +1327,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 
 	iod->lsrsp->rsplen = sizeof(*acc);
 
-	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
+	nvme_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
 			fcnvme_lsdesc_len(
 				sizeof(struct fcnvme_ls_cr_assoc_acc)),
 			FCNVME_LS_CREATE_ASSOCIATION);
@@ -1498,7 +1403,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
 		dev_err(tgtport->dev,
 			"Create Connection LS failed: %s\n",
 			validation_errors[ret]);
-		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
+		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
 				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
 				(ret == VERR_NO_ASSOC) ?
 					FCNVME_RJT_RC_INV_ASSOC :
@@ -1515,7 +1420,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
 
 	iod->lsrsp->rsplen = sizeof(*acc);
 
-	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
+	nvme_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
 			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_cr_conn_acc)),
 			FCNVME_LS_CREATE_CONNECTION);
 	acc->connectid.desc_tag = cpu_to_be32(FCNVME_LSDESC_CONN_ID);
@@ -1578,13 +1483,11 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 		dev_err(tgtport->dev,
 			"Disconnect LS failed: %s\n",
 			validation_errors[ret]);
-		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
+		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
 				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
 				(ret == VERR_NO_ASSOC) ?
 					FCNVME_RJT_RC_INV_ASSOC :
-					(ret == VERR_NO_CONN) ?
-						FCNVME_RJT_RC_INV_CONN :
-						FCNVME_RJT_RC_LOGIC,
+					FCNVME_RJT_RC_LOGIC,
 				FCNVME_RJT_EXP_NONE, 0);
 		return;
 	}
@@ -1593,7 +1496,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 
 	iod->lsrsp->rsplen = sizeof(*acc);
 
-	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
+	nvme_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
 			fcnvme_lsdesc_len(
 				sizeof(struct fcnvme_ls_disconnect_assoc_acc)),
 			FCNVME_LS_DISCONNECT_ASSOC);
@@ -1676,7 +1579,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 		nvmet_fc_ls_disconnect(tgtport, iod);
 		break;
 	default:
-		iod->lsrsp->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
+		iod->lsrsp->rsplen = nvme_fc_format_rjt(iod->rspbuf,
 				NVME_FC_MAX_LS_BUFFER_SIZE, w0->ls_cmd,
 				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
 	}
-- 
2.13.7



* [PATCH 08/29] nvmet-fc: Better size LS buffers
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (6 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 21:04   ` Sagi Grimberg
  2020-03-06  8:36   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data James Smart
                   ` (22 subsequent siblings)
  30 siblings, 2 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Current code uses NVME_FC_MAX_LS_BUFFER_SIZE (2KB) when allocating
buffers for LS requests and responses. This is considerable overkill
for what is actually defined.

Rework the code to use unions for all possible requests and responses,
and size the buffers based on the unions. Remove NVME_FC_MAX_LS_BUFFER_SIZE.
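
A minimal user-space sketch of the sizing approach (the struct names and
payload sizes below are hypothetical placeholders, not the real fcnvme
layouts):

#include <stdio.h>
#include <stdlib.h>

/* hypothetical stand-ins for the individual LS request layouts */
struct cr_assoc_rqst { char payload[312]; };
struct cr_conn_rqst  { char payload[104]; };
struct disc_rqst     { char payload[56];  };

/* one union spans every request the code can receive ... */
union ls_requests {
        struct cr_assoc_rqst rq_cr_assoc;
        struct cr_conn_rqst  rq_cr_conn;
        struct disc_rqst     rq_dis_assoc;
};

int main(void)
{
        /* ... so the buffer is sized by the largest member rather than
         * by an arbitrary "max LS buffer" constant.
         */
        union ls_requests *buf = malloc(sizeof(*buf));

        if (!buf)
                return 1;
        printf("request buffer is %zu bytes\n", sizeof(*buf));
        free(buf);
        return 0;
}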

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/host/fc.h   | 15 ++++++++++++++
 drivers/nvme/target/fc.c | 53 +++++++++++++++++++++---------------------------
 2 files changed, 38 insertions(+), 30 deletions(-)

diff --git a/drivers/nvme/host/fc.h b/drivers/nvme/host/fc.h
index d2861cdd58ee..08fa88381d45 100644
--- a/drivers/nvme/host/fc.h
+++ b/drivers/nvme/host/fc.h
@@ -16,6 +16,21 @@
  * ******************  FC-NVME LS HANDLING ******************
  */
 
+union nvmefc_ls_requests {
+	struct fcnvme_ls_cr_assoc_rqst		rq_cr_assoc;
+	struct fcnvme_ls_cr_conn_rqst		rq_cr_conn;
+	struct fcnvme_ls_disconnect_assoc_rqst	rq_dis_assoc;
+	struct fcnvme_ls_disconnect_conn_rqst	rq_dis_conn;
+} __aligned(128);	/* alignment for other things alloc'd with */
+
+union nvmefc_ls_responses {
+	struct fcnvme_ls_rjt			rsp_rjt;
+	struct fcnvme_ls_cr_assoc_acc		rsp_cr_assoc;
+	struct fcnvme_ls_cr_conn_acc		rsp_cr_conn;
+	struct fcnvme_ls_disconnect_assoc_acc	rsp_dis_assoc;
+	struct fcnvme_ls_disconnect_conn_acc	rsp_dis_conn;
+} __aligned(128);	/* alignment for other things alloc'd with */
+
 static inline void
 nvme_fc_format_rsp_hdr(void *buf, u8 ls_cmd, __be32 desc_len, u8 rqst_ls_cmd)
 {
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 1f3118a3b0a3..66de6bd8f4fd 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -22,9 +22,6 @@
 
 #define NVMET_LS_CTX_COUNT		256
 
-/* for this implementation, assume small single frame rqst/rsp */
-#define NVME_FC_MAX_LS_BUFFER_SIZE		2048
-
 struct nvmet_fc_tgtport;
 struct nvmet_fc_tgt_assoc;
 
@@ -37,8 +34,8 @@ struct nvmet_fc_ls_iod {
 	struct nvmet_fc_tgtport		*tgtport;
 	struct nvmet_fc_tgt_assoc	*assoc;
 
-	u8				*rqstbuf;
-	u8				*rspbuf;
+	union nvmefc_ls_requests	*rqstbuf;
+	union nvmefc_ls_responses	*rspbuf;
 	u16				rqstdatalen;
 	dma_addr_t			rspdma;
 
@@ -340,15 +337,16 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 		iod->tgtport = tgtport;
 		list_add_tail(&iod->ls_list, &tgtport->ls_list);
 
-		iod->rqstbuf = kcalloc(2, NVME_FC_MAX_LS_BUFFER_SIZE,
-			GFP_KERNEL);
+		iod->rqstbuf = kzalloc(sizeof(union nvmefc_ls_requests) +
+				       sizeof(union nvmefc_ls_responses),
+				       GFP_KERNEL);
 		if (!iod->rqstbuf)
 			goto out_fail;
 
-		iod->rspbuf = iod->rqstbuf + NVME_FC_MAX_LS_BUFFER_SIZE;
+		iod->rspbuf = (union nvmefc_ls_responses *)&iod->rqstbuf[1];
 
 		iod->rspdma = fc_dma_map_single(tgtport->dev, iod->rspbuf,
-						NVME_FC_MAX_LS_BUFFER_SIZE,
+						sizeof(*iod->rspbuf),
 						DMA_TO_DEVICE);
 		if (fc_dma_mapping_error(tgtport->dev, iod->rspdma))
 			goto out_fail;
@@ -361,7 +359,7 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 	list_del(&iod->ls_list);
 	for (iod--, i--; i >= 0; iod--, i--) {
 		fc_dma_unmap_single(tgtport->dev, iod->rspdma,
-				NVME_FC_MAX_LS_BUFFER_SIZE, DMA_TO_DEVICE);
+				sizeof(*iod->rspbuf), DMA_TO_DEVICE);
 		kfree(iod->rqstbuf);
 		list_del(&iod->ls_list);
 	}
@@ -379,7 +377,7 @@ nvmet_fc_free_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 
 	for (i = 0; i < NVMET_LS_CTX_COUNT; iod++, i++) {
 		fc_dma_unmap_single(tgtport->dev,
-				iod->rspdma, NVME_FC_MAX_LS_BUFFER_SIZE,
+				iod->rspdma, sizeof(*iod->rspbuf),
 				DMA_TO_DEVICE);
 		kfree(iod->rqstbuf);
 		list_del(&iod->ls_list);
@@ -1262,10 +1260,8 @@ static void
 nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_ls_iod *iod)
 {
-	struct fcnvme_ls_cr_assoc_rqst *rqst =
-				(struct fcnvme_ls_cr_assoc_rqst *)iod->rqstbuf;
-	struct fcnvme_ls_cr_assoc_acc *acc =
-				(struct fcnvme_ls_cr_assoc_acc *)iod->rspbuf;
+	struct fcnvme_ls_cr_assoc_rqst *rqst = &iod->rqstbuf->rq_cr_assoc;
+	struct fcnvme_ls_cr_assoc_acc *acc = &iod->rspbuf->rsp_cr_assoc;
 	struct nvmet_fc_tgt_queue *queue;
 	int ret = 0;
 
@@ -1313,7 +1309,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 			"Create Association LS failed: %s\n",
 			validation_errors[ret]);
 		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
-				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
+				sizeof(*acc), rqst->w0.ls_cmd,
 				FCNVME_RJT_RC_LOGIC,
 				FCNVME_RJT_EXP_NONE, 0);
 		return;
@@ -1348,10 +1344,8 @@ static void
 nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_ls_iod *iod)
 {
-	struct fcnvme_ls_cr_conn_rqst *rqst =
-				(struct fcnvme_ls_cr_conn_rqst *)iod->rqstbuf;
-	struct fcnvme_ls_cr_conn_acc *acc =
-				(struct fcnvme_ls_cr_conn_acc *)iod->rspbuf;
+	struct fcnvme_ls_cr_conn_rqst *rqst = &iod->rqstbuf->rq_cr_conn;
+	struct fcnvme_ls_cr_conn_acc *acc = &iod->rspbuf->rsp_cr_conn;
 	struct nvmet_fc_tgt_queue *queue;
 	int ret = 0;
 
@@ -1404,7 +1398,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
 			"Create Connection LS failed: %s\n",
 			validation_errors[ret]);
 		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
-				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
+				sizeof(*acc), rqst->w0.ls_cmd,
 				(ret == VERR_NO_ASSOC) ?
 					FCNVME_RJT_RC_INV_ASSOC :
 					FCNVME_RJT_RC_LOGIC,
@@ -1437,9 +1431,9 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_ls_iod *iod)
 {
 	struct fcnvme_ls_disconnect_assoc_rqst *rqst =
-			(struct fcnvme_ls_disconnect_assoc_rqst *)iod->rqstbuf;
+						&iod->rqstbuf->rq_dis_assoc;
 	struct fcnvme_ls_disconnect_assoc_acc *acc =
-			(struct fcnvme_ls_disconnect_assoc_acc *)iod->rspbuf;
+						&iod->rspbuf->rsp_dis_assoc;
 	struct nvmet_fc_tgt_assoc *assoc;
 	int ret = 0;
 
@@ -1484,7 +1478,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 			"Disconnect LS failed: %s\n",
 			validation_errors[ret]);
 		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
-				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
+				sizeof(*acc), rqst->w0.ls_cmd,
 				(ret == VERR_NO_ASSOC) ?
 					FCNVME_RJT_RC_INV_ASSOC :
 					FCNVME_RJT_RC_LOGIC,
@@ -1522,7 +1516,7 @@ nvmet_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
 	struct nvmet_fc_tgtport *tgtport = iod->tgtport;
 
 	fc_dma_sync_single_for_cpu(tgtport->dev, iod->rspdma,
-				NVME_FC_MAX_LS_BUFFER_SIZE, DMA_TO_DEVICE);
+				sizeof(*iod->rspbuf), DMA_TO_DEVICE);
 	nvmet_fc_free_ls_iod(tgtport, iod);
 	nvmet_fc_tgtport_put(tgtport);
 }
@@ -1534,7 +1528,7 @@ nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 	int ret;
 
 	fc_dma_sync_single_for_device(tgtport->dev, iod->rspdma,
-				  NVME_FC_MAX_LS_BUFFER_SIZE, DMA_TO_DEVICE);
+				  sizeof(*iod->rspbuf), DMA_TO_DEVICE);
 
 	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsrsp);
 	if (ret)
@@ -1548,8 +1542,7 @@ static void
 nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_ls_iod *iod)
 {
-	struct fcnvme_ls_rqst_w0 *w0 =
-			(struct fcnvme_ls_rqst_w0 *)iod->rqstbuf;
+	struct fcnvme_ls_rqst_w0 *w0 = &iod->rqstbuf->rq_cr_assoc.w0;
 
 	iod->lsrsp->nvme_fc_private = iod;
 	iod->lsrsp->rspbuf = iod->rspbuf;
@@ -1580,7 +1573,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 		break;
 	default:
 		iod->lsrsp->rsplen = nvme_fc_format_rjt(iod->rspbuf,
-				NVME_FC_MAX_LS_BUFFER_SIZE, w0->ls_cmd,
+				sizeof(*iod->rspbuf), w0->ls_cmd,
 				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
 	}
 
@@ -1627,7 +1620,7 @@ nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
 	struct nvmet_fc_tgtport *tgtport = targetport_to_tgtport(target_port);
 	struct nvmet_fc_ls_iod *iod;
 
-	if (lsreqbuf_len > NVME_FC_MAX_LS_BUFFER_SIZE)
+	if (lsreqbuf_len > sizeof(union nvmefc_ls_requests))
 		return -E2BIG;
 
 	if (!nvmet_fc_tgtport_get(tgtport))
-- 
2.13.7



* [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (7 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 08/29] nvmet-fc: Better size LS buffers James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 21:05   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation James Smart
                   ` (21 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Ensure that when allocations are done and the lldd options indicate
no private data is needed, the private pointers are set to NULL (this
catches a driver error where the private data size was not set).

Slightly reorganize the allocations so that the private data follows
the allocations for the LS request/response buffers. This ensures
better alignment for the buffers as well as for the private pointer.
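
A small user-space sketch of the allocation layout (types and sizes are
hypothetical): the op structure is followed by the request and response
buffers, the private area (if any) comes last, and the private pointer
stays NULL when no private space was requested.

#include <stdio.h>
#include <stdlib.h>

struct ls_op { int state; };
struct rqst  { char bytes[64]; };
struct acc   { char bytes[64]; };

/* Single allocation laid out as: op | rqst | acc | optional private.
 * The fixed-size buffers come first, so they keep their natural
 * alignment; priv is NULL when priv_sz is zero.
 */
static struct ls_op *alloc_ls_op(size_t priv_sz, struct rqst **rq,
                                 struct acc **ac, void **priv)
{
        struct ls_op *op;

        op = calloc(1, sizeof(*op) + sizeof(**rq) + sizeof(**ac) + priv_sz);
        if (!op)
                return NULL;
        *rq = (struct rqst *)&op[1];
        *ac = (struct acc *)(*rq + 1);
        *priv = priv_sz ? (void *)(*ac + 1) : NULL;
        return op;
}

int main(void)
{
        struct rqst *rq;
        struct acc *ac;
        void *priv;
        struct ls_op *op = alloc_ls_op(0, &rq, &ac, &priv);

        if (!op)
                return 1;
        printf("private pointer is %s\n", priv ? "set" : "NULL");
        free(op);
        return 0;
}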

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/host/fc.c   | 81 ++++++++++++++++++++++++++++++------------------
 drivers/nvme/target/fc.c |  5 ++-
 2 files changed, 54 insertions(+), 32 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 2e5163600f63..1a58e3dc0399 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -396,7 +396,10 @@ nvme_fc_register_localport(struct nvme_fc_port_info *pinfo,
 	newrec->ops = template;
 	newrec->dev = dev;
 	ida_init(&newrec->endp_cnt);
-	newrec->localport.private = &newrec[1];
+	if (template->local_priv_sz)
+		newrec->localport.private = &newrec[1];
+	else
+		newrec->localport.private = NULL;
 	newrec->localport.node_name = pinfo->node_name;
 	newrec->localport.port_name = pinfo->port_name;
 	newrec->localport.port_role = pinfo->port_role;
@@ -705,7 +708,10 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 	newrec->remoteport.localport = &lport->localport;
 	newrec->dev = lport->dev;
 	newrec->lport = lport;
-	newrec->remoteport.private = &newrec[1];
+	if (lport->ops->remote_priv_sz)
+		newrec->remoteport.private = &newrec[1];
+	else
+		newrec->remoteport.private = NULL;
 	newrec->remoteport.port_role = pinfo->port_role;
 	newrec->remoteport.node_name = pinfo->node_name;
 	newrec->remoteport.port_name = pinfo->port_name;
@@ -1153,18 +1159,23 @@ nvme_fc_connect_admin_queue(struct nvme_fc_ctrl *ctrl,
 	int ret, fcret = 0;
 
 	lsop = kzalloc((sizeof(*lsop) +
-			 ctrl->lport->ops->lsrqst_priv_sz +
-			 sizeof(*assoc_rqst) + sizeof(*assoc_acc)), GFP_KERNEL);
+			 sizeof(*assoc_rqst) + sizeof(*assoc_acc) +
+			 ctrl->lport->ops->lsrqst_priv_sz), GFP_KERNEL);
 	if (!lsop) {
+		dev_info(ctrl->ctrl.device,
+			"NVME-FC{%d}: send Create Association failed: ENOMEM\n",
+			ctrl->cnum);
 		ret = -ENOMEM;
 		goto out_no_memory;
 	}
-	lsreq = &lsop->ls_req;
 
-	lsreq->private = (void *)&lsop[1];
-	assoc_rqst = (struct fcnvme_ls_cr_assoc_rqst *)
-			(lsreq->private + ctrl->lport->ops->lsrqst_priv_sz);
+	assoc_rqst = (struct fcnvme_ls_cr_assoc_rqst *)&lsop[1];
 	assoc_acc = (struct fcnvme_ls_cr_assoc_acc *)&assoc_rqst[1];
+	lsreq = &lsop->ls_req;
+	if (ctrl->lport->ops->lsrqst_priv_sz)
+		lsreq->private = &assoc_acc[1];
+	else
+		lsreq->private = NULL;
 
 	assoc_rqst->w0.ls_cmd = FCNVME_LS_CREATE_ASSOCIATION;
 	assoc_rqst->desc_list_len =
@@ -1262,18 +1273,23 @@ nvme_fc_connect_queue(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
 	int ret, fcret = 0;
 
 	lsop = kzalloc((sizeof(*lsop) +
-			 ctrl->lport->ops->lsrqst_priv_sz +
-			 sizeof(*conn_rqst) + sizeof(*conn_acc)), GFP_KERNEL);
+			 sizeof(*conn_rqst) + sizeof(*conn_acc) +
+			 ctrl->lport->ops->lsrqst_priv_sz), GFP_KERNEL);
 	if (!lsop) {
+		dev_info(ctrl->ctrl.device,
+			"NVME-FC{%d}: send Create Connection failed: ENOMEM\n",
+			ctrl->cnum);
 		ret = -ENOMEM;
 		goto out_no_memory;
 	}
-	lsreq = &lsop->ls_req;
 
-	lsreq->private = (void *)&lsop[1];
-	conn_rqst = (struct fcnvme_ls_cr_conn_rqst *)
-			(lsreq->private + ctrl->lport->ops->lsrqst_priv_sz);
+	conn_rqst = (struct fcnvme_ls_cr_conn_rqst *)&lsop[1];
 	conn_acc = (struct fcnvme_ls_cr_conn_acc *)&conn_rqst[1];
+	lsreq = &lsop->ls_req;
+	if (ctrl->lport->ops->lsrqst_priv_sz)
+		lsreq->private = (void *)&conn_acc[1];
+	else
+		lsreq->private = NULL;
 
 	conn_rqst->w0.ls_cmd = FCNVME_LS_CREATE_CONNECTION;
 	conn_rqst->desc_list_len = cpu_to_be32(
@@ -1387,19 +1403,23 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
 	int ret;
 
 	lsop = kzalloc((sizeof(*lsop) +
-			 ctrl->lport->ops->lsrqst_priv_sz +
-			 sizeof(*discon_rqst) + sizeof(*discon_acc)),
-			GFP_KERNEL);
-	if (!lsop)
-		/* couldn't sent it... too bad */
+			sizeof(*discon_rqst) + sizeof(*discon_acc) +
+			ctrl->lport->ops->lsrqst_priv_sz), GFP_KERNEL);
+	if (!lsop) {
+		dev_info(ctrl->ctrl.device,
+			"NVME-FC{%d}: send Disconnect Association "
+			"failed: ENOMEM\n",
+			ctrl->cnum);
 		return;
+	}
 
-	lsreq = &lsop->ls_req;
-
-	lsreq->private = (void *)&lsop[1];
-	discon_rqst = (struct fcnvme_ls_disconnect_assoc_rqst *)
-			(lsreq->private + ctrl->lport->ops->lsrqst_priv_sz);
+	discon_rqst = (struct fcnvme_ls_disconnect_assoc_rqst *)&lsop[1];
 	discon_acc = (struct fcnvme_ls_disconnect_assoc_acc *)&discon_rqst[1];
+	lsreq = &lsop->ls_req;
+	if (ctrl->lport->ops->lsrqst_priv_sz)
+		lsreq->private = (void *)&discon_acc[1];
+	else
+		lsreq->private = NULL;
 
 	discon_rqst->w0.ls_cmd = FCNVME_LS_DISCONNECT_ASSOC;
 	discon_rqst->desc_list_len = cpu_to_be32(
@@ -1785,15 +1805,17 @@ nvme_fc_init_aen_ops(struct nvme_fc_ctrl *ctrl)
 	struct nvme_fc_fcp_op *aen_op;
 	struct nvme_fc_cmd_iu *cmdiu;
 	struct nvme_command *sqe;
-	void *private;
+	void *private = NULL;
 	int i, ret;
 
 	aen_op = ctrl->aen_ops;
 	for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) {
-		private = kzalloc(ctrl->lport->ops->fcprqst_priv_sz,
+		if (ctrl->lport->ops->fcprqst_priv_sz) {
+			private = kzalloc(ctrl->lport->ops->fcprqst_priv_sz,
 						GFP_KERNEL);
-		if (!private)
-			return -ENOMEM;
+			if (!private)
+				return -ENOMEM;
+		}
 
 		cmdiu = &aen_op->cmd_iu;
 		sqe = &cmdiu->sqe;
@@ -1824,9 +1846,6 @@ nvme_fc_term_aen_ops(struct nvme_fc_ctrl *ctrl)
 
 	aen_op = ctrl->aen_ops;
 	for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) {
-		if (!aen_op->fcp_req.private)
-			continue;
-
 		__nvme_fc_exit_request(ctrl, aen_op);
 
 		kfree(aen_op->fcp_req.private);
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 66de6bd8f4fd..66a60a218994 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1047,7 +1047,10 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 
 	newrec->fc_target_port.node_name = pinfo->node_name;
 	newrec->fc_target_port.port_name = pinfo->port_name;
-	newrec->fc_target_port.private = &newrec[1];
+	if (template->target_priv_sz)
+		newrec->fc_target_port.private = &newrec[1];
+	else
+		newrec->fc_target_port.private = NULL;
 	newrec->fc_target_port.port_id = pinfo->port_id;
 	newrec->fc_target_port.port_num = idx;
 	INIT_LIST_HEAD(&newrec->tgt_list);
-- 
2.13.7



* [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (8 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  8:44   ` Hannes Reinecke
  2020-03-26 16:41   ` Himanshu Madhani
  2020-02-05 18:37 ` [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic James Smart
                   ` (20 subsequent siblings)
  30 siblings, 2 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Given that both host and target now generate and receive LS's, create
a single table definition for LS names. Each transport half will have
a local version of the table.

As the Disconnect Association LS is issued by both sides, and received
by both sides, create common routines to format the LS and to validate
the LS.

Convert the host side transport to use the new common Disconnect
Association LS formatting routine.

Convert the target side transport to use the new common Disconnect
Association LS validation routine.
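
A trimmed user-space sketch of the shared names table and the
bounds-checked lookup the transports perform before logging (the table
values mirror the one added in this patch; the helper name is made up):

#include <stdio.h>

#define LAST_LS_CMD_VALUE 6     /* FCNVME_LS_DISCONNECT_CONN in the real code */

static const char *ls_names[] = {
        "Reserved (0)",
        "RJT (1)",
        "ACC (2)",
        "Create Association",
        "Create Connection",
        "Disconnect Association",
        "Disconnect Connection",
};

/* return "" for out-of-range commands so log messages stay well formed */
static const char *ls_name(unsigned int ls_cmd)
{
        return ls_cmd <= LAST_LS_CMD_VALUE ? ls_names[ls_cmd] : "";
}

int main(void)
{
        printf("cmd 5  -> %s\n", ls_name(5));    /* Disconnect Association */
        printf("cmd 42 -> [%s]\n", ls_name(42)); /* out of range */
        return 0;
}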

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/host/fc.c   | 25 ++-------------
 drivers/nvme/host/fc.h   | 79 ++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/fc.c | 28 ++---------------
 3 files changed, 83 insertions(+), 49 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 1a58e3dc0399..8fed69504c38 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -1421,29 +1421,8 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
 	else
 		lsreq->private = NULL;
 
-	discon_rqst->w0.ls_cmd = FCNVME_LS_DISCONNECT_ASSOC;
-	discon_rqst->desc_list_len = cpu_to_be32(
-				sizeof(struct fcnvme_lsdesc_assoc_id) +
-				sizeof(struct fcnvme_lsdesc_disconn_cmd));
-
-	discon_rqst->associd.desc_tag = cpu_to_be32(FCNVME_LSDESC_ASSOC_ID);
-	discon_rqst->associd.desc_len =
-			fcnvme_lsdesc_len(
-				sizeof(struct fcnvme_lsdesc_assoc_id));
-
-	discon_rqst->associd.association_id = cpu_to_be64(ctrl->association_id);
-
-	discon_rqst->discon_cmd.desc_tag = cpu_to_be32(
-						FCNVME_LSDESC_DISCONN_CMD);
-	discon_rqst->discon_cmd.desc_len =
-			fcnvme_lsdesc_len(
-				sizeof(struct fcnvme_lsdesc_disconn_cmd));
-
-	lsreq->rqstaddr = discon_rqst;
-	lsreq->rqstlen = sizeof(*discon_rqst);
-	lsreq->rspaddr = discon_acc;
-	lsreq->rsplen = sizeof(*discon_acc);
-	lsreq->timeout = NVME_FC_LS_TIMEOUT_SEC;
+	nvmefc_fmt_lsreq_discon_assoc(lsreq, discon_rqst, discon_acc,
+				ctrl->association_id);
 
 	ret = nvme_fc_send_ls_req_async(ctrl->rport, lsop,
 				nvme_fc_disconnect_assoc_done);
diff --git a/drivers/nvme/host/fc.h b/drivers/nvme/host/fc.h
index 08fa88381d45..05ce566f2caf 100644
--- a/drivers/nvme/host/fc.h
+++ b/drivers/nvme/host/fc.h
@@ -17,6 +17,7 @@
  */
 
 union nvmefc_ls_requests {
+	struct fcnvme_ls_rqst_w0		w0;
 	struct fcnvme_ls_cr_assoc_rqst		rq_cr_assoc;
 	struct fcnvme_ls_cr_conn_rqst		rq_cr_conn;
 	struct fcnvme_ls_disconnect_assoc_rqst	rq_dis_assoc;
@@ -145,4 +146,82 @@ static char *validation_errors[] = {
 	"Bad Disconnect ACC Length",
 };
 
+#define NVME_FC_LAST_LS_CMD_VALUE	FCNVME_LS_DISCONNECT_CONN
+
+static char *nvmefc_ls_names[] = {
+	"Reserved (0)",
+	"RJT (1)",
+	"ACC (2)",
+	"Create Association",
+	"Create Connection",
+	"Disconnect Association",
+	"Disconnect Connection",
+};
+
+static inline void
+nvmefc_fmt_lsreq_discon_assoc(struct nvmefc_ls_req *lsreq,
+	struct fcnvme_ls_disconnect_assoc_rqst *discon_rqst,
+	struct fcnvme_ls_disconnect_assoc_acc *discon_acc,
+	u64 association_id)
+{
+	lsreq->rqstaddr = discon_rqst;
+	lsreq->rqstlen = sizeof(*discon_rqst);
+	lsreq->rspaddr = discon_acc;
+	lsreq->rsplen = sizeof(*discon_acc);
+	lsreq->timeout = NVME_FC_LS_TIMEOUT_SEC;
+
+	discon_rqst->w0.ls_cmd = FCNVME_LS_DISCONNECT_ASSOC;
+	discon_rqst->desc_list_len = cpu_to_be32(
+				sizeof(struct fcnvme_lsdesc_assoc_id) +
+				sizeof(struct fcnvme_lsdesc_disconn_cmd));
+
+	discon_rqst->associd.desc_tag = cpu_to_be32(FCNVME_LSDESC_ASSOC_ID);
+	discon_rqst->associd.desc_len =
+			fcnvme_lsdesc_len(
+				sizeof(struct fcnvme_lsdesc_assoc_id));
+
+	discon_rqst->associd.association_id = cpu_to_be64(association_id);
+
+	discon_rqst->discon_cmd.desc_tag = cpu_to_be32(
+						FCNVME_LSDESC_DISCONN_CMD);
+	discon_rqst->discon_cmd.desc_len =
+			fcnvme_lsdesc_len(
+				sizeof(struct fcnvme_lsdesc_disconn_cmd));
+}
+
+static inline int
+nvmefc_vldt_lsreq_discon_assoc(u32 rqstlen,
+	struct fcnvme_ls_disconnect_assoc_rqst *rqst)
+{
+	int ret = 0;
+
+	if (rqstlen < sizeof(struct fcnvme_ls_disconnect_assoc_rqst))
+		ret = VERR_DISCONN_LEN;
+	else if (rqst->desc_list_len !=
+			fcnvme_lsdesc_len(
+				sizeof(struct fcnvme_ls_disconnect_assoc_rqst)))
+		ret = VERR_DISCONN_RQST_LEN;
+	else if (rqst->associd.desc_tag != cpu_to_be32(FCNVME_LSDESC_ASSOC_ID))
+		ret = VERR_ASSOC_ID;
+	else if (rqst->associd.desc_len !=
+			fcnvme_lsdesc_len(
+				sizeof(struct fcnvme_lsdesc_assoc_id)))
+		ret = VERR_ASSOC_ID_LEN;
+	else if (rqst->discon_cmd.desc_tag !=
+			cpu_to_be32(FCNVME_LSDESC_DISCONN_CMD))
+		ret = VERR_DISCONN_CMD;
+	else if (rqst->discon_cmd.desc_len !=
+			fcnvme_lsdesc_len(
+				sizeof(struct fcnvme_lsdesc_disconn_cmd)))
+		ret = VERR_DISCONN_CMD_LEN;
+	/*
+	 * As the standard changed on the LS, check if old format and scope
+	 * something other than Association (e.g. 0).
+	 */
+	else if (rqst->discon_cmd.rsvd8[0])
+		ret = VERR_DISCONN_SCOPE;
+
+	return ret;
+}
+
 #endif /* _NVME_FC_TRANSPORT_H */
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 66a60a218994..5739df7edc59 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1442,32 +1442,8 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 
 	memset(acc, 0, sizeof(*acc));
 
-	if (iod->rqstdatalen < sizeof(struct fcnvme_ls_disconnect_assoc_rqst))
-		ret = VERR_DISCONN_LEN;
-	else if (rqst->desc_list_len !=
-			fcnvme_lsdesc_len(
-				sizeof(struct fcnvme_ls_disconnect_assoc_rqst)))
-		ret = VERR_DISCONN_RQST_LEN;
-	else if (rqst->associd.desc_tag != cpu_to_be32(FCNVME_LSDESC_ASSOC_ID))
-		ret = VERR_ASSOC_ID;
-	else if (rqst->associd.desc_len !=
-			fcnvme_lsdesc_len(
-				sizeof(struct fcnvme_lsdesc_assoc_id)))
-		ret = VERR_ASSOC_ID_LEN;
-	else if (rqst->discon_cmd.desc_tag !=
-			cpu_to_be32(FCNVME_LSDESC_DISCONN_CMD))
-		ret = VERR_DISCONN_CMD;
-	else if (rqst->discon_cmd.desc_len !=
-			fcnvme_lsdesc_len(
-				sizeof(struct fcnvme_lsdesc_disconn_cmd)))
-		ret = VERR_DISCONN_CMD_LEN;
-	/*
-	 * As the standard changed on the LS, check if old format and scope
-	 * something other than Association (e.g. 0).
-	 */
-	else if (rqst->discon_cmd.rsvd8[0])
-		ret = VERR_DISCONN_SCOPE;
-	else {
+	ret = nvmefc_vldt_lsreq_discon_assoc(iod->rqstdatalen, rqst);
+	if (!ret) {
 		/* match an active association */
 		assoc = nvmet_fc_find_target_assoc(tgtport,
 				be64_to_cpu(rqst->associd.association_id));
-- 
2.13.7



* [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (9 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-02-28 21:08   ` Sagi Grimberg
                     ` (2 more replies)
  2020-02-05 18:37 ` [PATCH 12/29] nvme-fc: Add Disconnect Association Rcv support James Smart
                   ` (19 subsequent siblings)
  30 siblings, 3 replies; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Convert the assoc_active flag to an atomic to remove any small
race conditions on transitioning to active and back.
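
A minimal user-space sketch of the pattern, using C11 atomics in place
of the kernel's atomic_cmpxchg() (the state names mirror the enum added
in this patch):

#include <stdatomic.h>
#include <stdio.h>

enum { ASSOC_INACTIVE = 0, ASSOC_ACTIVE = 1 };

static atomic_int assoc_active = ASSOC_INACTIVE;

/* Exactly one caller can win the INACTIVE -> ACTIVE transition; a plain
 * bool "test then set" leaves a window in which two callers both see
 * inactive and both proceed.
 */
static int activate(void)
{
        int prior = ASSOC_INACTIVE;

        if (!atomic_compare_exchange_strong(&assoc_active, &prior,
                                            ASSOC_ACTIVE))
                return -1;      /* somebody else already activated */
        return 0;
}

int main(void)
{
        printf("first attempt:  %d\n", activate());   /* 0  */
        printf("second attempt: %d\n", activate());   /* -1 */
        return 0;
}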

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/host/fc.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 8fed69504c38..40e1141c76db 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -131,6 +131,11 @@ enum nvme_fcctrl_flags {
 	FCCTRL_TERMIO		= (1 << 0),
 };
 
+enum {
+	ASSOC_INACTIVE		= 0,
+	ASSOC_ACTIVE		= 1,
+};
+
 struct nvme_fc_ctrl {
 	spinlock_t		lock;
 	struct nvme_fc_queue	*queues;
@@ -140,7 +145,7 @@ struct nvme_fc_ctrl {
 	u32			cnum;
 
 	bool			ioq_live;
-	bool			assoc_active;
+	atomic_t		assoc_active;
 	atomic_t		err_work_active;
 	u64			association_id;
 
@@ -2584,12 +2589,14 @@ static int
 nvme_fc_ctlr_active_on_rport(struct nvme_fc_ctrl *ctrl)
 {
 	struct nvme_fc_rport *rport = ctrl->rport;
+	int priorstate;
 	u32 cnt;
 
-	if (ctrl->assoc_active)
+	priorstate = atomic_cmpxchg(&ctrl->assoc_active,
+					ASSOC_INACTIVE, ASSOC_ACTIVE);
+	if (priorstate != ASSOC_INACTIVE)
 		return 1;
 
-	ctrl->assoc_active = true;
 	cnt = atomic_inc_return(&rport->act_ctrl_cnt);
 	if (cnt == 1)
 		nvme_fc_rport_active_on_lport(rport);
@@ -2746,7 +2753,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 	__nvme_fc_delete_hw_queue(ctrl, &ctrl->queues[0], 0);
 out_free_queue:
 	nvme_fc_free_queue(&ctrl->queues[0]);
-	ctrl->assoc_active = false;
+	atomic_set(&ctrl->assoc_active, ASSOC_INACTIVE);
 	nvme_fc_ctlr_inactive_on_rport(ctrl);
 
 	return ret;
@@ -2762,10 +2769,12 @@ static void
 nvme_fc_delete_association(struct nvme_fc_ctrl *ctrl)
 {
 	unsigned long flags;
+	int priorstate;
 
-	if (!ctrl->assoc_active)
+	priorstate = atomic_cmpxchg(&ctrl->assoc_active,
+					ASSOC_ACTIVE, ASSOC_INACTIVE);
+	if (priorstate != ASSOC_ACTIVE)
 		return;
-	ctrl->assoc_active = false;
 
 	spin_lock_irqsave(&ctrl->lock, flags);
 	ctrl->flags |= FCCTRL_TERMIO;
@@ -3096,7 +3105,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 	ctrl->dev = lport->dev;
 	ctrl->cnum = idx;
 	ctrl->ioq_live = false;
-	ctrl->assoc_active = false;
+	atomic_set(&ctrl->assoc_active, ASSOC_INACTIVE);
 	atomic_set(&ctrl->err_work_active, 0);
 	init_waitqueue_head(&ctrl->ioabort_wait);
 
-- 
2.13.7



* [PATCH 12/29] nvme-fc: Add Disconnect Association Rcv support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (10 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:00   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 13/29] nvmet-fc: add LS failure messages James Smart
                   ` (18 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

The nvme-fc host transport did not support the reception of an
FC-NVME LS. Reception is necessary to implement full compliance
with FC-NVME-2.

Populate the LS receive handler, and specifically the handling
of a Disconnect Association LS. The response to the LS, if it
matches a controller, must not be sent until the aborts for any
I/O on any connection have been issued.
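
A rough user-space sketch of the deferral (types, names, and locking
are simplified placeholders): the LS handler only stashes the pending
Disconnect op, and association teardown transmits the response once the
aborts have been issued.

#include <pthread.h>
#include <stdio.h>

struct ls_rcv_op { const char *name; };

static pthread_mutex_t ctrl_lock = PTHREAD_MUTEX_INITIALIZER;
static struct ls_rcv_op *rcv_disconn;   /* pending response, if any */

static void send_ls_rsp(struct ls_rcv_op *op)
{
        printf("sending LS response for %s\n", op->name);
}

/* LS handler: stash the op instead of responding immediately */
static void handle_disconnect_assoc(struct ls_rcv_op *op)
{
        pthread_mutex_lock(&ctrl_lock);
        rcv_disconn = op;
        pthread_mutex_unlock(&ctrl_lock);
        /* ...then kick off association teardown... */
}

/* teardown: issue the aborts first, then send any deferred response */
static void delete_association(void)
{
        struct ls_rcv_op *pending;

        /* ...abort (ABTS) all outstanding I/O here... */
        pthread_mutex_lock(&ctrl_lock);
        pending = rcv_disconn;
        rcv_disconn = NULL;
        pthread_mutex_unlock(&ctrl_lock);
        if (pending)
                send_ls_rsp(pending);
}

int main(void)
{
        struct ls_rcv_op op = { "Disconnect Association" };

        handle_disconnect_assoc(&op);
        delete_association();
        return 0;
}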

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/host/fc.c | 363 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 359 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 40e1141c76db..ab1fbefb6bee 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -62,6 +62,17 @@ struct nvmefc_ls_req_op {
 	bool			req_queued;
 };
 
+struct nvmefc_ls_rcv_op {
+	struct nvme_fc_rport		*rport;
+	struct nvmefc_ls_rsp		*lsrsp;
+	union nvmefc_ls_requests	*rqstbuf;
+	union nvmefc_ls_responses	*rspbuf;
+	u16				rqstdatalen;
+	bool				handled;
+	dma_addr_t			rspdma;
+	struct list_head		lsrcv_list;	/* rport->ls_rcv_list */
+} __aligned(sizeof(u64));	/* alignment for other things alloc'd with */
+
 enum nvme_fcpop_state {
 	FCPOP_STATE_UNINIT	= 0,
 	FCPOP_STATE_IDLE	= 1,
@@ -118,6 +129,7 @@ struct nvme_fc_rport {
 	struct list_head		endp_list; /* for lport->endp_list */
 	struct list_head		ctrl_list;
 	struct list_head		ls_req_list;
+	struct list_head		ls_rcv_list;
 	struct list_head		disc_list;
 	struct device			*dev;	/* physical device for dma */
 	struct nvme_fc_lport		*lport;
@@ -125,6 +137,7 @@ struct nvme_fc_rport {
 	struct kref			ref;
 	atomic_t                        act_ctrl_cnt;
 	unsigned long			dev_loss_end;
+	struct work_struct		lsrcv_work;
 } __aligned(sizeof(u64));	/* alignment for other things alloc'd with */
 
 enum nvme_fcctrl_flags {
@@ -148,6 +161,7 @@ struct nvme_fc_ctrl {
 	atomic_t		assoc_active;
 	atomic_t		err_work_active;
 	u64			association_id;
+	struct nvmefc_ls_rcv_op	*rcv_disconn;
 
 	struct list_head	ctrl_list;	/* rport->ctrl_list */
 
@@ -225,6 +239,9 @@ static struct device *fc_udev_device;
 static void __nvme_fc_delete_hw_queue(struct nvme_fc_ctrl *,
 			struct nvme_fc_queue *, unsigned int);
 
+static void nvme_fc_handle_ls_rqst_work(struct work_struct *work);
+
+
 static void
 nvme_fc_free_lport(struct kref *ref)
 {
@@ -711,6 +728,7 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 	atomic_set(&newrec->act_ctrl_cnt, 0);
 	spin_lock_init(&newrec->lock);
 	newrec->remoteport.localport = &lport->localport;
+	INIT_LIST_HEAD(&newrec->ls_rcv_list);
 	newrec->dev = lport->dev;
 	newrec->lport = lport;
 	if (lport->ops->remote_priv_sz)
@@ -724,6 +742,7 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
 	newrec->remoteport.port_state = FC_OBJSTATE_ONLINE;
 	newrec->remoteport.port_num = idx;
 	__nvme_fc_set_dev_loss_tmo(newrec, pinfo);
+	INIT_WORK(&newrec->lsrcv_work, nvme_fc_handle_ls_rqst_work);
 
 	spin_lock_irqsave(&nvme_fc_lock, flags);
 	list_add_tail(&newrec->endp_list, &lport->endp_list);
@@ -1013,6 +1032,7 @@ fc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 static void nvme_fc_ctrl_put(struct nvme_fc_ctrl *);
 static int nvme_fc_ctrl_get(struct nvme_fc_ctrl *);
 
+static void nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg);
 
 static void
 __nvme_fc_finish_ls_req(struct nvmefc_ls_req_op *lsop)
@@ -1161,6 +1181,7 @@ nvme_fc_connect_admin_queue(struct nvme_fc_ctrl *ctrl,
 	struct nvmefc_ls_req *lsreq;
 	struct fcnvme_ls_cr_assoc_rqst *assoc_rqst;
 	struct fcnvme_ls_cr_assoc_acc *assoc_acc;
+	unsigned long flags;
 	int ret, fcret = 0;
 
 	lsop = kzalloc((sizeof(*lsop) +
@@ -1250,11 +1271,13 @@ nvme_fc_connect_admin_queue(struct nvme_fc_ctrl *ctrl,
 			"q %d Create Association LS failed: %s\n",
 			queue->qnum, validation_errors[fcret]);
 	} else {
+		spin_lock_irqsave(&ctrl->lock, flags);
 		ctrl->association_id =
 			be64_to_cpu(assoc_acc->associd.association_id);
 		queue->connection_id =
 			be64_to_cpu(assoc_acc->connectid.connection_id);
 		set_bit(NVME_FC_Q_CONNECTED, &queue->flags);
+		spin_unlock_irqrestore(&ctrl->lock, flags);
 	}
 
 out_free_buffer:
@@ -1435,6 +1458,247 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
 		kfree(lsop);
 }
 
+static void
+nvme_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
+{
+	struct nvmefc_ls_rcv_op *lsop = lsrsp->nvme_fc_private;
+	struct nvme_fc_rport *rport = lsop->rport;
+	struct nvme_fc_lport *lport = rport->lport;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rport->lock, flags);
+	list_del(&lsop->lsrcv_list);
+	spin_unlock_irqrestore(&rport->lock, flags);
+
+	fc_dma_sync_single_for_cpu(lport->dev, lsop->rspdma,
+				sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
+	fc_dma_unmap_single(lport->dev, lsop->rspdma,
+			sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
+
+	kfree(lsop);
+
+	nvme_fc_rport_put(rport);
+}
+
+static void
+nvme_fc_xmt_ls_rsp(struct nvmefc_ls_rcv_op *lsop)
+{
+	struct nvme_fc_rport *rport = lsop->rport;
+	struct nvme_fc_lport *lport = rport->lport;
+	struct fcnvme_ls_rqst_w0 *w0 = &lsop->rqstbuf->w0;
+	int ret;
+
+	fc_dma_sync_single_for_device(lport->dev, lsop->rspdma,
+				  sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
+
+	ret = lport->ops->xmt_ls_rsp(&lport->localport, &rport->remoteport,
+				     lsop->lsrsp);
+	if (ret) {
+		dev_warn(lport->dev,
+			"LLDD rejected LS RSP xmt: LS %d status %d\n",
+			w0->ls_cmd, ret);
+		nvme_fc_xmt_ls_rsp_done(lsop->lsrsp);
+		return;
+	}
+}
+
+static struct nvme_fc_ctrl *
+nvme_fc_match_disconn_ls(struct nvme_fc_rport *rport,
+		      struct nvmefc_ls_rcv_op *lsop)
+{
+	struct fcnvme_ls_disconnect_assoc_rqst *rqst =
+					&lsop->rqstbuf->rq_dis_assoc;
+	struct nvme_fc_ctrl *ctrl, *ret = NULL;
+	struct nvmefc_ls_rcv_op *oldls = NULL;
+	u64 association_id = be64_to_cpu(rqst->associd.association_id);
+	unsigned long flags;
+
+	spin_lock_irqsave(&rport->lock, flags);
+
+	list_for_each_entry(ctrl, &rport->ctrl_list, ctrl_list) {
+		if (!nvme_fc_ctrl_get(ctrl))
+			continue;
+		spin_lock(&ctrl->lock);
+		if (association_id == ctrl->association_id) {
+			oldls = ctrl->rcv_disconn;
+			ctrl->rcv_disconn = lsop;
+			ret = ctrl;
+		}
+		spin_unlock(&ctrl->lock);
+		if (ret)
+			/* leave the ctrl get reference */
+			break;
+		nvme_fc_ctrl_put(ctrl);
+	}
+
+	spin_unlock_irqrestore(&rport->lock, flags);
+
+	/* transmit a response for anything that was pending */
+	if (oldls) {
+		dev_info(rport->lport->dev,
+			"NVME-FC{%d}: Multiple Disconnect Association "
+			"LS's received\n", ctrl->cnum);
+		/* overwrite good response with bogus failure */
+		oldls->lsrsp->rsplen = nvme_fc_format_rjt(oldls->rspbuf,
+						sizeof(*oldls->rspbuf),
+						rqst->w0.ls_cmd,
+						FCNVME_RJT_RC_UNAB,
+						FCNVME_RJT_EXP_NONE, 0);
+		nvme_fc_xmt_ls_rsp(oldls);
+	}
+
+	return ret;
+}
+
+/*
+ * returns true to mean LS handled and ls_rsp can be sent
+ * returns false to defer ls_rsp xmt (will be done as part of
+ *     association termination)
+ */
+static bool
+nvme_fc_ls_disconnect_assoc(struct nvmefc_ls_rcv_op *lsop)
+{
+	struct nvme_fc_rport *rport = lsop->rport;
+	struct fcnvme_ls_disconnect_assoc_rqst *rqst =
+					&lsop->rqstbuf->rq_dis_assoc;
+	struct fcnvme_ls_disconnect_assoc_acc *acc =
+					&lsop->rspbuf->rsp_dis_assoc;
+	struct nvme_fc_ctrl *ctrl = NULL;
+	int ret = 0;
+
+	memset(acc, 0, sizeof(*acc));
+
+	ret = nvmefc_vldt_lsreq_discon_assoc(lsop->rqstdatalen, rqst);
+	if (!ret) {
+		/* match an active association */
+		ctrl = nvme_fc_match_disconn_ls(rport, lsop);
+		if (!ctrl)
+			ret = VERR_NO_ASSOC;
+	}
+
+	if (ret) {
+		dev_info(rport->lport->dev,
+			"Disconnect LS failed: %s\n",
+			validation_errors[ret]);
+		lsop->lsrsp->rsplen = nvme_fc_format_rjt(acc,
+					sizeof(*acc), rqst->w0.ls_cmd,
+					(ret == VERR_NO_ASSOC) ?
+						FCNVME_RJT_RC_INV_ASSOC :
+						FCNVME_RJT_RC_LOGIC,
+					FCNVME_RJT_EXP_NONE, 0);
+		return true;
+	}
+
+	/* format an ACCept response */
+
+	lsop->lsrsp->rsplen = sizeof(*acc);
+
+	nvme_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
+			fcnvme_lsdesc_len(
+				sizeof(struct fcnvme_ls_disconnect_assoc_acc)),
+			FCNVME_LS_DISCONNECT_ASSOC);
+
+	/*
+	 * the transmit of the response will occur after the exchanges
+	 * for the association have been ABTS'd by
+	 * nvme_fc_delete_association().
+	 */
+
+	/* fail the association */
+	nvme_fc_error_recovery(ctrl, "Disconnect Association LS received");
+
+	/* release the reference taken by nvme_fc_match_disconn_ls() */
+	nvme_fc_ctrl_put(ctrl);
+
+	return false;
+}
+
+/*
+ * Actual Processing routine for received FC-NVME LS Requests from the LLD
+ * returns true if a response should be sent afterward, false if rsp will
+ * be sent asynchronously.
+ */
+static bool
+nvme_fc_handle_ls_rqst(struct nvmefc_ls_rcv_op *lsop)
+{
+	struct fcnvme_ls_rqst_w0 *w0 = &lsop->rqstbuf->w0;
+	bool ret = true;
+
+	lsop->lsrsp->nvme_fc_private = lsop;
+	lsop->lsrsp->rspbuf = lsop->rspbuf;
+	lsop->lsrsp->rspdma = lsop->rspdma;
+	lsop->lsrsp->done = nvme_fc_xmt_ls_rsp_done;
+	/* Be preventative. handlers will later set to valid length */
+	lsop->lsrsp->rsplen = 0;
+
+	/*
+	 * handlers:
+	 *   parse request input, execute the request, and format the
+	 *   LS response
+	 */
+	switch (w0->ls_cmd) {
+	case FCNVME_LS_DISCONNECT_ASSOC:
+		ret = nvme_fc_ls_disconnect_assoc(lsop);
+		break;
+	case FCNVME_LS_DISCONNECT_CONN:
+		lsop->lsrsp->rsplen = nvme_fc_format_rjt(lsop->rspbuf,
+				sizeof(*lsop->rspbuf), w0->ls_cmd,
+				FCNVME_RJT_RC_UNSUP, FCNVME_RJT_EXP_NONE, 0);
+		break;
+	case FCNVME_LS_CREATE_ASSOCIATION:
+	case FCNVME_LS_CREATE_CONNECTION:
+		lsop->lsrsp->rsplen = nvme_fc_format_rjt(lsop->rspbuf,
+				sizeof(*lsop->rspbuf), w0->ls_cmd,
+				FCNVME_RJT_RC_LOGIC, FCNVME_RJT_EXP_NONE, 0);
+		break;
+	default:
+		lsop->lsrsp->rsplen = nvme_fc_format_rjt(lsop->rspbuf,
+				sizeof(*lsop->rspbuf), w0->ls_cmd,
+				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
+		break;
+	}
+
+	return(ret);
+}
+
+static void
+nvme_fc_handle_ls_rqst_work(struct work_struct *work)
+{
+	struct nvme_fc_rport *rport =
+		container_of(work, struct nvme_fc_rport, lsrcv_work);
+	struct fcnvme_ls_rqst_w0 *w0;
+	struct nvmefc_ls_rcv_op *lsop;
+	unsigned long flags;
+	bool sendrsp;
+
+restart:
+	sendrsp = true;
+	spin_lock_irqsave(&rport->lock, flags);
+	list_for_each_entry(lsop, &rport->ls_rcv_list, lsrcv_list) {
+		if (lsop->handled)
+			continue;
+
+		lsop->handled = true;
+		if (rport->remoteport.port_state == FC_OBJSTATE_ONLINE) {
+			spin_unlock_irqrestore(&rport->lock, flags);
+			sendrsp = nvme_fc_handle_ls_rqst(lsop);
+		} else {
+			spin_unlock_irqrestore(&rport->lock, flags);
+			w0 = &lsop->rqstbuf->w0;
+			lsop->lsrsp->rsplen = nvme_fc_format_rjt(
+						lsop->rspbuf,
+						sizeof(*lsop->rspbuf),
+						w0->ls_cmd,
+						FCNVME_RJT_RC_UNAB,
+						FCNVME_RJT_EXP_NONE, 0);
+		}
+		if (sendrsp)
+			nvme_fc_xmt_ls_rsp(lsop);
+		goto restart;
+	}
+	spin_unlock_irqrestore(&rport->lock, flags);
+}
+
 /**
  * nvme_fc_rcv_ls_req - transport entry point called by an LLDD
  *                       upon the reception of a NVME LS request.
@@ -1461,20 +1725,92 @@ nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
 {
 	struct nvme_fc_rport *rport = remoteport_to_rport(portptr);
 	struct nvme_fc_lport *lport = rport->lport;
+	struct fcnvme_ls_rqst_w0 *w0 = (struct fcnvme_ls_rqst_w0 *)lsreqbuf;
+	struct nvmefc_ls_rcv_op *lsop;
+	unsigned long flags;
+	int ret;
+
+	nvme_fc_rport_get(rport);
 
 	/* validate there's a routine to transmit a response */
-	if (!lport->ops->xmt_ls_rsp)
-		return(-EINVAL);
+	if (!lport->ops->xmt_ls_rsp) {
+		dev_info(lport->dev,
+			"RCV %s LS failed: no LLDD xmt_ls_rsp\n",
+			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
+				nvmefc_ls_names[w0->ls_cmd] : "");
+		ret = -EINVAL;
+		goto out_put;
+	}
+
+	if (lsreqbuf_len > sizeof(union nvmefc_ls_requests)) {
+		dev_info(lport->dev,
+			"RCV %s LS failed: payload too large\n",
+			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
+				nvmefc_ls_names[w0->ls_cmd] : "");
+		ret = -E2BIG;
+		goto out_put;
+	}
+
+	lsop = kzalloc(sizeof(*lsop) +
+			sizeof(union nvmefc_ls_requests) +
+			sizeof(union nvmefc_ls_responses),
+			GFP_KERNEL);
+	if (!lsop) {
+		dev_info(lport->dev,
+			"RCV %s LS failed: No memory\n",
+			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
+				nvmefc_ls_names[w0->ls_cmd] : "");
+		ret = -ENOMEM;
+		goto out_put;
+	}
+	lsop->rqstbuf = (union nvmefc_ls_requests *)&lsop[1];
+	lsop->rspbuf = (union nvmefc_ls_responses *)&lsop->rqstbuf[1];
+
+	lsop->rspdma = fc_dma_map_single(lport->dev, lsop->rspbuf,
+					sizeof(*lsop->rspbuf),
+					DMA_TO_DEVICE);
+	if (fc_dma_mapping_error(lport->dev, lsop->rspdma)) {
+		dev_info(lport->dev,
+			"RCV %s LS failed: DMA mapping failure\n",
+			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
+				nvmefc_ls_names[w0->ls_cmd] : "");
+		ret = -EFAULT;
+		goto out_free;
+	}
+
+	lsop->rport = rport;
+	lsop->lsrsp = lsrsp;
+
+	memcpy(lsop->rqstbuf, lsreqbuf, lsreqbuf_len);
+	lsop->rqstdatalen = lsreqbuf_len;
+
+	spin_lock_irqsave(&rport->lock, flags);
+	if (rport->remoteport.port_state != FC_OBJSTATE_ONLINE) {
+		spin_unlock_irqrestore(&rport->lock, flags);
+		ret = -ENOTCONN;
+		goto out_unmap;
+	}
+	list_add_tail(&lsop->lsrcv_list, &rport->ls_rcv_list);
+	spin_unlock_irqrestore(&rport->lock, flags);
+
+	schedule_work(&rport->lsrcv_work);
 
 	return 0;
+
+out_unmap:
+	fc_dma_unmap_single(lport->dev, lsop->rspdma,
+			sizeof(*lsop->rspbuf), DMA_TO_DEVICE);
+out_free:
+	kfree(lsop);
+out_put:
+	nvme_fc_rport_put(rport);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nvme_fc_rcv_ls_req);
 
 
 /* *********************** NVME Ctrl Routines **************************** */
 
-static void nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg);
-
 static void
 __nvme_fc_exit_request(struct nvme_fc_ctrl *ctrl,
 		struct nvme_fc_fcp_op *op)
@@ -2631,6 +2967,8 @@ static int
 nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 {
 	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+	struct nvmefc_ls_rcv_op *disls = NULL;
+	unsigned long flags;
 	int ret;
 	bool changed;
 
@@ -2748,7 +3086,13 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 out_disconnect_admin_queue:
 	/* send a Disconnect(association) LS to fc-nvme target */
 	nvme_fc_xmt_disconnect_assoc(ctrl);
+	spin_lock_irqsave(&ctrl->lock, flags);
 	ctrl->association_id = 0;
+	disls = ctrl->rcv_disconn;
+	ctrl->rcv_disconn = NULL;
+	spin_unlock_irqrestore(&ctrl->lock, flags);
+	if (disls)
+		nvme_fc_xmt_ls_rsp(disls);
 out_delete_hw_queue:
 	__nvme_fc_delete_hw_queue(ctrl, &ctrl->queues[0], 0);
 out_free_queue:
@@ -2768,6 +3112,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 static void
 nvme_fc_delete_association(struct nvme_fc_ctrl *ctrl)
 {
+	struct nvmefc_ls_rcv_op *disls = NULL;
 	unsigned long flags;
 	int priorstate;
 
@@ -2842,7 +3187,17 @@ nvme_fc_delete_association(struct nvme_fc_ctrl *ctrl)
 	if (ctrl->association_id)
 		nvme_fc_xmt_disconnect_assoc(ctrl);
 
+	spin_lock_irqsave(&ctrl->lock, flags);
 	ctrl->association_id = 0;
+	disls = ctrl->rcv_disconn;
+	ctrl->rcv_disconn = NULL;
+	spin_unlock_irqrestore(&ctrl->lock, flags);
+	if (disls)
+		/*
+		 * if a Disconnect Request was waiting for a response, send
+		 * now that all ABTS's have been issued (and are complete).
+		 */
+		nvme_fc_xmt_ls_rsp(disls);
 
 	if (ctrl->ctrl.tagset) {
 		nvme_fc_delete_hw_io_queues(ctrl);
-- 
2.13.7



* [PATCH 13/29] nvmet-fc: add LS failure messages
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (11 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 12/29] nvme-fc: Add Disconnect Association Rcv support James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:01   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 14/29] nvmet-fc: perform small cleanups on unneeded checks James Smart
                   ` (17 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Add LS reception failure messages

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fc.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 5739df7edc59..a91c443c9098 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1598,15 +1598,31 @@ nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
 {
 	struct nvmet_fc_tgtport *tgtport = targetport_to_tgtport(target_port);
 	struct nvmet_fc_ls_iod *iod;
-
-	if (lsreqbuf_len > sizeof(union nvmefc_ls_requests))
+	struct fcnvme_ls_rqst_w0 *w0 = (struct fcnvme_ls_rqst_w0 *)lsreqbuf;
+
+	if (lsreqbuf_len > sizeof(union nvmefc_ls_requests)) {
+		dev_info(tgtport->dev,
+			"RCV %s LS failed: payload too large (%d)\n",
+			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
+				nvmefc_ls_names[w0->ls_cmd] : "",
+			lsreqbuf_len);
 		return -E2BIG;
+	}
 
-	if (!nvmet_fc_tgtport_get(tgtport))
+	if (!nvmet_fc_tgtport_get(tgtport)) {
+		dev_info(tgtport->dev,
+			"RCV %s LS failed: target deleting\n",
+			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
+				nvmefc_ls_names[w0->ls_cmd] : "");
 		return -ESHUTDOWN;
+	}
 
 	iod = nvmet_fc_alloc_ls_iod(tgtport);
 	if (!iod) {
+		dev_info(tgtport->dev,
+			"RCV %s LS failed: context allocation failed\n",
+			(w0->ls_cmd <= NVME_FC_LAST_LS_CMD_VALUE) ?
+				nvmefc_ls_names[w0->ls_cmd] : "");
 		nvmet_fc_tgtport_put(tgtport);
 		return -ENOENT;
 	}
-- 
2.13.7



* [PATCH 14/29] nvmet-fc: perform small cleanups on unneeded checks
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (12 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 13/29] nvmet-fc: add LS failure messages James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:01   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 15/29] nvmet-fc: track hostport handle for associations James Smart
                   ` (16 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Code review turned up a couple of items that can be cleaned up:
- In nvmet_fc_delete_target_queue(), the routine unlocks, then checks
  and relocks.  Reorganize to avoid the unlock/relock (see the sketch
  below).
- In nvmet_fc_delete_target_queue(), there's a check on the disconnect
  state that is unnecessary, as the routine validates the state before
  starting any action.
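
For reference, the reorganized section of nvmet_fc_delete_target_queue()
ends up shaped roughly as follows (simplified from the hunk below; fod
and tgtport come from the surrounding loop):

	spin_lock(&fod->flock);
	fod->abort = true;
	/*
	 * only call the LLDD abort routine if waiting for writedata;
	 * other outstanding ops should finish on their own.
	 */
	if (fod->writedataactive) {
		fod->aborted = true;
		spin_unlock(&fod->flock);
		tgtport->ops->fcp_abort(&tgtport->fc_target_port,
					fod->fcpreq);
	} else {
		spin_unlock(&fod->flock);
	}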

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fc.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index a91c443c9098..35b5cc0d2240 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -688,20 +688,18 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
 		if (fod->active) {
 			spin_lock(&fod->flock);
 			fod->abort = true;
-			writedataactive = fod->writedataactive;
-			spin_unlock(&fod->flock);
 			/*
 			 * only call lldd abort routine if waiting for
 			 * writedata. other outstanding ops should finish
 			 * on their own.
 			 */
-			if (writedataactive) {
-				spin_lock(&fod->flock);
+			if (fod->writedataactive) {
 				fod->aborted = true;
 				spin_unlock(&fod->flock);
 				tgtport->ops->fcp_abort(
 					&tgtport->fc_target_port, fod->fcpreq);
-			}
+			} else
+				spin_unlock(&fod->flock);
 		}
 	}
 
@@ -741,8 +739,7 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
 
 	flush_workqueue(queue->work_q);
 
-	if (disconnect)
-		nvmet_sq_destroy(&queue->nvme_sq);
+	nvmet_sq_destroy(&queue->nvme_sq);
 
 	nvmet_fc_tgt_q_put(queue);
 }
-- 
2.13.7



* [PATCH 15/29] nvmet-fc: track hostport handle for associations
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (13 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 14/29] nvmet-fc: perform small cleanups on unneeded checks James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:02   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 16/29] nvmet-fc: rename ls_list to ls_rcv_list James Smart
                   ` (15 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

In preparation for sending LS requests when an association
terminates, save and track the hosthandle that accompanies the
LS's received to create associations.

Support consists of:
- Create a hostport structure that will be 1:1 mapped to a
  host port handle. The hostport structure is specific to
  a targetport.
- Whenever an association is created, create a host port for
  the hosthandle the Create Association LS was received from.
  There will be only 1 hostport structure created, with all
  associations that have the same hosthandle sharing the
  hostport structure.
- When the association is terminated, the hostport reference
  will be removed. After the last association for the host
  port is removed, the hostport will be deleted.
- Add support for the new nvmet_fc_invalidate_host() interface.
  In the past, the LLDD didn't notify loss of connectivity to
  host ports - the LLDD would simply reject new requests and wait
  for the KATO timeout to kill the association. Now, when host
  port connectivity is lost, the LLDD can notify the transport.
  The transport will initiate the termination of all associations
  for that host port. When the last association has been terminated
  and the hosthandle is no longer referenced, the new
  host_release callback will be made to the LLDD (see the sketch
  after this list).
- For compatibility with prior behavior, which didn't report the
  hosthandle, the LLDD must set hosthandle to NULL. In these
  cases, no LS request will be made, and no host_release callbacks
  will be made either.
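
A minimal, hypothetical sketch of how an LLDD might use the new
interface is below. The example_lldd_* names and structures are
invented for illustration; only nvmet_fc_invalidate_host() and the
nvmet_fc_target_template host_release entry come from the transport.

/*
 * Hypothetical LLDD sketch: the pointer passed as 'hosthandle' to
 * nvmet_fc_rcv_ls_req() is later used to tell the transport that
 * connectivity to the host port was lost, and is handed back in
 * host_release() once nothing references it anymore.
 */
static void example_lldd_host_link_down(struct example_lldd_host *host)
{
	/* transport terminates all associations for this hosthandle */
	nvmet_fc_invalidate_host(host->targetport, host);
}

static void example_lldd_host_release(void *hosthandle)
{
	struct example_lldd_host *host = hosthandle;

	/* last association gone, no further references - drop our ref */
	example_lldd_host_put(host);
}

static struct nvmet_fc_target_template example_lldd_tgt_template = {
	/* ... other mandatory entries omitted in this sketch ... */
	.host_release	= example_lldd_host_release,
};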

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fc.c | 177 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 170 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 35b5cc0d2240..2c5b702a8561 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -33,6 +33,7 @@ struct nvmet_fc_ls_iod {
 
 	struct nvmet_fc_tgtport		*tgtport;
 	struct nvmet_fc_tgt_assoc	*assoc;
+	void				*hosthandle;
 
 	union nvmefc_ls_requests	*rqstbuf;
 	union nvmefc_ls_responses	*rspbuf;
@@ -81,7 +82,6 @@ struct nvmet_fc_fcp_iod {
 };
 
 struct nvmet_fc_tgtport {
-
 	struct nvmet_fc_target_port	fc_target_port;
 
 	struct list_head		tgt_list; /* nvmet_fc_target_list */
@@ -93,6 +93,7 @@ struct nvmet_fc_tgtport {
 	struct list_head		ls_list;
 	struct list_head		ls_busylist;
 	struct list_head		assoc_list;
+	struct list_head		host_list;
 	struct ida			assoc_cnt;
 	struct nvmet_fc_port_entry	*pe;
 	struct kref			ref;
@@ -134,14 +135,24 @@ struct nvmet_fc_tgt_queue {
 	struct nvmet_fc_fcp_iod		fod[];		/* array of fcp_iods */
 } __aligned(sizeof(unsigned long long));
 
+struct nvmet_fc_hostport {
+	struct nvmet_fc_tgtport		*tgtport;
+	void				*hosthandle;
+	struct list_head		host_list;
+	struct kref			ref;
+	u8				invalid;
+};
+
 struct nvmet_fc_tgt_assoc {
 	u64				association_id;
 	u32				a_id;
 	struct nvmet_fc_tgtport		*tgtport;
+	struct nvmet_fc_hostport	*hostport;
 	struct list_head		a_list;
 	struct nvmet_fc_tgt_queue	*queues[NVMET_NR_QUEUES + 1];
 	struct kref			ref;
 	struct work_struct		del_work;
+	atomic_t			del_work_active;
 };
 
 
@@ -774,17 +785,114 @@ nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
 }
 
 static void
+nvmet_fc_hostport_free(struct kref *ref)
+{
+	struct nvmet_fc_hostport *hostport =
+		container_of(ref, struct nvmet_fc_hostport, ref);
+	struct nvmet_fc_tgtport *tgtport = hostport->tgtport;
+	unsigned long flags;
+
+	spin_lock_irqsave(&tgtport->lock, flags);
+	list_del(&hostport->host_list);
+	spin_unlock_irqrestore(&tgtport->lock, flags);
+	if (tgtport->ops->host_release && hostport->invalid)
+		tgtport->ops->host_release(hostport->hosthandle);
+	kfree(hostport);
+	nvmet_fc_tgtport_put(tgtport);
+}
+
+static void
+nvmet_fc_hostport_put(struct nvmet_fc_hostport *hostport)
+{
+	kref_put(&hostport->ref, nvmet_fc_hostport_free);
+}
+
+static int
+nvmet_fc_hostport_get(struct nvmet_fc_hostport *hostport)
+{
+	return kref_get_unless_zero(&hostport->ref);
+}
+
+static void
+nvmet_fc_free_hostport(struct nvmet_fc_hostport *hostport)
+{
+	/* if LLDD not implemented, leave as NULL */
+	if (!hostport->hosthandle)
+		return;
+
+	nvmet_fc_hostport_put(hostport);
+}
+
+static struct nvmet_fc_hostport *
+nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+{
+	struct nvmet_fc_hostport *newhost, *host, *match = NULL;
+	unsigned long flags;
+
+	/* if LLDD not implemented, leave as NULL */
+	if (!hosthandle)
+		return NULL;
+
+	/* take reference for what will be the newly allocated hostport */
+	if (!nvmet_fc_tgtport_get(tgtport))
+		return ERR_PTR(-EINVAL);
+
+	newhost = kzalloc(sizeof(*newhost), GFP_KERNEL);
+	if (!newhost) {
+		spin_lock_irqsave(&tgtport->lock, flags);
+		list_for_each_entry(host, &tgtport->host_list, host_list) {
+			if (host->hosthandle == hosthandle && !host->invalid) {
+				if (nvmet_fc_hostport_get(host)) {
+					match = host;
+					break;
+				}
+			}
+		}
+		spin_unlock_irqrestore(&tgtport->lock, flags);
+		/* no allocation - release reference */
+		nvmet_fc_tgtport_put(tgtport);
+		return (match) ? match : ERR_PTR(-ENOMEM);
+	}
+
+	newhost->tgtport = tgtport;
+	newhost->hosthandle = hosthandle;
+	INIT_LIST_HEAD(&newhost->host_list);
+	kref_init(&newhost->ref);
+
+	spin_lock_irqsave(&tgtport->lock, flags);
+	list_for_each_entry(host, &tgtport->host_list, host_list) {
+		if (host->hosthandle == hosthandle && !host->invalid) {
+			if (nvmet_fc_hostport_get(host)) {
+				match = host;
+				break;
+			}
+		}
+	}
+	if (match) {
+		kfree(newhost);
+		newhost = NULL;
+		/* releasing allocation - release reference */
+		nvmet_fc_tgtport_put(tgtport);
+	} else
+		list_add_tail(&newhost->host_list, &tgtport->host_list);
+	spin_unlock_irqrestore(&tgtport->lock, flags);
+
+	return (match) ? match : newhost;
+}
+
+static void
 nvmet_fc_delete_assoc(struct work_struct *work)
 {
 	struct nvmet_fc_tgt_assoc *assoc =
 		container_of(work, struct nvmet_fc_tgt_assoc, del_work);
 
 	nvmet_fc_delete_target_assoc(assoc);
+	atomic_set(&assoc->del_work_active, 0);
 	nvmet_fc_tgt_a_put(assoc);
 }
 
 static struct nvmet_fc_tgt_assoc *
-nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport)
+nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
 {
 	struct nvmet_fc_tgt_assoc *assoc, *tmpassoc;
 	unsigned long flags;
@@ -801,13 +909,18 @@ nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport)
 		goto out_free_assoc;
 
 	if (!nvmet_fc_tgtport_get(tgtport))
-		goto out_ida_put;
+		goto out_ida;
+
+	assoc->hostport = nvmet_fc_alloc_hostport(tgtport, hosthandle);
+	if (IS_ERR(assoc->hostport))
+		goto out_put;
 
 	assoc->tgtport = tgtport;
 	assoc->a_id = idx;
 	INIT_LIST_HEAD(&assoc->a_list);
 	kref_init(&assoc->ref);
 	INIT_WORK(&assoc->del_work, nvmet_fc_delete_assoc);
+	atomic_set(&assoc->del_work_active, 0);
 
 	while (needrandom) {
 		get_random_bytes(&ran, sizeof(ran) - BYTES_FOR_QID);
@@ -829,7 +942,9 @@ nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport)
 
 	return assoc;
 
-out_ida_put:
+out_put:
+	nvmet_fc_tgtport_put(tgtport);
+out_ida:
 	ida_simple_remove(&tgtport->assoc_cnt, idx);
 out_free_assoc:
 	kfree(assoc);
@@ -844,6 +959,7 @@ nvmet_fc_target_assoc_free(struct kref *ref)
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
 	unsigned long flags;
 
+	nvmet_fc_free_hostport(assoc->hostport);
 	spin_lock_irqsave(&tgtport->lock, flags);
 	list_del(&assoc->a_list);
 	spin_unlock_irqrestore(&tgtport->lock, flags);
@@ -1057,6 +1173,7 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 	INIT_LIST_HEAD(&newrec->ls_list);
 	INIT_LIST_HEAD(&newrec->ls_busylist);
 	INIT_LIST_HEAD(&newrec->assoc_list);
+	INIT_LIST_HEAD(&newrec->host_list);
 	kref_init(&newrec->ref);
 	ida_init(&newrec->assoc_cnt);
 	newrec->max_sg_cnt = template->max_sgl_segments;
@@ -1133,14 +1250,21 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
 {
 	struct nvmet_fc_tgt_assoc *assoc, *next;
 	unsigned long flags;
+	int ret;
 
 	spin_lock_irqsave(&tgtport->lock, flags);
 	list_for_each_entry_safe(assoc, next,
 				&tgtport->assoc_list, a_list) {
 		if (!nvmet_fc_tgt_a_get(assoc))
 			continue;
-		if (!schedule_work(&assoc->del_work))
+		ret = atomic_cmpxchg(&assoc->del_work_active, 0, 1);
+		if (ret == 0) {
+			if (!schedule_work(&assoc->del_work))
+				nvmet_fc_tgt_a_put(assoc);
+		} else {
+			/* already deleting - release local reference */
 			nvmet_fc_tgt_a_put(assoc);
+		}
 	}
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 }
@@ -1178,6 +1302,36 @@ void
 nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
 			void *hosthandle)
 {
+	struct nvmet_fc_tgtport *tgtport = targetport_to_tgtport(target_port);
+	struct nvmet_fc_tgt_assoc *assoc, *next;
+	unsigned long flags;
+	bool noassoc = true;
+	int ret;
+
+	spin_lock_irqsave(&tgtport->lock, flags);
+	list_for_each_entry_safe(assoc, next,
+				&tgtport->assoc_list, a_list) {
+		if (!assoc->hostport ||
+		    assoc->hostport->hosthandle != hosthandle)
+			continue;
+		if (!nvmet_fc_tgt_a_get(assoc))
+			continue;
+		assoc->hostport->invalid = 1;
+		noassoc = false;
+		ret = atomic_cmpxchg(&assoc->del_work_active, 0, 1);
+		if (ret == 0) {
+			if (!schedule_work(&assoc->del_work))
+				nvmet_fc_tgt_a_put(assoc);
+		} else {
+			/* already deleting - release local reference */
+			nvmet_fc_tgt_a_put(assoc);
+		}
+	}
+	spin_unlock_irqrestore(&tgtport->lock, flags);
+
+	/* if there's nothing to wait for - call the callback */
+	if (noassoc && tgtport->ops->host_release)
+		tgtport->ops->host_release(hosthandle);
 }
 EXPORT_SYMBOL_GPL(nvmet_fc_invalidate_host);
 
@@ -1192,6 +1346,7 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
 	struct nvmet_fc_tgt_queue *queue;
 	unsigned long flags;
 	bool found_ctrl = false;
+	int ret;
 
 	/* this is a bit ugly, but don't want to make locks layered */
 	spin_lock_irqsave(&nvmet_fc_tgtlock, flags);
@@ -1215,8 +1370,14 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
 		nvmet_fc_tgtport_put(tgtport);
 
 		if (found_ctrl) {
-			if (!schedule_work(&assoc->del_work))
+			ret = atomic_cmpxchg(&assoc->del_work_active, 0, 1);
+			if (ret == 0) {
+				if (!schedule_work(&assoc->del_work))
+					nvmet_fc_tgt_a_put(assoc);
+			} else {
+				/* already deleting - release local reference */
 				nvmet_fc_tgt_a_put(assoc);
+			}
 			return;
 		}
 
@@ -1293,7 +1454,8 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 
 	else {
 		/* new association w/ admin queue */
-		iod->assoc = nvmet_fc_alloc_target_assoc(tgtport);
+		iod->assoc = nvmet_fc_alloc_target_assoc(
+						tgtport, iod->hosthandle);
 		if (!iod->assoc)
 			ret = VERR_ASSOC_ALLOC_FAIL;
 		else {
@@ -1628,6 +1790,7 @@ nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
 	iod->fcpreq = NULL;
 	memcpy(iod->rqstbuf, lsreqbuf, lsreqbuf_len);
 	iod->rqstdatalen = lsreqbuf_len;
+	iod->hosthandle = hosthandle;
 
 	schedule_work(&iod->work);
 
-- 
2.13.7



* [PATCH 16/29] nvmet-fc: rename ls_list to ls_rcv_list
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (14 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 15/29] nvmet-fc: track hostport handle for associations James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:03   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 17/29] nvmet-fc: Add Disconnect Association Xmt support James Smart
                   ` (14 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

In preparation for adding LS request support, rename the current ls_list,
which is used only for received LS requests, to ls_rcv_list.

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fc.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 2c5b702a8561..d52393cd29f7 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -29,7 +29,7 @@ struct nvmet_fc_ls_iod {
 	struct nvmefc_ls_rsp		*lsrsp;
 	struct nvmefc_tgt_fcp_req	*fcpreq;	/* only if RS */
 
-	struct list_head		ls_list;	/* tgtport->ls_list */
+	struct list_head		ls_rcv_list; /* tgtport->ls_rcv_list */
 
 	struct nvmet_fc_tgtport		*tgtport;
 	struct nvmet_fc_tgt_assoc	*assoc;
@@ -90,7 +90,7 @@ struct nvmet_fc_tgtport {
 
 	struct nvmet_fc_ls_iod		*iod;
 	spinlock_t			lock;
-	struct list_head		ls_list;
+	struct list_head		ls_rcv_list;
 	struct list_head		ls_busylist;
 	struct list_head		assoc_list;
 	struct list_head		host_list;
@@ -346,7 +346,7 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 	for (i = 0; i < NVMET_LS_CTX_COUNT; iod++, i++) {
 		INIT_WORK(&iod->work, nvmet_fc_handle_ls_rqst_work);
 		iod->tgtport = tgtport;
-		list_add_tail(&iod->ls_list, &tgtport->ls_list);
+		list_add_tail(&iod->ls_rcv_list, &tgtport->ls_rcv_list);
 
 		iod->rqstbuf = kzalloc(sizeof(union nvmefc_ls_requests) +
 				       sizeof(union nvmefc_ls_responses),
@@ -367,12 +367,12 @@ nvmet_fc_alloc_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 
 out_fail:
 	kfree(iod->rqstbuf);
-	list_del(&iod->ls_list);
+	list_del(&iod->ls_rcv_list);
 	for (iod--, i--; i >= 0; iod--, i--) {
 		fc_dma_unmap_single(tgtport->dev, iod->rspdma,
 				sizeof(*iod->rspbuf), DMA_TO_DEVICE);
 		kfree(iod->rqstbuf);
-		list_del(&iod->ls_list);
+		list_del(&iod->ls_rcv_list);
 	}
 
 	kfree(iod);
@@ -391,7 +391,7 @@ nvmet_fc_free_ls_iodlist(struct nvmet_fc_tgtport *tgtport)
 				iod->rspdma, sizeof(*iod->rspbuf),
 				DMA_TO_DEVICE);
 		kfree(iod->rqstbuf);
-		list_del(&iod->ls_list);
+		list_del(&iod->ls_rcv_list);
 	}
 	kfree(tgtport->iod);
 }
@@ -403,10 +403,10 @@ nvmet_fc_alloc_ls_iod(struct nvmet_fc_tgtport *tgtport)
 	unsigned long flags;
 
 	spin_lock_irqsave(&tgtport->lock, flags);
-	iod = list_first_entry_or_null(&tgtport->ls_list,
-					struct nvmet_fc_ls_iod, ls_list);
+	iod = list_first_entry_or_null(&tgtport->ls_rcv_list,
+					struct nvmet_fc_ls_iod, ls_rcv_list);
 	if (iod)
-		list_move_tail(&iod->ls_list, &tgtport->ls_busylist);
+		list_move_tail(&iod->ls_rcv_list, &tgtport->ls_busylist);
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 	return iod;
 }
@@ -419,7 +419,7 @@ nvmet_fc_free_ls_iod(struct nvmet_fc_tgtport *tgtport,
 	unsigned long flags;
 
 	spin_lock_irqsave(&tgtport->lock, flags);
-	list_move(&iod->ls_list, &tgtport->ls_list);
+	list_move(&iod->ls_rcv_list, &tgtport->ls_rcv_list);
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 }
 
@@ -1170,7 +1170,7 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 	newrec->dev = dev;
 	newrec->ops = template;
 	spin_lock_init(&newrec->lock);
-	INIT_LIST_HEAD(&newrec->ls_list);
+	INIT_LIST_HEAD(&newrec->ls_rcv_list);
 	INIT_LIST_HEAD(&newrec->ls_busylist);
 	INIT_LIST_HEAD(&newrec->assoc_list);
 	INIT_LIST_HEAD(&newrec->host_list);
-- 
2.13.7



* [PATCH 17/29] nvmet-fc: Add Disconnect Association Xmt support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (15 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 16/29] nvmet-fc: rename ls_list to ls_rcv_list James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:04   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 18/29] nvme-fcloop: refactor to enable target to host LS James Smart
                   ` (13 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

As part of FC-NVME-2 (and an amendment to FC-NVME), the target is to
send a Disconnect LS after an association is terminated and any
exchanges for the association have been ABTS'd. The target is also
not to send the response to any Disconnect Association LS - whether
received to initiate the association termination or received while
the association is terminating - until its own Disconnect LS has been
transmitted.

Add support for sending the Disconnect Association LS after all I/O's
complete (which is certainly after the exchanges have been ABTS'd).
Utilize the new LLDD API to send LS requests.

There is no need to track the Disconnect LS response or to retry
after timeout. All spec requirements will have been met by waiting
for I/O completion to initiate the transmission.

Add support for tracking the reception of Disconnect Association LS's
and deferring the response transmission until after the Disconnect
Association LS has been transmitted.
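
A rough sketch of the target-side LS request hook an LLDD would supply
to enable this follows; the example_lldd_* names are invented, while
the nvmet_fc_target_template ->ls_req signature matches the plumbing in
the hunks below.

/*
 * Sketch of a target-side ->ls_req implementation. The transport has
 * already mapped the request/response buffers (lsreq->rqstdma/rspdma)
 * and set lsreq->done before calling this; on a synchronous error
 * return it unwinds and frees the request itself.
 */
static int
example_lldd_tgt_ls_req(struct nvmet_fc_target_port *targetport,
			void *hosthandle, struct nvmefc_ls_req *lsreq)
{
	struct example_lldd_host *host = hosthandle;

	if (!host || !host->link_up)
		return -ENODEV;

	/*
	 * put lsreq->rqstlen bytes from lsreq->rqstaddr on the wire and
	 * call lsreq->done(lsreq, status) when the exchange completes.
	 */
	return example_lldd_issue_ls(host, lsreq);
}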

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fc.c | 298 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 287 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index d52393cd29f7..3e94c4909cf9 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -25,7 +25,7 @@
 struct nvmet_fc_tgtport;
 struct nvmet_fc_tgt_assoc;
 
-struct nvmet_fc_ls_iod {
+struct nvmet_fc_ls_iod {		// for an LS RQST RCV
 	struct nvmefc_ls_rsp		*lsrsp;
 	struct nvmefc_tgt_fcp_req	*fcpreq;	/* only if RS */
 
@@ -45,6 +45,18 @@ struct nvmet_fc_ls_iod {
 	struct work_struct		work;
 } __aligned(sizeof(unsigned long long));
 
+struct nvmet_fc_ls_req_op {		// for an LS RQST XMT
+	struct nvmefc_ls_req		ls_req;
+
+	struct nvmet_fc_tgtport		*tgtport;
+	void				*hosthandle;
+
+	int				ls_error;
+	struct list_head		lsreq_list; /* tgtport->ls_req_list */
+	bool				req_queued;
+};
+
+
 /* desired maximum for a single sequence - if sg list allows it */
 #define NVMET_FC_MAX_SEQ_LENGTH		(256 * 1024)
 
@@ -91,6 +103,7 @@ struct nvmet_fc_tgtport {
 	struct nvmet_fc_ls_iod		*iod;
 	spinlock_t			lock;
 	struct list_head		ls_rcv_list;
+	struct list_head		ls_req_list;
 	struct list_head		ls_busylist;
 	struct list_head		assoc_list;
 	struct list_head		host_list;
@@ -146,8 +159,10 @@ struct nvmet_fc_hostport {
 struct nvmet_fc_tgt_assoc {
 	u64				association_id;
 	u32				a_id;
+	atomic_t			terminating;
 	struct nvmet_fc_tgtport		*tgtport;
 	struct nvmet_fc_hostport	*hostport;
+	struct nvmet_fc_ls_iod		*rcv_disconn;
 	struct list_head		a_list;
 	struct nvmet_fc_tgt_queue	*queues[NVMET_NR_QUEUES + 1];
 	struct kref			ref;
@@ -236,6 +251,8 @@ static int nvmet_fc_tgtport_get(struct nvmet_fc_tgtport *tgtport);
 static void nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 					struct nvmet_fc_fcp_iod *fod);
 static void nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc);
+static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
+				struct nvmet_fc_ls_iod *iod);
 
 
 /* *********************** FC-NVME DMA Handling **************************** */
@@ -327,6 +344,188 @@ fc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 }
 
 
+/* ********************** FC-NVME LS XMT Handling ************************* */
+
+
+static void
+__nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
+{
+	struct nvmet_fc_tgtport *tgtport = lsop->tgtport;
+	struct nvmefc_ls_req *lsreq = &lsop->ls_req;
+	unsigned long flags;
+
+	spin_lock_irqsave(&tgtport->lock, flags);
+
+	if (!lsop->req_queued) {
+		spin_unlock_irqrestore(&tgtport->lock, flags);
+		return;
+	}
+
+	list_del(&lsop->lsreq_list);
+
+	lsop->req_queued = false;
+
+	spin_unlock_irqrestore(&tgtport->lock, flags);
+
+	fc_dma_unmap_single(tgtport->dev, lsreq->rqstdma,
+				  (lsreq->rqstlen + lsreq->rsplen),
+				  DMA_BIDIRECTIONAL);
+
+	nvmet_fc_tgtport_put(tgtport);
+}
+
+static int
+__nvmet_fc_send_ls_req(struct nvmet_fc_tgtport *tgtport,
+		struct nvmet_fc_ls_req_op *lsop,
+		void (*done)(struct nvmefc_ls_req *req, int status))
+{
+	struct nvmefc_ls_req *lsreq = &lsop->ls_req;
+	unsigned long flags;
+	int ret = 0;
+
+	if (!tgtport->ops->ls_req)
+		return -EOPNOTSUPP;
+
+	if (!nvmet_fc_tgtport_get(tgtport))
+		return -ESHUTDOWN;
+
+	lsreq->done = done;
+	lsop->req_queued = false;
+	INIT_LIST_HEAD(&lsop->lsreq_list);
+
+	lsreq->rqstdma = fc_dma_map_single(tgtport->dev, lsreq->rqstaddr,
+				  lsreq->rqstlen + lsreq->rsplen,
+				  DMA_BIDIRECTIONAL);
+	if (fc_dma_mapping_error(tgtport->dev, lsreq->rqstdma)) {
+		ret = -EFAULT;
+		goto out_puttgtport;
+	}
+	lsreq->rspdma = lsreq->rqstdma + lsreq->rqstlen;
+
+	spin_lock_irqsave(&tgtport->lock, flags);
+
+	list_add_tail(&lsop->lsreq_list, &tgtport->ls_req_list);
+
+	lsop->req_queued = true;
+
+	spin_unlock_irqrestore(&tgtport->lock, flags);
+
+	ret = tgtport->ops->ls_req(&tgtport->fc_target_port, lsop->hosthandle,
+				   lsreq);
+	if (ret)
+		goto out_unlink;
+
+	return 0;
+
+out_unlink:
+	lsop->ls_error = ret;
+	spin_lock_irqsave(&tgtport->lock, flags);
+	lsop->req_queued = false;
+	list_del(&lsop->lsreq_list);
+	spin_unlock_irqrestore(&tgtport->lock, flags);
+	fc_dma_unmap_single(tgtport->dev, lsreq->rqstdma,
+				  (lsreq->rqstlen + lsreq->rsplen),
+				  DMA_BIDIRECTIONAL);
+out_puttgtport:
+	nvmet_fc_tgtport_put(tgtport);
+
+	return ret;
+}
+
+static int
+nvmet_fc_send_ls_req_async(struct nvmet_fc_tgtport *tgtport,
+		struct nvmet_fc_ls_req_op *lsop,
+		void (*done)(struct nvmefc_ls_req *req, int status))
+{
+	/* don't wait for completion */
+
+	return __nvmet_fc_send_ls_req(tgtport, lsop, done);
+}
+
+static void
+nvmet_fc_disconnect_assoc_done(struct nvmefc_ls_req *lsreq, int status)
+{
+	struct nvmet_fc_ls_req_op *lsop =
+		container_of(lsreq, struct nvmet_fc_ls_req_op, ls_req);
+
+	__nvmet_fc_finish_ls_req(lsop);
+
+	/* fc-nvme target doesn't care about success or failure of cmd */
+
+	kfree(lsop);
+}
+
+/*
+ * This routine sends a FC-NVME LS to disconnect (aka terminate)
+ * the FC-NVME Association.  Terminating the association also
+ * terminates the FC-NVME connections (per queue, both admin and io
+ * queues) that are part of the association. E.g. things are torn
+ * down, and the related FC-NVME Association ID and Connection IDs
+ * become invalid.
+ *
+ * The behavior of the fc-nvme target is such that it's
+ * understanding of the association and connections will implicitly
+ * be torn down. The action is implicit as it may be due to a loss of
+ * connectivity with the fc-nvme host, so the target may never get a
+ * response even if it tried.  As such, the action of this routine
+ * is to asynchronously send the LS, ignore any results of the LS, and
+ * continue on with terminating the association. If the fc-nvme host
+ * is present and receives the LS, it too can tear down.
+ */
+static void
+nvmet_fc_xmt_disconnect_assoc(struct nvmet_fc_tgt_assoc *assoc)
+{
+	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
+	struct fcnvme_ls_disconnect_assoc_rqst *discon_rqst;
+	struct fcnvme_ls_disconnect_assoc_acc *discon_acc;
+	struct nvmet_fc_ls_req_op *lsop;
+	struct nvmefc_ls_req *lsreq;
+	int ret;
+
+	/*
+	 * If ls_req is NULL or no hosthandle, it's an older lldd and no
+	 * message is normal. Otherwise, send unless the hostport has
+	 * already been invalidated by the lldd.
+	 */
+	if (!tgtport->ops->ls_req || !assoc->hostport ||
+	    assoc->hostport->invalid)
+		return;
+
+	lsop = kzalloc((sizeof(*lsop) +
+			sizeof(*discon_rqst) + sizeof(*discon_acc) +
+			tgtport->ops->lsrqst_priv_sz), GFP_KERNEL);
+	if (!lsop) {
+		dev_info(tgtport->dev,
+			"{%d:%d} send Disconnect Association failed: ENOMEM\n",
+			tgtport->fc_target_port.port_num, assoc->a_id);
+		return;
+	}
+
+	discon_rqst = (struct fcnvme_ls_disconnect_assoc_rqst *)&lsop[1];
+	discon_acc = (struct fcnvme_ls_disconnect_assoc_acc *)&discon_rqst[1];
+	lsreq = &lsop->ls_req;
+	if (tgtport->ops->lsrqst_priv_sz)
+		lsreq->private = (void *)&discon_acc[1];
+	else
+		lsreq->private = NULL;
+
+	lsop->tgtport = tgtport;
+	lsop->hosthandle = assoc->hostport->hosthandle;
+
+	nvmefc_fmt_lsreq_discon_assoc(lsreq, discon_rqst, discon_acc,
+				assoc->association_id);
+
+	ret = nvmet_fc_send_ls_req_async(tgtport, lsop,
+				nvmet_fc_disconnect_assoc_done);
+	if (ret) {
+		dev_info(tgtport->dev,
+			"{%d:%d} XMT Disconnect Association failed: %d\n",
+			tgtport->fc_target_port.port_num, assoc->a_id, ret);
+		kfree(lsop);
+	}
+}
+
+
 /* *********************** FC-NVME Port Management ************************ */
 
 
@@ -689,10 +888,14 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
 	struct nvmet_fc_defer_fcp_req *deferfcp, *tempptr;
 	unsigned long flags;
 	int i, writedataactive;
-	bool disconnect;
+	int disconnect;
 
 	disconnect = atomic_xchg(&queue->connected, 0);
 
+	/* if not connected, nothing to do */
+	if (!disconnect)
+		return;
+
 	spin_lock_irqsave(&queue->qlock, flags);
 	/* abort outstanding io's */
 	for (i = 0; i < queue->sqsize; fod++, i++) {
@@ -921,6 +1124,7 @@ nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
 	kref_init(&assoc->ref);
 	INIT_WORK(&assoc->del_work, nvmet_fc_delete_assoc);
 	atomic_set(&assoc->del_work_active, 0);
+	atomic_set(&assoc->terminating, 0);
 
 	while (needrandom) {
 		get_random_bytes(&ran, sizeof(ran) - BYTES_FOR_QID);
@@ -957,13 +1161,24 @@ nvmet_fc_target_assoc_free(struct kref *ref)
 	struct nvmet_fc_tgt_assoc *assoc =
 		container_of(ref, struct nvmet_fc_tgt_assoc, ref);
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
+	struct nvmet_fc_ls_iod	*oldls;
 	unsigned long flags;
 
+	/* Send Disconnect now that all i/o has completed */
+	nvmet_fc_xmt_disconnect_assoc(assoc);
+
 	nvmet_fc_free_hostport(assoc->hostport);
 	spin_lock_irqsave(&tgtport->lock, flags);
 	list_del(&assoc->a_list);
+	oldls = assoc->rcv_disconn;
 	spin_unlock_irqrestore(&tgtport->lock, flags);
+	/* if pending Rcv Disconnect Association LS, send rsp now */
+	if (oldls)
+		nvmet_fc_xmt_ls_rsp(tgtport, oldls);
 	ida_simple_remove(&tgtport->assoc_cnt, assoc->a_id);
+	dev_info(tgtport->dev,
+		"{%d:%d} Association freed\n",
+		tgtport->fc_target_port.port_num, assoc->a_id);
 	kfree(assoc);
 	nvmet_fc_tgtport_put(tgtport);
 }
@@ -986,7 +1201,13 @@ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
 	struct nvmet_fc_tgt_queue *queue;
 	unsigned long flags;
-	int i;
+	int i, terminating;
+
+	terminating = atomic_xchg(&assoc->terminating, 1);
+
+	/* if already terminating, do nothing */
+	if (terminating)
+		return;
 
 	spin_lock_irqsave(&tgtport->lock, flags);
 	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
@@ -1002,6 +1223,10 @@ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
 	}
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 
+	dev_info(tgtport->dev,
+		"{%d:%d} Association deleted\n",
+		tgtport->fc_target_port.port_num, assoc->a_id);
+
 	nvmet_fc_tgt_a_put(assoc);
 }
 
@@ -1171,6 +1396,7 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 	newrec->ops = template;
 	spin_lock_init(&newrec->lock);
 	INIT_LIST_HEAD(&newrec->ls_rcv_list);
+	INIT_LIST_HEAD(&newrec->ls_req_list);
 	INIT_LIST_HEAD(&newrec->ls_busylist);
 	INIT_LIST_HEAD(&newrec->assoc_list);
 	INIT_LIST_HEAD(&newrec->host_list);
@@ -1407,6 +1633,13 @@ nvmet_fc_unregister_targetport(struct nvmet_fc_target_port *target_port)
 	/* terminate any outstanding associations */
 	__nvmet_fc_free_assocs(tgtport);
 
+	/*
+	 * should terminate LS's as well. However, LS's will be generated
+	 * at the tail end of association termination, so they likely don't
+	 * exist yet. And even if they did, it's worthwhile to just let
+	 * them finish and targetport ref counting will clean things up.
+	 */
+
 	nvmet_fc_tgtport_put(tgtport);
 
 	return 0;
@@ -1414,7 +1647,7 @@ nvmet_fc_unregister_targetport(struct nvmet_fc_target_port *target_port)
 EXPORT_SYMBOL_GPL(nvmet_fc_unregister_targetport);
 
 
-/* *********************** FC-NVME LS Handling **************************** */
+/* ********************** FC-NVME LS RCV Handling ************************* */
 
 
 static void
@@ -1481,6 +1714,10 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 	atomic_set(&queue->connected, 1);
 	queue->sqhd = 0;	/* best place to init value */
 
+	dev_info(tgtport->dev,
+		"{%d:%d} Association created\n",
+		tgtport->fc_target_port.port_num, iod->assoc->a_id);
+
 	/* format a response */
 
 	iod->lsrsp->rsplen = sizeof(*acc);
@@ -1588,7 +1825,11 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
 				be16_to_cpu(rqst->connect_cmd.qid)));
 }
 
-static void
+/*
+ * Returns true if the LS response is to be transmit
+ * Returns false if the LS response is to be delayed
+ */
+static int
 nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_ls_iod *iod)
 {
@@ -1597,13 +1838,15 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 	struct fcnvme_ls_disconnect_assoc_acc *acc =
 						&iod->rspbuf->rsp_dis_assoc;
 	struct nvmet_fc_tgt_assoc *assoc;
+	struct nvmet_fc_ls_iod *oldls = NULL;
+	unsigned long flags;
 	int ret = 0;
 
 	memset(acc, 0, sizeof(*acc));
 
 	ret = nvmefc_vldt_lsreq_discon_assoc(iod->rqstdatalen, rqst);
 	if (!ret) {
-		/* match an active association */
+		/* match an active association - takes an assoc ref if !NULL */
 		assoc = nvmet_fc_find_target_assoc(tgtport,
 				be64_to_cpu(rqst->associd.association_id));
 		iod->assoc = assoc;
@@ -1621,7 +1864,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 					FCNVME_RJT_RC_INV_ASSOC :
 					FCNVME_RJT_RC_LOGIC,
 				FCNVME_RJT_EXP_NONE, 0);
-		return;
+		return true;
 	}
 
 	/* format a response */
@@ -1634,9 +1877,40 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
 			FCNVME_LS_DISCONNECT_ASSOC);
 
 	/* release get taken in nvmet_fc_find_target_assoc */
-	nvmet_fc_tgt_a_put(iod->assoc);
+	nvmet_fc_tgt_a_put(assoc);
+
+	/*
+	 * The rules for LS response says the response cannot
+	 * go back until ABTS's have been sent for all outstanding
+	 * I/O and a Disconnect Association LS has been sent.
+	 * So... save off the Disconnect LS to send the response
+	 * later. If there was a prior LS already saved, replace
+	 * it with the newer one and send a can't perform reject
+	 * on the older one.
+	 */
+	spin_lock_irqsave(&tgtport->lock, flags);
+	oldls = assoc->rcv_disconn;
+	assoc->rcv_disconn = iod;
+	spin_unlock_irqrestore(&tgtport->lock, flags);
 
-	nvmet_fc_delete_target_assoc(iod->assoc);
+	nvmet_fc_delete_target_assoc(assoc);
+
+	if (oldls) {
+		dev_info(tgtport->dev,
+			"{%d:%d} Multiple Disconnect Association LS's "
+			"received\n",
+			tgtport->fc_target_port.port_num, assoc->a_id);
+		/* overwrite good response with bogus failure */
+		oldls->lsrsp->rsplen = nvme_fc_format_rjt(oldls->rspbuf,
+						sizeof(*iod->rspbuf),
+						/* ok to use rqst, LS is same */
+						rqst->w0.ls_cmd,
+						FCNVME_RJT_RC_UNAB,
+						FCNVME_RJT_EXP_NONE, 0);
+		nvmet_fc_xmt_ls_rsp(tgtport, oldls);
+	}
+
+	return false;
 }
 
 
@@ -1681,6 +1955,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_ls_iod *iod)
 {
 	struct fcnvme_ls_rqst_w0 *w0 = &iod->rqstbuf->rq_cr_assoc.w0;
+	bool sendrsp = true;
 
 	iod->lsrsp->nvme_fc_private = iod;
 	iod->lsrsp->rspbuf = iod->rspbuf;
@@ -1707,7 +1982,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 		break;
 	case FCNVME_LS_DISCONNECT_ASSOC:
 		/* Terminate a Queue/Connection or the Association */
-		nvmet_fc_ls_disconnect(tgtport, iod);
+		sendrsp = nvmet_fc_ls_disconnect(tgtport, iod);
 		break;
 	default:
 		iod->lsrsp->rsplen = nvme_fc_format_rjt(iod->rspbuf,
@@ -1715,7 +1990,8 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
 				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
 	}
 
-	nvmet_fc_xmt_ls_rsp(tgtport, iod);
+	if (sendrsp)
+		nvmet_fc_xmt_ls_rsp(tgtport, iod);
 }
 
 /*
-- 
2.13.7



* [PATCH 18/29] nvme-fcloop: refactor to enable target to host LS
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (16 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 17/29] nvmet-fc: Add Disconnect Association Xmt support James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:06   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 19/29] nvme-fcloop: add target to host LS request support James Smart
                   ` (12 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Currently nvme-fcloop only sends LS's from host to target.
Slightly rework data structures and routine names to reflect this
path. This allows a straightforward conversion to be reused for LS's
sent from target to host.

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fcloop.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 6533f4196005..5293069e2769 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -226,7 +226,13 @@ struct fcloop_nport {
 	u32 port_id;
 };
 
+enum {
+	H2T	= 0,
+	T2H	= 1,
+};
+
 struct fcloop_lsreq {
+	int				lsdir;	/* H2T or T2H */
 	struct nvmefc_ls_req		*lsreq;
 	struct nvmefc_ls_rsp		ls_rsp;
 	int				status;
@@ -323,7 +329,7 @@ fcloop_rport_lsrqst_work(struct work_struct *work)
 }
 
 static int
-fcloop_ls_req(struct nvme_fc_local_port *localport,
+fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
 			struct nvme_fc_remote_port *remoteport,
 			struct nvmefc_ls_req *lsreq)
 {
@@ -331,6 +337,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
 	struct fcloop_rport *rport = remoteport->private;
 	int ret = 0;
 
+	tls_req->lsdir = H2T;
 	tls_req->lsreq = lsreq;
 	INIT_LIST_HEAD(&tls_req->ls_list);
 
@@ -351,7 +358,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
 }
 
 static int
-fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
+fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
 			struct nvmefc_ls_rsp *lsrsp)
 {
 	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
@@ -762,7 +769,7 @@ fcloop_fcp_req_release(struct nvmet_fc_target_port *tgtport,
 }
 
 static void
-fcloop_ls_abort(struct nvme_fc_local_port *localport,
+fcloop_h2t_ls_abort(struct nvme_fc_local_port *localport,
 			struct nvme_fc_remote_port *remoteport,
 				struct nvmefc_ls_req *lsreq)
 {
@@ -880,9 +887,9 @@ static struct nvme_fc_port_template fctemplate = {
 	.remoteport_delete	= fcloop_remoteport_delete,
 	.create_queue		= fcloop_create_queue,
 	.delete_queue		= fcloop_delete_queue,
-	.ls_req			= fcloop_ls_req,
+	.ls_req			= fcloop_h2t_ls_req,
 	.fcp_io			= fcloop_fcp_req,
-	.ls_abort		= fcloop_ls_abort,
+	.ls_abort		= fcloop_h2t_ls_abort,
 	.fcp_abort		= fcloop_fcp_abort,
 	.max_hw_queues		= FCLOOP_HW_QUEUES,
 	.max_sgl_segments	= FCLOOP_SGL_SEGS,
@@ -897,7 +904,7 @@ static struct nvme_fc_port_template fctemplate = {
 
 static struct nvmet_fc_target_template tgttemplate = {
 	.targetport_delete	= fcloop_targetport_delete,
-	.xmt_ls_rsp		= fcloop_xmt_ls_rsp,
+	.xmt_ls_rsp		= fcloop_h2t_xmt_ls_rsp,
 	.fcp_op			= fcloop_fcp_op,
 	.fcp_abort		= fcloop_tgt_fcp_abort,
 	.fcp_req_release	= fcloop_fcp_req_release,
-- 
2.13.7



* [PATCH 19/29] nvme-fcloop: add target to host LS request support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (17 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 18/29] nvme-fcloop: refactor to enable target to host LS James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:07   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 20/29] lpfc: Refactor lpfc nvme headers James Smart
                   ` (11 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, martin.petersen

Add support for performing LS requests from target to host.
This includes sending the request from the targetport, reception
by the host, and the host sending the LS response.
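
For orientation, the resulting target-to-host LS round trip through
fcloop looks roughly like this (a simplified summary; the actual
functions are in the hunks below):

/*
 * nvmet-fc calls the target template ->ls_req, which fcloop forwards
 * to the host side; the host answers through the new host template
 * ->xmt_ls_rsp, and fcloop completes the original request from a
 * work item:
 *
 *   nvmet-fc  ->ls_req()           ->  fcloop_t2h_ls_req()
 *   fcloop    nvme_fc_rcv_ls_req() ->  host transport handles the LS
 *   nvme-fc   ->xmt_ls_rsp()       ->  fcloop_t2h_xmt_ls_rsp()
 *   fcloop    copies the response, then runs lsreq->done() from the
 *             tport ls_work work item
 */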

Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/nvme/target/fcloop.c | 131 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 118 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index 5293069e2769..3610b0bd12da 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -208,10 +208,13 @@ struct fcloop_rport {
 };
 
 struct fcloop_tport {
-	struct nvmet_fc_target_port *targetport;
-	struct nvme_fc_remote_port *remoteport;
-	struct fcloop_nport *nport;
-	struct fcloop_lport *lport;
+	struct nvmet_fc_target_port	*targetport;
+	struct nvme_fc_remote_port	*remoteport;
+	struct fcloop_nport		*nport;
+	struct fcloop_lport		*lport;
+	spinlock_t			lock;
+	struct list_head		ls_list;
+	struct work_struct		ls_work;
 };
 
 struct fcloop_nport {
@@ -226,13 +229,7 @@ struct fcloop_nport {
 	u32 port_id;
 };
 
-enum {
-	H2T	= 0,
-	T2H	= 1,
-};
-
 struct fcloop_lsreq {
-	int				lsdir;	/* H2T or T2H */
 	struct nvmefc_ls_req		*lsreq;
 	struct nvmefc_ls_rsp		ls_rsp;
 	int				status;
@@ -337,7 +334,6 @@ fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
 	struct fcloop_rport *rport = remoteport->private;
 	int ret = 0;
 
-	tls_req->lsdir = H2T;
 	tls_req->lsreq = lsreq;
 	INIT_LIST_HEAD(&tls_req->ls_list);
 
@@ -351,8 +347,9 @@ fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
 	}
 
 	tls_req->status = 0;
-	ret = nvmet_fc_rcv_ls_req(rport->targetport, NULL, &tls_req->ls_rsp,
-				 lsreq->rqstaddr, lsreq->rqstlen);
+	ret = nvmet_fc_rcv_ls_req(rport->targetport, rport,
+				  &tls_req->ls_rsp,
+				  lsreq->rqstaddr, lsreq->rqstlen);
 
 	return ret;
 }
@@ -384,6 +381,99 @@ fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
 	return 0;
 }
 
+static void
+fcloop_tport_lsrqst_work(struct work_struct *work)
+{
+	struct fcloop_tport *tport =
+		container_of(work, struct fcloop_tport, ls_work);
+	struct fcloop_lsreq *tls_req;
+
+	spin_lock(&tport->lock);
+	for (;;) {
+		tls_req = list_first_entry_or_null(&tport->ls_list,
+				struct fcloop_lsreq, ls_list);
+		if (!tls_req)
+			break;
+
+		list_del(&tls_req->ls_list);
+		spin_unlock(&tport->lock);
+
+		tls_req->lsreq->done(tls_req->lsreq, tls_req->status);
+		/*
+		 * callee may free memory containing tls_req.
+		 * do not reference lsreq after this.
+		 */
+
+		spin_lock(&tport->lock);
+	}
+	spin_unlock(&tport->lock);
+}
+
+static int
+fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
+			struct nvmefc_ls_req *lsreq)
+{
+	struct fcloop_lsreq *tls_req = lsreq->private;
+	struct fcloop_tport *tport = targetport->private;
+	int ret = 0;
+
+	/*
+	 * hosthandle should be the dst.rport value.
+	 * hosthandle ignored as fcloop currently is
+	 * 1:1 tgtport vs remoteport
+	 */
+	tls_req->lsreq = lsreq;
+	INIT_LIST_HEAD(&tls_req->ls_list);
+
+	if (!tport->remoteport) {
+		tls_req->status = -ECONNREFUSED;
+		spin_lock(&tport->lock);
+		list_add_tail(&tport->ls_list, &tls_req->ls_list);
+		spin_unlock(&tport->lock);
+		schedule_work(&tport->ls_work);
+		return ret;
+	}
+
+	tls_req->status = 0;
+	ret = nvme_fc_rcv_ls_req(tport->remoteport, &tls_req->ls_rsp,
+				 lsreq->rqstaddr, lsreq->rqstlen);
+
+	return ret;
+}
+
+static int
+fcloop_t2h_xmt_ls_rsp(struct nvme_fc_local_port *localport,
+			struct nvme_fc_remote_port *remoteport,
+			struct nvmefc_ls_rsp *lsrsp)
+{
+	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
+	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
+	struct fcloop_rport *rport = remoteport->private;
+	struct nvmet_fc_target_port *targetport = rport->targetport;
+	struct fcloop_tport *tport;
+
+	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
+		((lsreq->rsplen < lsrsp->rsplen) ?
+				lsreq->rsplen : lsrsp->rsplen));
+	lsrsp->done(lsrsp);
+
+	if (targetport) {
+		tport = targetport->private;
+		spin_lock(&tport->lock);
+		list_add_tail(&tport->ls_list, &tls_req->ls_list);
+		spin_unlock(&tport->lock);
+		schedule_work(&tport->ls_work);
+	}
+
+	return 0;
+}
+
+static void
+fcloop_t2h_host_release(void *hosthandle)
+{
+	/* host handle ignored for now */
+}
+
 /*
  * Simulate reception of RSCN and converting it to a initiator transport
  * call to rescan a remote port.
@@ -776,6 +866,12 @@ fcloop_h2t_ls_abort(struct nvme_fc_local_port *localport,
 }
 
 static void
+fcloop_t2h_ls_abort(struct nvmet_fc_target_port *targetport,
+			void *hosthandle, struct nvmefc_ls_req *lsreq)
+{
+}
+
+static void
 fcloop_fcp_abort(struct nvme_fc_local_port *localport,
 			struct nvme_fc_remote_port *remoteport,
 			void *hw_queue_handle,
@@ -874,6 +970,7 @@ fcloop_targetport_delete(struct nvmet_fc_target_port *targetport)
 {
 	struct fcloop_tport *tport = targetport->private;
 
+	flush_work(&tport->ls_work);
 	fcloop_nport_put(tport->nport);
 }
 
@@ -891,6 +988,7 @@ static struct nvme_fc_port_template fctemplate = {
 	.fcp_io			= fcloop_fcp_req,
 	.ls_abort		= fcloop_h2t_ls_abort,
 	.fcp_abort		= fcloop_fcp_abort,
+	.xmt_ls_rsp		= fcloop_t2h_xmt_ls_rsp,
 	.max_hw_queues		= FCLOOP_HW_QUEUES,
 	.max_sgl_segments	= FCLOOP_SGL_SEGS,
 	.max_dif_sgl_segments	= FCLOOP_SGL_SEGS,
@@ -909,6 +1007,9 @@ static struct nvmet_fc_target_template tgttemplate = {
 	.fcp_abort		= fcloop_tgt_fcp_abort,
 	.fcp_req_release	= fcloop_fcp_req_release,
 	.discovery_event	= fcloop_tgt_discovery_evt,
+	.ls_req			= fcloop_t2h_ls_req,
+	.ls_abort		= fcloop_t2h_ls_abort,
+	.host_release		= fcloop_t2h_host_release,
 	.max_hw_queues		= FCLOOP_HW_QUEUES,
 	.max_sgl_segments	= FCLOOP_SGL_SEGS,
 	.max_dif_sgl_segments	= FCLOOP_SGL_SEGS,
@@ -917,6 +1018,7 @@ static struct nvmet_fc_target_template tgttemplate = {
 	.target_features	= 0,
 	/* sizes of additional private data for data structures */
 	.target_priv_sz		= sizeof(struct fcloop_tport),
+	.lsrqst_priv_sz		= sizeof(struct fcloop_lsreq),
 };
 
 static ssize_t
@@ -1266,6 +1368,9 @@ fcloop_create_target_port(struct device *dev, struct device_attribute *attr,
 	tport->nport = nport;
 	tport->lport = nport->lport;
 	nport->tport = tport;
+	spin_lock_init(&tport->lock);
+	INIT_WORK(&tport->ls_work, fcloop_tport_lsrqst_work);
+	INIT_LIST_HEAD(&tport->ls_list);
 
 	return count;
 }
-- 
2.13.7



* [PATCH 20/29] lpfc: Refactor lpfc nvme headers
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (18 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 19/29] nvme-fcloop: add target to host LS request support James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:18   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 21/29] lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx James Smart
                   ` (10 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

A lot of files in lpfc include nvme headers, building up dependencies
that force a file to be touched when headers change even though nothing
else in the file changed. It would be better to localize the nvme
headers.

There is also no need for separate nvme (initiator) and nvmet (target)
header files.

Refactor the inclusion of nvme headers so that all nvme items are
included by lpfc_nvme.h.

Merge lpfc_nvmet.h into lpfc_nvme.h so that there is a single header used
by both the nvme and nvmet sides. This prepares for structure sharing
between the two roles, and for adding shared function prototypes for
upcoming shared routines.
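
The practical effect on an individual lpfc source file is sketched
below (illustrative only; the actual changes are in the hunks that
follow):

/*
 * Before: each .c file pulled in the kernel nvme headers and, on the
 * target side, lpfc_nvmet.h as well:
 *
 *	#include <linux/nvme.h>
 *	#include <linux/nvme-fc-driver.h>
 *	#include <linux/nvme-fc.h>
 *	#include "lpfc_nvme.h"
 *	#include "lpfc_nvmet.h"
 */

/* After: a single driver header provides everything for both roles */
#include "lpfc_nvme.h"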

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_attr.c      |   3 -
 drivers/scsi/lpfc/lpfc_ct.c        |   1 -
 drivers/scsi/lpfc/lpfc_debugfs.c   |   3 -
 drivers/scsi/lpfc/lpfc_hbadisc.c   |   2 -
 drivers/scsi/lpfc/lpfc_init.c      |   3 -
 drivers/scsi/lpfc/lpfc_mem.c       |   4 -
 drivers/scsi/lpfc/lpfc_nportdisc.c |   2 -
 drivers/scsi/lpfc/lpfc_nvme.c      |   3 -
 drivers/scsi/lpfc/lpfc_nvme.h      | 147 ++++++++++++++++++++++++++++++++++
 drivers/scsi/lpfc/lpfc_nvmet.c     |   5 --
 drivers/scsi/lpfc/lpfc_nvmet.h     | 158 -------------------------------------
 drivers/scsi/lpfc/lpfc_sli.c       |   3 -
 12 files changed, 147 insertions(+), 187 deletions(-)
 delete mode 100644 drivers/scsi/lpfc/lpfc_nvmet.h

diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 4ff82b36a37a..742098b60fec 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -37,8 +37,6 @@
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme-fc-driver.h>
-
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
 #include "lpfc_sli.h"
@@ -48,7 +46,6 @@
 #include "lpfc.h"
 #include "lpfc_scsi.h"
 #include "lpfc_nvme.h"
-#include "lpfc_nvmet.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_version.h"
 #include "lpfc_compat.h"
diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
index 99c9bb249758..a791f2a5a032 100644
--- a/drivers/scsi/lpfc/lpfc_ct.c
+++ b/drivers/scsi/lpfc/lpfc_ct.c
@@ -44,7 +44,6 @@
 #include "lpfc_disc.h"
 #include "lpfc.h"
 #include "lpfc_scsi.h"
-#include "lpfc_nvme.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
 #include "lpfc_version.h"
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index 2e6a68d9ea4f..fe3585258d31 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -39,8 +39,6 @@
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme-fc-driver.h>
-
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
 #include "lpfc_sli.h"
@@ -50,7 +48,6 @@
 #include "lpfc.h"
 #include "lpfc_scsi.h"
 #include "lpfc_nvme.h"
-#include "lpfc_nvmet.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
 #include "lpfc_vport.h"
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index 85ada3deb47d..05d51945defd 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -35,8 +35,6 @@
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme-fc-driver.h>
-
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
 #include "lpfc_nl.h"
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 6298b1729098..2115ea2dc945 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -50,8 +50,6 @@
 #include <scsi/scsi_tcq.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme-fc-driver.h>
-
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
 #include "lpfc_sli.h"
@@ -61,7 +59,6 @@
 #include "lpfc.h"
 #include "lpfc_scsi.h"
 #include "lpfc_nvme.h"
-#include "lpfc_nvmet.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
 #include "lpfc_vport.h"
diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c
index 7082279e4c01..726f6619230f 100644
--- a/drivers/scsi/lpfc/lpfc_mem.c
+++ b/drivers/scsi/lpfc/lpfc_mem.c
@@ -31,8 +31,6 @@
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme-fc-driver.h>
-
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
 #include "lpfc_sli.h"
@@ -41,8 +39,6 @@
 #include "lpfc_disc.h"
 #include "lpfc.h"
 #include "lpfc_scsi.h"
-#include "lpfc_nvme.h"
-#include "lpfc_nvmet.h"
 #include "lpfc_crtn.h"
 #include "lpfc_logmsg.h"
 
diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
index ae4359013846..1324e34f2a46 100644
--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -32,8 +32,6 @@
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme-fc-driver.h>
-
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
 #include "lpfc_sli.h"
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index f6c8963c915d..21f2282b26ba 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -36,9 +36,6 @@
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme.h>
-#include <linux/nvme-fc-driver.h>
-#include <linux/nvme-fc.h>
 #include "lpfc_version.h"
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 593c48ff634e..4c1e7e68d4b6 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -21,6 +21,11 @@
  * included with this package.                                     *
  ********************************************************************/
 
+/* Required inclusions from distro. */
+#include <linux/nvme.h>
+#include <linux/nvme-fc-driver.h>
+#include <linux/nvme-fc.h>
+
 #define LPFC_NVME_DEFAULT_SEGS		(64 + 1)	/* 256K IOs */
 
 #define LPFC_NVME_ERSP_LEN		0x20
@@ -74,3 +79,145 @@ struct lpfc_nvme_rport {
 struct lpfc_nvme_fcpreq_priv {
 	struct lpfc_io_buf *nvme_buf;
 };
+
+
+#define LPFC_NVMET_DEFAULT_SEGS		(64 + 1)	/* 256K IOs */
+#define LPFC_NVMET_RQE_MIN_POST		128
+#define LPFC_NVMET_RQE_DEF_POST		512
+#define LPFC_NVMET_RQE_DEF_COUNT	2048
+#define LPFC_NVMET_SUCCESS_LEN		12
+
+#define LPFC_NVMET_MRQ_AUTO		0
+#define LPFC_NVMET_MRQ_MAX		16
+
+#define LPFC_NVMET_WAIT_TMO		(5 * MSEC_PER_SEC)
+
+/* Used for NVME Target */
+struct lpfc_nvmet_tgtport {
+	struct lpfc_hba *phba;
+	struct completion *tport_unreg_cmp;
+
+	/* Stats counters - lpfc_nvmet_unsol_ls_buffer */
+	atomic_t rcv_ls_req_in;
+	atomic_t rcv_ls_req_out;
+	atomic_t rcv_ls_req_drop;
+	atomic_t xmt_ls_abort;
+	atomic_t xmt_ls_abort_cmpl;
+
+	/* Stats counters - lpfc_nvmet_xmt_ls_rsp */
+	atomic_t xmt_ls_rsp;
+	atomic_t xmt_ls_drop;
+
+	/* Stats counters - lpfc_nvmet_xmt_ls_rsp_cmp */
+	atomic_t xmt_ls_rsp_error;
+	atomic_t xmt_ls_rsp_aborted;
+	atomic_t xmt_ls_rsp_xb_set;
+	atomic_t xmt_ls_rsp_cmpl;
+
+	/* Stats counters - lpfc_nvmet_unsol_fcp_buffer */
+	atomic_t rcv_fcp_cmd_in;
+	atomic_t rcv_fcp_cmd_out;
+	atomic_t rcv_fcp_cmd_drop;
+	atomic_t rcv_fcp_cmd_defer;
+	atomic_t xmt_fcp_release;
+
+	/* Stats counters - lpfc_nvmet_xmt_fcp_op */
+	atomic_t xmt_fcp_drop;
+	atomic_t xmt_fcp_read_rsp;
+	atomic_t xmt_fcp_read;
+	atomic_t xmt_fcp_write;
+	atomic_t xmt_fcp_rsp;
+
+	/* Stats counters - lpfc_nvmet_xmt_fcp_op_cmp */
+	atomic_t xmt_fcp_rsp_xb_set;
+	atomic_t xmt_fcp_rsp_cmpl;
+	atomic_t xmt_fcp_rsp_error;
+	atomic_t xmt_fcp_rsp_aborted;
+	atomic_t xmt_fcp_rsp_drop;
+
+	/* Stats counters - lpfc_nvmet_xmt_fcp_abort */
+	atomic_t xmt_fcp_xri_abort_cqe;
+	atomic_t xmt_fcp_abort;
+	atomic_t xmt_fcp_abort_cmpl;
+	atomic_t xmt_abort_sol;
+	atomic_t xmt_abort_unsol;
+	atomic_t xmt_abort_rsp;
+	atomic_t xmt_abort_rsp_error;
+
+	/* Stats counters - defer IO */
+	atomic_t defer_ctx;
+	atomic_t defer_fod;
+	atomic_t defer_wqfull;
+};
+
+struct lpfc_nvmet_ctx_info {
+	struct list_head nvmet_ctx_list;
+	spinlock_t	nvmet_ctx_list_lock; /* lock per CPU */
+	struct lpfc_nvmet_ctx_info *nvmet_ctx_next_cpu;
+	struct lpfc_nvmet_ctx_info *nvmet_ctx_start_cpu;
+	uint16_t	nvmet_ctx_list_cnt;
+	char pad[16];  /* pad to a cache-line */
+};
+
+/* This retrieves the context info associated with the specified cpu / mrq */
+#define lpfc_get_ctx_list(phba, cpu, mrq)  \
+	(phba->sli4_hba.nvmet_ctx_info + ((cpu * phba->cfg_nvmet_mrq) + mrq))
+
+struct lpfc_nvmet_rcv_ctx {
+	union {
+		struct nvmefc_ls_rsp ls_rsp;
+		struct nvmefc_tgt_fcp_req fcp_req;
+	} ctx;
+	struct list_head list;
+	struct lpfc_hba *phba;
+	struct lpfc_iocbq *wqeq;
+	struct lpfc_iocbq *abort_wqeq;
+	spinlock_t ctxlock; /* protect flag access */
+	uint32_t sid;
+	uint32_t offset;
+	uint16_t oxid;
+	uint16_t size;
+	uint16_t entry_cnt;
+	uint16_t cpu;
+	uint16_t idx;
+	uint16_t state;
+	/* States */
+#define LPFC_NVMET_STE_LS_RCV		1
+#define LPFC_NVMET_STE_LS_ABORT		2
+#define LPFC_NVMET_STE_LS_RSP		3
+#define LPFC_NVMET_STE_RCV		4
+#define LPFC_NVMET_STE_DATA		5
+#define LPFC_NVMET_STE_ABORT		6
+#define LPFC_NVMET_STE_DONE		7
+#define LPFC_NVMET_STE_FREE		0xff
+	uint16_t flag;
+#define LPFC_NVMET_IO_INP		0x1  /* IO is in progress on exchange */
+#define LPFC_NVMET_ABORT_OP		0x2  /* Abort WQE issued on exchange */
+#define LPFC_NVMET_XBUSY		0x4  /* XB bit set on IO cmpl */
+#define LPFC_NVMET_CTX_RLS		0x8  /* ctx free requested */
+#define LPFC_NVMET_ABTS_RCV		0x10  /* ABTS received on exchange */
+#define LPFC_NVMET_CTX_REUSE_WQ		0x20  /* ctx reused via WQ */
+#define LPFC_NVMET_DEFER_WQFULL		0x40  /* Waiting on a free WQE */
+#define LPFC_NVMET_TNOTIFY		0x80  /* notify transport of abts */
+	struct rqb_dmabuf *rqb_buffer;
+	struct lpfc_nvmet_ctxbuf *ctxbuf;
+	struct lpfc_sli4_hdw_queue *hdwq;
+
+#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+	uint64_t ts_isr_cmd;
+	uint64_t ts_cmd_nvme;
+	uint64_t ts_nvme_data;
+	uint64_t ts_data_wqput;
+	uint64_t ts_isr_data;
+	uint64_t ts_data_nvme;
+	uint64_t ts_nvme_status;
+	uint64_t ts_status_wqput;
+	uint64_t ts_isr_status;
+	uint64_t ts_status_nvme;
+#endif
+};
+
+
+/* routines found in lpfc_nvme.c */
+
+/* routines found in lpfc_nvmet.c */
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 47b983eddbb2..8d991466970f 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -36,10 +36,6 @@
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fs.h>
 
-#include <linux/nvme.h>
-#include <linux/nvme-fc-driver.h>
-#include <linux/nvme-fc.h>
-
 #include "lpfc_version.h"
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
@@ -50,7 +46,6 @@
 #include "lpfc.h"
 #include "lpfc_scsi.h"
 #include "lpfc_nvme.h"
-#include "lpfc_nvmet.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
 #include "lpfc_vport.h"
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
deleted file mode 100644
index f0196f3ef90d..000000000000
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ /dev/null
@@ -1,158 +0,0 @@
-/*******************************************************************
- * This file is part of the Emulex Linux Device Driver for         *
- * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
- * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
- * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
- * EMULEX and SLI are trademarks of Emulex.                        *
- * www.broadcom.com                                                *
- * Portions Copyright (C) 2004-2005 Christoph Hellwig              *
- *                                                                 *
- * This program is free software; you can redistribute it and/or   *
- * modify it under the terms of version 2 of the GNU General       *
- * Public License as published by the Free Software Foundation.    *
- * This program is distributed in the hope that it will be useful. *
- * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
- * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
- * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
- * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
- * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
- * more details, a copy of which can be found in the file COPYING  *
- * included with this package.                                     *
- ********************************************************************/
-
-#define LPFC_NVMET_DEFAULT_SEGS		(64 + 1)	/* 256K IOs */
-#define LPFC_NVMET_RQE_MIN_POST		128
-#define LPFC_NVMET_RQE_DEF_POST		512
-#define LPFC_NVMET_RQE_DEF_COUNT	2048
-#define LPFC_NVMET_SUCCESS_LEN		12
-
-#define LPFC_NVMET_MRQ_AUTO		0
-#define LPFC_NVMET_MRQ_MAX		16
-
-#define LPFC_NVMET_WAIT_TMO		(5 * MSEC_PER_SEC)
-
-/* Used for NVME Target */
-struct lpfc_nvmet_tgtport {
-	struct lpfc_hba *phba;
-	struct completion *tport_unreg_cmp;
-
-	/* Stats counters - lpfc_nvmet_unsol_ls_buffer */
-	atomic_t rcv_ls_req_in;
-	atomic_t rcv_ls_req_out;
-	atomic_t rcv_ls_req_drop;
-	atomic_t xmt_ls_abort;
-	atomic_t xmt_ls_abort_cmpl;
-
-	/* Stats counters - lpfc_nvmet_xmt_ls_rsp */
-	atomic_t xmt_ls_rsp;
-	atomic_t xmt_ls_drop;
-
-	/* Stats counters - lpfc_nvmet_xmt_ls_rsp_cmp */
-	atomic_t xmt_ls_rsp_error;
-	atomic_t xmt_ls_rsp_aborted;
-	atomic_t xmt_ls_rsp_xb_set;
-	atomic_t xmt_ls_rsp_cmpl;
-
-	/* Stats counters - lpfc_nvmet_unsol_fcp_buffer */
-	atomic_t rcv_fcp_cmd_in;
-	atomic_t rcv_fcp_cmd_out;
-	atomic_t rcv_fcp_cmd_drop;
-	atomic_t rcv_fcp_cmd_defer;
-	atomic_t xmt_fcp_release;
-
-	/* Stats counters - lpfc_nvmet_xmt_fcp_op */
-	atomic_t xmt_fcp_drop;
-	atomic_t xmt_fcp_read_rsp;
-	atomic_t xmt_fcp_read;
-	atomic_t xmt_fcp_write;
-	atomic_t xmt_fcp_rsp;
-
-	/* Stats counters - lpfc_nvmet_xmt_fcp_op_cmp */
-	atomic_t xmt_fcp_rsp_xb_set;
-	atomic_t xmt_fcp_rsp_cmpl;
-	atomic_t xmt_fcp_rsp_error;
-	atomic_t xmt_fcp_rsp_aborted;
-	atomic_t xmt_fcp_rsp_drop;
-
-	/* Stats counters - lpfc_nvmet_xmt_fcp_abort */
-	atomic_t xmt_fcp_xri_abort_cqe;
-	atomic_t xmt_fcp_abort;
-	atomic_t xmt_fcp_abort_cmpl;
-	atomic_t xmt_abort_sol;
-	atomic_t xmt_abort_unsol;
-	atomic_t xmt_abort_rsp;
-	atomic_t xmt_abort_rsp_error;
-
-	/* Stats counters - defer IO */
-	atomic_t defer_ctx;
-	atomic_t defer_fod;
-	atomic_t defer_wqfull;
-};
-
-struct lpfc_nvmet_ctx_info {
-	struct list_head nvmet_ctx_list;
-	spinlock_t	nvmet_ctx_list_lock; /* lock per CPU */
-	struct lpfc_nvmet_ctx_info *nvmet_ctx_next_cpu;
-	struct lpfc_nvmet_ctx_info *nvmet_ctx_start_cpu;
-	uint16_t	nvmet_ctx_list_cnt;
-	char pad[16];  /* pad to a cache-line */
-};
-
-/* This retrieves the context info associated with the specified cpu / mrq */
-#define lpfc_get_ctx_list(phba, cpu, mrq)  \
-	(phba->sli4_hba.nvmet_ctx_info + ((cpu * phba->cfg_nvmet_mrq) + mrq))
-
-struct lpfc_nvmet_rcv_ctx {
-	union {
-		struct nvmefc_ls_rsp ls_rsp;
-		struct nvmefc_tgt_fcp_req fcp_req;
-	} ctx;
-	struct list_head list;
-	struct lpfc_hba *phba;
-	struct lpfc_iocbq *wqeq;
-	struct lpfc_iocbq *abort_wqeq;
-	spinlock_t ctxlock; /* protect flag access */
-	uint32_t sid;
-	uint32_t offset;
-	uint16_t oxid;
-	uint16_t size;
-	uint16_t entry_cnt;
-	uint16_t cpu;
-	uint16_t idx;
-	uint16_t state;
-	/* States */
-#define LPFC_NVMET_STE_LS_RCV		1
-#define LPFC_NVMET_STE_LS_ABORT		2
-#define LPFC_NVMET_STE_LS_RSP		3
-#define LPFC_NVMET_STE_RCV		4
-#define LPFC_NVMET_STE_DATA		5
-#define LPFC_NVMET_STE_ABORT		6
-#define LPFC_NVMET_STE_DONE		7
-#define LPFC_NVMET_STE_FREE		0xff
-	uint16_t flag;
-#define LPFC_NVMET_IO_INP		0x1  /* IO is in progress on exchange */
-#define LPFC_NVMET_ABORT_OP		0x2  /* Abort WQE issued on exchange */
-#define LPFC_NVMET_XBUSY		0x4  /* XB bit set on IO cmpl */
-#define LPFC_NVMET_CTX_RLS		0x8  /* ctx free requested */
-#define LPFC_NVMET_ABTS_RCV		0x10  /* ABTS received on exchange */
-#define LPFC_NVMET_CTX_REUSE_WQ		0x20  /* ctx reused via WQ */
-#define LPFC_NVMET_DEFER_WQFULL		0x40  /* Waiting on a free WQE */
-#define LPFC_NVMET_TNOTIFY		0x80  /* notify transport of abts */
-	struct rqb_dmabuf *rqb_buffer;
-	struct lpfc_nvmet_ctxbuf *ctxbuf;
-	struct lpfc_sli4_hdw_queue *hdwq;
-
-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
-	uint64_t ts_isr_cmd;
-	uint64_t ts_cmd_nvme;
-	uint64_t ts_nvme_data;
-	uint64_t ts_data_wqput;
-	uint64_t ts_isr_data;
-	uint64_t ts_data_nvme;
-	uint64_t ts_nvme_status;
-	uint64_t ts_status_wqput;
-	uint64_t ts_isr_status;
-	uint64_t ts_status_nvme;
-#endif
-};
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index c82b5792da98..a5f282bf0c38 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -39,8 +39,6 @@
 #include <asm/set_memory.h>
 #endif
 
-#include <linux/nvme-fc-driver.h>
-
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
 #include "lpfc_sli.h"
@@ -50,7 +48,6 @@
 #include "lpfc.h"
 #include "lpfc_scsi.h"
 #include "lpfc_nvme.h"
-#include "lpfc_nvmet.h"
 #include "lpfc_crtn.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_compat.h"
-- 
2.13.7



* [PATCH 21/29] lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (19 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 20/29] lpfc: Refactor lpfc nvme headers James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:19   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 22/29] lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions James Smart
                   ` (9 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

To add FC-NVME-2 support (actually FC-NVME (rev 1) with Amendment 1),
both the nvme (host) and nvmet (controller/target) sides will need to be
able to receive LS requests.  Currently, this support is in the nvmet side
only. To prepare for both sides supporting LS receive, rename
lpfc_nvmet_rcv_ctx to lpfc_async_xchg_ctx and commonize the definition.
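
For illustration only (the full change is in the diff below), the rename
shows up at callers of container_of() roughly as:

  /* before: nvmet-specific type; request embedded in the 'ctx' union */
  ctxp = container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.fcp_req);

  /* after: role-neutral type; request embedded in the 'hdlrctx' union */
  ctxp = container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);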

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc.h         |   2 +-
 drivers/scsi/lpfc/lpfc_crtn.h    |   1 -
 drivers/scsi/lpfc/lpfc_debugfs.c |   2 +-
 drivers/scsi/lpfc/lpfc_init.c    |   2 +-
 drivers/scsi/lpfc/lpfc_nvme.h    |   7 +--
 drivers/scsi/lpfc/lpfc_nvmet.c   | 109 ++++++++++++++++++++-------------------
 drivers/scsi/lpfc/lpfc_sli.c     |   2 +-
 7 files changed, 63 insertions(+), 62 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index 935f98804198..b1b41661462f 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -143,7 +143,7 @@ struct lpfc_dmabuf {
 
 struct lpfc_nvmet_ctxbuf {
 	struct list_head list;
-	struct lpfc_nvmet_rcv_ctx *context;
+	struct lpfc_async_xchg_ctx *context;
 	struct lpfc_iocbq *iocbq;
 	struct lpfc_sglq *sglq;
 	struct work_struct defer_work;
diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index ee353c84a097..9cd7767636d3 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -24,7 +24,6 @@ typedef int (*node_filter)(struct lpfc_nodelist *, void *);
 
 struct fc_rport;
 struct fc_frame_header;
-struct lpfc_nvmet_rcv_ctx;
 void lpfc_down_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_sli_read_link_ste(struct lpfc_hba *);
 void lpfc_dump_mem(struct lpfc_hba *, LPFC_MBOXQ_t *, uint16_t, uint16_t);
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index fe3585258d31..8d5e4b72c885 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -1032,7 +1032,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
 {
 	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_nvmet_tgtport *tgtp;
-	struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
+	struct lpfc_async_xchg_ctx *ctxp, *next_ctxp;
 	struct nvme_fc_local_port *localport;
 	struct lpfc_fc4_ctrl_stat *cstat;
 	struct lpfc_nvme_lport *lport;
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 2115ea2dc945..7bcd743dba4d 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -1038,7 +1038,7 @@ static int
 lpfc_hba_down_post_s4(struct lpfc_hba *phba)
 {
 	struct lpfc_io_buf *psb, *psb_next;
-	struct lpfc_nvmet_rcv_ctx *ctxp, *ctxp_next;
+	struct lpfc_async_xchg_ctx *ctxp, *ctxp_next;
 	struct lpfc_sli4_hdw_queue *qp;
 	LIST_HEAD(aborts);
 	LIST_HEAD(nvme_aborts);
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 4c1e7e68d4b6..25eebc362121 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -163,13 +163,14 @@ struct lpfc_nvmet_ctx_info {
 #define lpfc_get_ctx_list(phba, cpu, mrq)  \
 	(phba->sli4_hba.nvmet_ctx_info + ((cpu * phba->cfg_nvmet_mrq) + mrq))
 
-struct lpfc_nvmet_rcv_ctx {
+struct lpfc_async_xchg_ctx {
 	union {
-		struct nvmefc_ls_rsp ls_rsp;
 		struct nvmefc_tgt_fcp_req fcp_req;
-	} ctx;
+	} hdlrctx;
 	struct list_head list;
 	struct lpfc_hba *phba;
+	struct nvmefc_ls_req *ls_req;
+	struct nvmefc_ls_rsp ls_rsp;
 	struct lpfc_iocbq *wqeq;
 	struct lpfc_iocbq *abort_wqeq;
 	spinlock_t ctxlock; /* protect flag access */
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 8d991466970f..ded7f973cad4 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -52,22 +52,22 @@
 #include "lpfc_debugfs.h"
 
 static struct lpfc_iocbq *lpfc_nvmet_prep_ls_wqe(struct lpfc_hba *,
-						 struct lpfc_nvmet_rcv_ctx *,
+						 struct lpfc_async_xchg_ctx *,
 						 dma_addr_t rspbuf,
 						 uint16_t rspsize);
 static struct lpfc_iocbq *lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *,
-						  struct lpfc_nvmet_rcv_ctx *);
+						  struct lpfc_async_xchg_ctx *);
 static int lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *,
-					  struct lpfc_nvmet_rcv_ctx *,
+					  struct lpfc_async_xchg_ctx *,
 					  uint32_t, uint16_t);
 static int lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *,
-					    struct lpfc_nvmet_rcv_ctx *,
+					    struct lpfc_async_xchg_ctx *,
 					    uint32_t, uint16_t);
 static int lpfc_nvmet_unsol_ls_issue_abort(struct lpfc_hba *,
-					   struct lpfc_nvmet_rcv_ctx *,
+					   struct lpfc_async_xchg_ctx *,
 					   uint32_t, uint16_t);
 static void lpfc_nvmet_wqfull_flush(struct lpfc_hba *, struct lpfc_queue *,
-				    struct lpfc_nvmet_rcv_ctx *);
+				    struct lpfc_async_xchg_ctx *);
 static void lpfc_nvmet_fcp_rqst_defer_work(struct work_struct *);
 
 static void lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf);
@@ -216,10 +216,10 @@ lpfc_nvmet_cmd_template(void)
 }
 
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
-static struct lpfc_nvmet_rcv_ctx *
+static struct lpfc_async_xchg_ctx *
 lpfc_nvmet_get_ctx_for_xri(struct lpfc_hba *phba, u16 xri)
 {
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	unsigned long iflag;
 	bool found = false;
 
@@ -238,10 +238,10 @@ lpfc_nvmet_get_ctx_for_xri(struct lpfc_hba *phba, u16 xri)
 	return NULL;
 }
 
-static struct lpfc_nvmet_rcv_ctx *
+static struct lpfc_async_xchg_ctx *
 lpfc_nvmet_get_ctx_for_oxid(struct lpfc_hba *phba, u16 oxid, u32 sid)
 {
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	unsigned long iflag;
 	bool found = false;
 
@@ -262,7 +262,8 @@ lpfc_nvmet_get_ctx_for_oxid(struct lpfc_hba *phba, u16 oxid, u32 sid)
 #endif
 
 static void
-lpfc_nvmet_defer_release(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp)
+lpfc_nvmet_defer_release(struct lpfc_hba *phba,
+			struct lpfc_async_xchg_ctx *ctxp)
 {
 	lockdep_assert_held(&ctxp->ctxlock);
 
@@ -298,7 +299,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 {
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct nvmefc_ls_rsp *rsp;
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	uint32_t status, result;
 
 	status = bf_get(lpfc_wcqe_c_status, wcqe);
@@ -330,7 +331,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	}
 
 out:
-	rsp = &ctxp->ctx.ls_rsp;
+	rsp = &ctxp->ls_rsp;
 
 	lpfc_nvmeio_data(phba, "NVMET LS  CMPL: xri x%x stat x%x result x%x\n",
 			 ctxp->oxid, status, result);
@@ -364,7 +365,7 @@ void
 lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 {
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
-	struct lpfc_nvmet_rcv_ctx *ctxp = ctx_buf->context;
+	struct lpfc_async_xchg_ctx *ctxp = ctx_buf->context;
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct fc_frame_header *fc_hdr;
 	struct rqb_dmabuf *nvmebuf;
@@ -416,7 +417,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 		size = nvmebuf->bytes_recv;
 		sid = sli4_sid_from_fc_hdr(fc_hdr);
 
-		ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context;
+		ctxp = (struct lpfc_async_xchg_ctx *)ctx_buf->context;
 		ctxp->wqeq = NULL;
 		ctxp->offset = 0;
 		ctxp->phba = phba;
@@ -490,7 +491,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 static void
 lpfc_nvmet_ktime(struct lpfc_hba *phba,
-		 struct lpfc_nvmet_rcv_ctx *ctxp)
+		 struct lpfc_async_xchg_ctx *ctxp)
 {
 	uint64_t seg1, seg2, seg3, seg4, seg5;
 	uint64_t seg6, seg7, seg8, seg9, seg10;
@@ -699,7 +700,7 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 {
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct nvmefc_tgt_fcp_req *rsp;
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	uint32_t status, result, op, start_clean, logerr;
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	uint32_t id;
@@ -708,7 +709,7 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	ctxp = cmdwqe->context2;
 	ctxp->flag &= ~LPFC_NVMET_IO_INP;
 
-	rsp = &ctxp->ctx.fcp_req;
+	rsp = &ctxp->hdlrctx.fcp_req;
 	op = rsp->op;
 
 	status = bf_get(lpfc_wcqe_c_status, wcqe);
@@ -827,8 +828,8 @@ static int
 lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
 		      struct nvmefc_ls_rsp *rsp)
 {
-	struct lpfc_nvmet_rcv_ctx *ctxp =
-		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.ls_rsp);
+	struct lpfc_async_xchg_ctx *ctxp =
+		container_of(rsp, struct lpfc_async_xchg_ctx, ls_rsp);
 	struct lpfc_hba *phba = ctxp->phba;
 	struct hbq_dmabuf *nvmebuf =
 		(struct hbq_dmabuf *)ctxp->rqb_buffer;
@@ -918,8 +919,8 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
 		      struct nvmefc_tgt_fcp_req *rsp)
 {
 	struct lpfc_nvmet_tgtport *lpfc_nvmep = tgtport->private;
-	struct lpfc_nvmet_rcv_ctx *ctxp =
-		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.fcp_req);
+	struct lpfc_async_xchg_ctx *ctxp =
+		container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);
 	struct lpfc_hba *phba = ctxp->phba;
 	struct lpfc_queue *wq;
 	struct lpfc_iocbq *nvmewqeq;
@@ -1052,8 +1053,8 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
 			 struct nvmefc_tgt_fcp_req *req)
 {
 	struct lpfc_nvmet_tgtport *lpfc_nvmep = tgtport->private;
-	struct lpfc_nvmet_rcv_ctx *ctxp =
-		container_of(req, struct lpfc_nvmet_rcv_ctx, ctx.fcp_req);
+	struct lpfc_async_xchg_ctx *ctxp =
+		container_of(req, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);
 	struct lpfc_hba *phba = ctxp->phba;
 	struct lpfc_queue *wq;
 	unsigned long flags;
@@ -1114,8 +1115,8 @@ lpfc_nvmet_xmt_fcp_release(struct nvmet_fc_target_port *tgtport,
 			   struct nvmefc_tgt_fcp_req *rsp)
 {
 	struct lpfc_nvmet_tgtport *lpfc_nvmep = tgtport->private;
-	struct lpfc_nvmet_rcv_ctx *ctxp =
-		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.fcp_req);
+	struct lpfc_async_xchg_ctx *ctxp =
+		container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);
 	struct lpfc_hba *phba = ctxp->phba;
 	unsigned long flags;
 	bool aborting = false;
@@ -1157,8 +1158,8 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 		     struct nvmefc_tgt_fcp_req *rsp)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
-	struct lpfc_nvmet_rcv_ctx *ctxp =
-		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.fcp_req);
+	struct lpfc_async_xchg_ctx *ctxp =
+		container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);
 	struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer;
 	struct lpfc_hba *phba = ctxp->phba;
 	unsigned long iflag;
@@ -1564,7 +1565,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
 	uint16_t xri = bf_get(lpfc_wcqe_xa_xri, axri);
 	uint16_t rxid = bf_get(lpfc_wcqe_xa_remote_xid, axri);
-	struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
+	struct lpfc_async_xchg_ctx *ctxp, *next_ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct nvmefc_tgt_fcp_req *req = NULL;
 	struct lpfc_nodelist *ndlp;
@@ -1650,7 +1651,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 				 "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
 				 xri, raw_smp_processor_id(), 0);
 
-		req = &ctxp->ctx.fcp_req;
+		req = &ctxp->hdlrctx.fcp_req;
 		if (req)
 			nvmet_fc_rcv_fcp_abort(phba->targetport, req);
 	}
@@ -1663,7 +1664,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 {
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
 	struct lpfc_hba *phba = vport->phba;
-	struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
+	struct lpfc_async_xchg_ctx *ctxp, *next_ctxp;
 	struct nvmefc_tgt_fcp_req *rsp;
 	uint32_t sid;
 	uint16_t oxid, xri;
@@ -1696,7 +1697,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
 				"6319 NVMET Rcv ABTS:acc xri x%x\n", xri);
 
-		rsp = &ctxp->ctx.fcp_req;
+		rsp = &ctxp->hdlrctx.fcp_req;
 		nvmet_fc_rcv_fcp_abort(phba->targetport, rsp);
 
 		/* Respond with BA_ACC accordingly */
@@ -1770,7 +1771,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 		if (ctxp->flag & LPFC_NVMET_TNOTIFY) {
 			/* Notify the transport */
 			nvmet_fc_rcv_fcp_abort(phba->targetport,
-					       &ctxp->ctx.fcp_req);
+					       &ctxp->hdlrctx.fcp_req);
 		} else {
 			cancel_work_sync(&ctxp->ctxbuf->defer_work);
 			spin_lock_irqsave(&ctxp->ctxlock, iflag);
@@ -1798,7 +1799,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 
 static void
 lpfc_nvmet_wqfull_flush(struct lpfc_hba *phba, struct lpfc_queue *wq,
-			struct lpfc_nvmet_rcv_ctx *ctxp)
+			struct lpfc_async_xchg_ctx *ctxp)
 {
 	struct lpfc_sli_ring *pring;
 	struct lpfc_iocbq *nvmewqeq;
@@ -1849,7 +1850,7 @@ lpfc_nvmet_wqfull_process(struct lpfc_hba *phba,
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
 	struct lpfc_sli_ring *pring;
 	struct lpfc_iocbq *nvmewqeq;
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	unsigned long iflags;
 	int rc;
 
@@ -1863,7 +1864,7 @@ lpfc_nvmet_wqfull_process(struct lpfc_hba *phba,
 		list_remove_head(&wq->wqfull_list, nvmewqeq, struct lpfc_iocbq,
 				 list);
 		spin_unlock_irqrestore(&pring->ring_lock, iflags);
-		ctxp = (struct lpfc_nvmet_rcv_ctx *)nvmewqeq->context2;
+		ctxp = (struct lpfc_async_xchg_ctx *)nvmewqeq->context2;
 		rc = lpfc_sli4_issue_wqe(phba, ctxp->hdwq, nvmewqeq);
 		spin_lock_irqsave(&pring->ring_lock, iflags);
 		if (rc == -EBUSY) {
@@ -1875,7 +1876,7 @@ lpfc_nvmet_wqfull_process(struct lpfc_hba *phba,
 		if (rc == WQE_SUCCESS) {
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 			if (ctxp->ts_cmd_nvme) {
-				if (ctxp->ctx.fcp_req.op == NVMET_FCOP_RSP)
+				if (ctxp->hdlrctx.fcp_req.op == NVMET_FCOP_RSP)
 					ctxp->ts_status_wqput = ktime_get_ns();
 				else
 					ctxp->ts_data_wqput = ktime_get_ns();
@@ -1941,7 +1942,7 @@ lpfc_nvmet_unsol_ls_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct fc_frame_header *fc_hdr;
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	uint32_t *payload;
 	uint32_t size, oxid, sid, rc;
 
@@ -1964,7 +1965,7 @@ lpfc_nvmet_unsol_ls_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	size = bf_get(lpfc_rcqe_length,  &nvmebuf->cq_event.cqe.rcqe_cmpl);
 	sid = sli4_sid_from_fc_hdr(fc_hdr);
 
-	ctxp = kzalloc(sizeof(struct lpfc_nvmet_rcv_ctx), GFP_ATOMIC);
+	ctxp = kzalloc(sizeof(struct lpfc_async_xchg_ctx), GFP_ATOMIC);
 	if (ctxp == NULL) {
 		atomic_inc(&tgtp->rcv_ls_req_drop);
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
@@ -1995,7 +1996,7 @@ lpfc_nvmet_unsol_ls_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	 * lpfc_nvmet_xmt_ls_rsp_cmp should free the allocated ctxp.
 	 */
 	atomic_inc(&tgtp->rcv_ls_req_in);
-	rc = nvmet_fc_rcv_ls_req(phba->targetport, NULL, &ctxp->ctx.ls_rsp,
+	rc = nvmet_fc_rcv_ls_req(phba->targetport, NULL, &ctxp->ls_rsp,
 				 payload, size);
 
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
@@ -2029,7 +2030,7 @@ static void
 lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 {
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
-	struct lpfc_nvmet_rcv_ctx *ctxp = ctx_buf->context;
+	struct lpfc_async_xchg_ctx *ctxp = ctx_buf->context;
 	struct lpfc_hba *phba = ctxp->phba;
 	struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer;
 	struct lpfc_nvmet_tgtport *tgtp;
@@ -2073,7 +2074,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 	 * A buffer has already been reposted for this IO, so just free
 	 * the nvmebuf.
 	 */
-	rc = nvmet_fc_rcv_fcp_req(phba->targetport, &ctxp->ctx.fcp_req,
+	rc = nvmet_fc_rcv_fcp_req(phba->targetport, &ctxp->hdlrctx.fcp_req,
 				  payload, ctxp->size);
 	/* Process FCP command */
 	if (rc == 0) {
@@ -2220,7 +2221,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 			    uint64_t isr_timestamp,
 			    uint8_t cqflag)
 {
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct fc_frame_header *fc_hdr;
 	struct lpfc_nvmet_ctxbuf *ctx_buf;
@@ -2304,7 +2305,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 
 	sid = sli4_sid_from_fc_hdr(fc_hdr);
 
-	ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context;
+	ctxp = (struct lpfc_async_xchg_ctx *)ctx_buf->context;
 	spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
 	list_add_tail(&ctxp->list, &phba->sli4_hba.t_active_ctx_list);
 	spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
@@ -2460,7 +2461,7 @@ lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba,
  **/
 static struct lpfc_iocbq *
 lpfc_nvmet_prep_ls_wqe(struct lpfc_hba *phba,
-		       struct lpfc_nvmet_rcv_ctx *ctxp,
+		       struct lpfc_async_xchg_ctx *ctxp,
 		       dma_addr_t rspbuf, uint16_t rspsize)
 {
 	struct lpfc_nodelist *ndlp;
@@ -2582,9 +2583,9 @@ lpfc_nvmet_prep_ls_wqe(struct lpfc_hba *phba,
 
 static struct lpfc_iocbq *
 lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba,
-			struct lpfc_nvmet_rcv_ctx *ctxp)
+			struct lpfc_async_xchg_ctx *ctxp)
 {
-	struct nvmefc_tgt_fcp_req *rsp = &ctxp->ctx.fcp_req;
+	struct nvmefc_tgt_fcp_req *rsp = &ctxp->hdlrctx.fcp_req;
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct sli4_sge *sgl;
 	struct lpfc_nodelist *ndlp;
@@ -2928,7 +2929,7 @@ static void
 lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 			     struct lpfc_wcqe_complete *wcqe)
 {
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
 	uint32_t result;
 	unsigned long flags;
@@ -2997,7 +2998,7 @@ static void
 lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 			       struct lpfc_wcqe_complete *wcqe)
 {
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
 	unsigned long flags;
 	uint32_t result;
@@ -3078,7 +3079,7 @@ static void
 lpfc_nvmet_xmt_ls_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 			    struct lpfc_wcqe_complete *wcqe)
 {
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
 	uint32_t result;
 
@@ -3119,7 +3120,7 @@ lpfc_nvmet_xmt_ls_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 
 static int
 lpfc_nvmet_unsol_issue_abort(struct lpfc_hba *phba,
-			     struct lpfc_nvmet_rcv_ctx *ctxp,
+			     struct lpfc_async_xchg_ctx *ctxp,
 			     uint32_t sid, uint16_t xri)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
@@ -3214,7 +3215,7 @@ lpfc_nvmet_unsol_issue_abort(struct lpfc_hba *phba,
 
 static int
 lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
-			       struct lpfc_nvmet_rcv_ctx *ctxp,
+			       struct lpfc_async_xchg_ctx *ctxp,
 			       uint32_t sid, uint16_t xri)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
@@ -3340,7 +3341,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 
 static int
 lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *phba,
-				 struct lpfc_nvmet_rcv_ctx *ctxp,
+				 struct lpfc_async_xchg_ctx *ctxp,
 				 uint32_t sid, uint16_t xri)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
@@ -3405,7 +3406,7 @@ lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *phba,
 
 static int
 lpfc_nvmet_unsol_ls_issue_abort(struct lpfc_hba *phba,
-				struct lpfc_nvmet_rcv_ctx *ctxp,
+				struct lpfc_async_xchg_ctx *ctxp,
 				uint32_t sid, uint16_t xri)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index a5f282bf0c38..23f034dfd3e2 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -19894,7 +19894,7 @@ lpfc_sli4_issue_wqe(struct lpfc_hba *phba, struct lpfc_sli4_hdw_queue *qp,
 		    struct lpfc_iocbq *pwqe)
 {
 	union lpfc_wqe128 *wqe = &pwqe->wqe;
-	struct lpfc_nvmet_rcv_ctx *ctxp;
+	struct lpfc_async_xchg_ctx *ctxp;
 	struct lpfc_queue *wq;
 	struct lpfc_sglq *sglq;
 	struct lpfc_sli_ring *pring;
-- 
2.13.7



* [PATCH 22/29] lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (20 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 21/29] lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:19   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 23/29] lpfc: Refactor NVME LS receive handling James Smart
                   ` (8 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

The last step of commonization is to drop the 'T' (LPFC_NVMET_ becomes
LPFC_NVME_) from the state and flag definitions.  This is minor, but it
removes the mental association that these definitions apply solely to
nvmet use.
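
As a small sample (sketch only; the complete set is in the diff below),
the definitions also move out of the structure body so both the nvme and
nvmet paths can share them:

  /* before: defined inline within struct lpfc_async_xchg_ctx */
  #define LPFC_NVMET_STE_LS_RCV		1
  #define LPFC_NVMET_ABORT_OP		0x2  /* Abort WQE issued on exchange */

  /* after: defined ahead of the structure, with the 'T' dropped */
  #define LPFC_NVME_STE_LS_RCV		1
  #define LPFC_NVME_ABORT_OP		0x2  /* Abort WQE issued on exchange */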

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_init.c  |   2 +-
 drivers/scsi/lpfc/lpfc_nvme.h  |  37 +++++-----
 drivers/scsi/lpfc/lpfc_nvmet.c | 158 ++++++++++++++++++++---------------------
 3 files changed, 100 insertions(+), 97 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 7bcd743dba4d..923605382df2 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -1105,7 +1105,7 @@ lpfc_hba_down_post_s4(struct lpfc_hba *phba)
 				 &nvmet_aborts);
 		spin_unlock_irq(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		list_for_each_entry_safe(ctxp, ctxp_next, &nvmet_aborts, list) {
-			ctxp->flag &= ~(LPFC_NVMET_XBUSY | LPFC_NVMET_ABORT_OP);
+			ctxp->flag &= ~(LPFC_NVME_XBUSY | LPFC_NVME_ABORT_OP);
 			lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
 		}
 	}
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 25eebc362121..c5706c950625 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -163,6 +163,26 @@ struct lpfc_nvmet_ctx_info {
 #define lpfc_get_ctx_list(phba, cpu, mrq)  \
 	(phba->sli4_hba.nvmet_ctx_info + ((cpu * phba->cfg_nvmet_mrq) + mrq))
 
+/* Values for state field of struct lpfc_async_xchg_ctx */
+#define LPFC_NVME_STE_LS_RCV		1
+#define LPFC_NVME_STE_LS_ABORT		2
+#define LPFC_NVME_STE_LS_RSP		3
+#define LPFC_NVME_STE_RCV		4
+#define LPFC_NVME_STE_DATA		5
+#define LPFC_NVME_STE_ABORT		6
+#define LPFC_NVME_STE_DONE		7
+#define LPFC_NVME_STE_FREE		0xff
+
+/* Values for flag field of struct lpfc_async_xchg_ctx */
+#define LPFC_NVME_IO_INP		0x1  /* IO is in progress on exchange */
+#define LPFC_NVME_ABORT_OP		0x2  /* Abort WQE issued on exchange */
+#define LPFC_NVME_XBUSY			0x4  /* XB bit set on IO cmpl */
+#define LPFC_NVME_CTX_RLS		0x8  /* ctx free requested */
+#define LPFC_NVME_ABTS_RCV		0x10  /* ABTS received on exchange */
+#define LPFC_NVME_CTX_REUSE_WQ		0x20  /* ctx reused via WQ */
+#define LPFC_NVME_DEFER_WQFULL		0x40  /* Waiting on a free WQE */
+#define LPFC_NVME_TNOTIFY		0x80  /* notify transport of abts */
+
 struct lpfc_async_xchg_ctx {
 	union {
 		struct nvmefc_tgt_fcp_req fcp_req;
@@ -182,24 +202,7 @@ struct lpfc_async_xchg_ctx {
 	uint16_t cpu;
 	uint16_t idx;
 	uint16_t state;
-	/* States */
-#define LPFC_NVMET_STE_LS_RCV		1
-#define LPFC_NVMET_STE_LS_ABORT		2
-#define LPFC_NVMET_STE_LS_RSP		3
-#define LPFC_NVMET_STE_RCV		4
-#define LPFC_NVMET_STE_DATA		5
-#define LPFC_NVMET_STE_ABORT		6
-#define LPFC_NVMET_STE_DONE		7
-#define LPFC_NVMET_STE_FREE		0xff
 	uint16_t flag;
-#define LPFC_NVMET_IO_INP		0x1  /* IO is in progress on exchange */
-#define LPFC_NVMET_ABORT_OP		0x2  /* Abort WQE issued on exchange */
-#define LPFC_NVMET_XBUSY		0x4  /* XB bit set on IO cmpl */
-#define LPFC_NVMET_CTX_RLS		0x8  /* ctx free requested */
-#define LPFC_NVMET_ABTS_RCV		0x10  /* ABTS received on exchange */
-#define LPFC_NVMET_CTX_REUSE_WQ		0x20  /* ctx reused via WQ */
-#define LPFC_NVMET_DEFER_WQFULL		0x40  /* Waiting on a free WQE */
-#define LPFC_NVMET_TNOTIFY		0x80  /* notify transport of abts */
 	struct rqb_dmabuf *rqb_buffer;
 	struct lpfc_nvmet_ctxbuf *ctxbuf;
 	struct lpfc_sli4_hdw_queue *hdwq;
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index ded7f973cad4..28db056cf5af 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -271,10 +271,10 @@ lpfc_nvmet_defer_release(struct lpfc_hba *phba,
 			"6313 NVMET Defer ctx release oxid x%x flg x%x\n",
 			ctxp->oxid, ctxp->flag);
 
-	if (ctxp->flag & LPFC_NVMET_CTX_RLS)
+	if (ctxp->flag & LPFC_NVME_CTX_RLS)
 		return;
 
-	ctxp->flag |= LPFC_NVMET_CTX_RLS;
+	ctxp->flag |= LPFC_NVME_CTX_RLS;
 	spin_lock(&phba->sli4_hba.t_active_list_lock);
 	list_del(&ctxp->list);
 	spin_unlock(&phba->sli4_hba.t_active_list_lock);
@@ -306,7 +306,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	result = wcqe->parameter;
 	ctxp = cmdwqe->context2;
 
-	if (ctxp->state != LPFC_NVMET_STE_LS_RSP || ctxp->entry_cnt != 2) {
+	if (ctxp->state != LPFC_NVME_STE_LS_RSP || ctxp->entry_cnt != 2) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6410 NVMET LS cmpl state mismatch IO x%x: "
 				"%d %d\n",
@@ -374,7 +374,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 	int cpu;
 	unsigned long iflag;
 
-	if (ctxp->state == LPFC_NVMET_STE_FREE) {
+	if (ctxp->state == LPFC_NVME_STE_FREE) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6411 NVMET free, already free IO x%x: %d %d\n",
 				ctxp->oxid, ctxp->state, ctxp->entry_cnt);
@@ -386,8 +386,8 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 		/* check if freed in another path whilst acquiring lock */
 		if (nvmebuf) {
 			ctxp->rqb_buffer = NULL;
-			if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) {
-				ctxp->flag &= ~LPFC_NVMET_CTX_REUSE_WQ;
+			if (ctxp->flag & LPFC_NVME_CTX_REUSE_WQ) {
+				ctxp->flag &= ~LPFC_NVME_CTX_REUSE_WQ;
 				spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 				nvmebuf->hrq->rqbp->rqb_free_buffer(phba,
 								    nvmebuf);
@@ -400,7 +400,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 			spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 		}
 	}
-	ctxp->state = LPFC_NVMET_STE_FREE;
+	ctxp->state = LPFC_NVME_STE_FREE;
 
 	spin_lock_irqsave(&phba->sli4_hba.nvmet_io_wait_lock, iflag);
 	if (phba->sli4_hba.nvmet_io_wait_cnt) {
@@ -424,7 +424,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 		ctxp->size = size;
 		ctxp->oxid = oxid;
 		ctxp->sid = sid;
-		ctxp->state = LPFC_NVMET_STE_RCV;
+		ctxp->state = LPFC_NVME_STE_RCV;
 		ctxp->entry_cnt = 1;
 		ctxp->flag = 0;
 		ctxp->ctxbuf = ctx_buf;
@@ -449,7 +449,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 
 		/* Indicate that a replacement buffer has been posted */
 		spin_lock_irqsave(&ctxp->ctxlock, iflag);
-		ctxp->flag |= LPFC_NVMET_CTX_REUSE_WQ;
+		ctxp->flag |= LPFC_NVME_CTX_REUSE_WQ;
 		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 
 		if (!queue_work(phba->wq, &ctx_buf->defer_work)) {
@@ -707,7 +707,7 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 #endif
 
 	ctxp = cmdwqe->context2;
-	ctxp->flag &= ~LPFC_NVMET_IO_INP;
+	ctxp->flag &= ~LPFC_NVME_IO_INP;
 
 	rsp = &ctxp->hdlrctx.fcp_req;
 	op = rsp->op;
@@ -736,13 +736,13 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 
 		/* pick up SLI4 exhange busy condition */
 		if (bf_get(lpfc_wcqe_c_xb, wcqe)) {
-			ctxp->flag |= LPFC_NVMET_XBUSY;
+			ctxp->flag |= LPFC_NVME_XBUSY;
 			logerr |= LOG_NVME_ABTS;
 			if (tgtp)
 				atomic_inc(&tgtp->xmt_fcp_rsp_xb_set);
 
 		} else {
-			ctxp->flag &= ~LPFC_NVMET_XBUSY;
+			ctxp->flag &= ~LPFC_NVME_XBUSY;
 		}
 
 		lpfc_printf_log(phba, KERN_INFO, logerr,
@@ -764,7 +764,7 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	if ((op == NVMET_FCOP_READDATA_RSP) ||
 	    (op == NVMET_FCOP_RSP)) {
 		/* Sanity check */
-		ctxp->state = LPFC_NVMET_STE_DONE;
+		ctxp->state = LPFC_NVME_STE_DONE;
 		ctxp->entry_cnt++;
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
@@ -848,14 +848,14 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
 			"6023 NVMET LS rsp oxid x%x\n", ctxp->oxid);
 
-	if ((ctxp->state != LPFC_NVMET_STE_LS_RCV) ||
+	if ((ctxp->state != LPFC_NVME_STE_LS_RCV) ||
 	    (ctxp->entry_cnt != 1)) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6412 NVMET LS rsp state mismatch "
 				"oxid x%x: %d %d\n",
 				ctxp->oxid, ctxp->state, ctxp->entry_cnt);
 	}
-	ctxp->state = LPFC_NVMET_STE_LS_RSP;
+	ctxp->state = LPFC_NVME_STE_LS_RSP;
 	ctxp->entry_cnt++;
 
 	nvmewqeq = lpfc_nvmet_prep_ls_wqe(phba, ctxp, rsp->rspdma,
@@ -965,8 +965,8 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
 #endif
 
 	/* Sanity check */
-	if ((ctxp->flag & LPFC_NVMET_ABTS_RCV) ||
-	    (ctxp->state == LPFC_NVMET_STE_ABORT)) {
+	if ((ctxp->flag & LPFC_NVME_ABTS_RCV) ||
+	    (ctxp->state == LPFC_NVME_STE_ABORT)) {
 		atomic_inc(&lpfc_nvmep->xmt_fcp_drop);
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6102 IO oxid x%x aborted\n",
@@ -994,7 +994,7 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
 	lpfc_nvmeio_data(phba, "NVMET FCP CMND: xri x%x op x%x len x%x\n",
 			 ctxp->oxid, rsp->op, rsp->rsplen);
 
-	ctxp->flag |= LPFC_NVMET_IO_INP;
+	ctxp->flag |= LPFC_NVME_IO_INP;
 	rc = lpfc_sli4_issue_wqe(phba, ctxp->hdwq, nvmewqeq);
 	if (rc == WQE_SUCCESS) {
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
@@ -1013,7 +1013,7 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
 		 * WQ was full, so queue nvmewqeq to be sent after
 		 * WQE release CQE
 		 */
-		ctxp->flag |= LPFC_NVMET_DEFER_WQFULL;
+		ctxp->flag |= LPFC_NVME_DEFER_WQFULL;
 		wq = ctxp->hdwq->io_wq;
 		pring = wq->pring;
 		spin_lock_irqsave(&pring->ring_lock, iflags);
@@ -1082,13 +1082,13 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
 	/* Since iaab/iaar are NOT set, we need to check
 	 * if the firmware is in process of aborting IO
 	 */
-	if (ctxp->flag & (LPFC_NVMET_XBUSY | LPFC_NVMET_ABORT_OP)) {
+	if (ctxp->flag & (LPFC_NVME_XBUSY | LPFC_NVME_ABORT_OP)) {
 		spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 		return;
 	}
-	ctxp->flag |= LPFC_NVMET_ABORT_OP;
+	ctxp->flag |= LPFC_NVME_ABORT_OP;
 
-	if (ctxp->flag & LPFC_NVMET_DEFER_WQFULL) {
+	if (ctxp->flag & LPFC_NVME_DEFER_WQFULL) {
 		spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 		lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, ctxp->sid,
 						 ctxp->oxid);
@@ -1098,11 +1098,11 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
 	}
 	spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 
-	/* An state of LPFC_NVMET_STE_RCV means we have just received
+	/* A state of LPFC_NVME_STE_RCV means we have just received
 	 * the NVME command and have not started processing it.
 	 * (by issuing any IO WQEs on this exchange yet)
 	 */
-	if (ctxp->state == LPFC_NVMET_STE_RCV)
+	if (ctxp->state == LPFC_NVME_STE_RCV)
 		lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, ctxp->sid,
 						 ctxp->oxid);
 	else
@@ -1122,19 +1122,19 @@ lpfc_nvmet_xmt_fcp_release(struct nvmet_fc_target_port *tgtport,
 	bool aborting = false;
 
 	spin_lock_irqsave(&ctxp->ctxlock, flags);
-	if (ctxp->flag & LPFC_NVMET_XBUSY)
+	if (ctxp->flag & LPFC_NVME_XBUSY)
 		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
 				"6027 NVMET release with XBUSY flag x%x"
 				" oxid x%x\n",
 				ctxp->flag, ctxp->oxid);
-	else if (ctxp->state != LPFC_NVMET_STE_DONE &&
-		 ctxp->state != LPFC_NVMET_STE_ABORT)
+	else if (ctxp->state != LPFC_NVME_STE_DONE &&
+		 ctxp->state != LPFC_NVME_STE_ABORT)
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6413 NVMET release bad state %d %d oxid x%x\n",
 				ctxp->state, ctxp->entry_cnt, ctxp->oxid);
 
-	if ((ctxp->flag & LPFC_NVMET_ABORT_OP) ||
-	    (ctxp->flag & LPFC_NVMET_XBUSY)) {
+	if ((ctxp->flag & LPFC_NVME_ABORT_OP) ||
+	    (ctxp->flag & LPFC_NVME_XBUSY)) {
 		aborting = true;
 		/* let the abort path do the real release */
 		lpfc_nvmet_defer_release(phba, ctxp);
@@ -1145,7 +1145,7 @@ lpfc_nvmet_xmt_fcp_release(struct nvmet_fc_target_port *tgtport,
 			 ctxp->state, aborting);
 
 	atomic_inc(&lpfc_nvmep->xmt_fcp_release);
-	ctxp->flag &= ~LPFC_NVMET_TNOTIFY;
+	ctxp->flag &= ~LPFC_NVME_TNOTIFY;
 
 	if (aborting)
 		return;
@@ -1365,7 +1365,7 @@ lpfc_nvmet_setup_io_context(struct lpfc_hba *phba)
 			return -ENOMEM;
 		}
 		ctx_buf->context->ctxbuf = ctx_buf;
-		ctx_buf->context->state = LPFC_NVMET_STE_FREE;
+		ctx_buf->context->state = LPFC_NVME_STE_FREE;
 
 		ctx_buf->iocbq = lpfc_sli_get_iocbq(phba);
 		if (!ctx_buf->iocbq) {
@@ -1596,12 +1596,12 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 		/* Check if we already received a free context call
 		 * and we have completed processing an abort situation.
 		 */
-		if (ctxp->flag & LPFC_NVMET_CTX_RLS &&
-		    !(ctxp->flag & LPFC_NVMET_ABORT_OP)) {
+		if (ctxp->flag & LPFC_NVME_CTX_RLS &&
+		    !(ctxp->flag & LPFC_NVME_ABORT_OP)) {
 			list_del_init(&ctxp->list);
 			released = true;
 		}
-		ctxp->flag &= ~LPFC_NVMET_XBUSY;
+		ctxp->flag &= ~LPFC_NVME_XBUSY;
 		spin_unlock(&ctxp->ctxlock);
 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 
@@ -1643,8 +1643,8 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 				rxid);
 
 		spin_lock_irqsave(&ctxp->ctxlock, iflag);
-		ctxp->flag |= LPFC_NVMET_ABTS_RCV;
-		ctxp->state = LPFC_NVMET_STE_ABORT;
+		ctxp->flag |= LPFC_NVME_ABTS_RCV;
+		ctxp->state = LPFC_NVME_STE_ABORT;
 		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 
 		lpfc_nvmeio_data(phba,
@@ -1687,7 +1687,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 		spin_unlock_irqrestore(&phba->hbalock, iflag);
 
 		spin_lock_irqsave(&ctxp->ctxlock, iflag);
-		ctxp->flag |= LPFC_NVMET_ABTS_RCV;
+		ctxp->flag |= LPFC_NVME_ABTS_RCV;
 		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 
 		lpfc_nvmeio_data(phba,
@@ -1756,7 +1756,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 		xri = ctxp->ctxbuf->sglq->sli4_xritag;
 
 		spin_lock_irqsave(&ctxp->ctxlock, iflag);
-		ctxp->flag |= (LPFC_NVMET_ABTS_RCV | LPFC_NVMET_ABORT_OP);
+		ctxp->flag |= (LPFC_NVME_ABTS_RCV | LPFC_NVME_ABORT_OP);
 		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 
 		lpfc_nvmeio_data(phba,
@@ -1768,7 +1768,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 				"flag x%x state x%x\n",
 				ctxp->oxid, xri, ctxp->flag, ctxp->state);
 
-		if (ctxp->flag & LPFC_NVMET_TNOTIFY) {
+		if (ctxp->flag & LPFC_NVME_TNOTIFY) {
 			/* Notify the transport */
 			nvmet_fc_rcv_fcp_abort(phba->targetport,
 					       &ctxp->hdlrctx.fcp_req);
@@ -1983,7 +1983,7 @@ lpfc_nvmet_unsol_ls_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	ctxp->oxid = oxid;
 	ctxp->sid = sid;
 	ctxp->wqeq = NULL;
-	ctxp->state = LPFC_NVMET_STE_LS_RCV;
+	ctxp->state = LPFC_NVME_STE_LS_RCV;
 	ctxp->entry_cnt = 1;
 	ctxp->rqb_buffer = (void *)nvmebuf;
 	ctxp->hdwq = &phba->sli4_hba.hdwq[0];
@@ -2051,7 +2051,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 		return;
 	}
 
-	if (ctxp->flag & LPFC_NVMET_ABTS_RCV) {
+	if (ctxp->flag & LPFC_NVME_ABTS_RCV) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6324 IO oxid x%x aborted\n",
 				ctxp->oxid);
@@ -2060,7 +2060,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 
 	payload = (uint32_t *)(nvmebuf->dbuf.virt);
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
-	ctxp->flag |= LPFC_NVMET_TNOTIFY;
+	ctxp->flag |= LPFC_NVME_TNOTIFY;
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	if (ctxp->ts_isr_cmd)
 		ctxp->ts_cmd_nvme = ktime_get_ns();
@@ -2080,7 +2080,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 	if (rc == 0) {
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
 		spin_lock_irqsave(&ctxp->ctxlock, iflags);
-		if ((ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) ||
+		if ((ctxp->flag & LPFC_NVME_CTX_REUSE_WQ) ||
 		    (nvmebuf != ctxp->rqb_buffer)) {
 			spin_unlock_irqrestore(&ctxp->ctxlock, iflags);
 			return;
@@ -2099,7 +2099,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
 		atomic_inc(&tgtp->defer_fod);
 		spin_lock_irqsave(&ctxp->ctxlock, iflags);
-		if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) {
+		if (ctxp->flag & LPFC_NVME_CTX_REUSE_WQ) {
 			spin_unlock_irqrestore(&ctxp->ctxlock, iflags);
 			return;
 		}
@@ -2114,7 +2114,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 			phba->sli4_hba.nvmet_mrq_data[qno], 1, qno);
 		return;
 	}
-	ctxp->flag &= ~LPFC_NVMET_TNOTIFY;
+	ctxp->flag &= ~LPFC_NVME_TNOTIFY;
 	atomic_inc(&tgtp->rcv_fcp_cmd_drop);
 	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 			"2582 FCP Drop IO x%x: err x%x: x%x x%x x%x\n",
@@ -2309,7 +2309,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
 	list_add_tail(&ctxp->list, &phba->sli4_hba.t_active_ctx_list);
 	spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
-	if (ctxp->state != LPFC_NVMET_STE_FREE) {
+	if (ctxp->state != LPFC_NVME_STE_FREE) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6414 NVMET Context corrupt %d %d oxid x%x\n",
 				ctxp->state, ctxp->entry_cnt, ctxp->oxid);
@@ -2321,7 +2321,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	ctxp->oxid = oxid;
 	ctxp->sid = sid;
 	ctxp->idx = idx;
-	ctxp->state = LPFC_NVMET_STE_RCV;
+	ctxp->state = LPFC_NVME_STE_RCV;
 	ctxp->entry_cnt = 1;
 	ctxp->flag = 0;
 	ctxp->ctxbuf = ctx_buf;
@@ -2645,9 +2645,9 @@ lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba,
 	}
 
 	/* Sanity check */
-	if (((ctxp->state == LPFC_NVMET_STE_RCV) &&
+	if (((ctxp->state == LPFC_NVME_STE_RCV) &&
 	    (ctxp->entry_cnt == 1)) ||
-	    (ctxp->state == LPFC_NVMET_STE_DATA)) {
+	    (ctxp->state == LPFC_NVME_STE_DATA)) {
 		wqe = &nvmewqe->wqe;
 	} else {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
@@ -2910,7 +2910,7 @@ lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba,
 		sgl++;
 		ctxp->offset += cnt;
 	}
-	ctxp->state = LPFC_NVMET_STE_DATA;
+	ctxp->state = LPFC_NVME_STE_DATA;
 	ctxp->entry_cnt++;
 	return nvmewqe;
 }
@@ -2939,23 +2939,23 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	result = wcqe->parameter;
 
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
-	if (ctxp->flag & LPFC_NVMET_ABORT_OP)
+	if (ctxp->flag & LPFC_NVME_ABORT_OP)
 		atomic_inc(&tgtp->xmt_fcp_abort_cmpl);
 
 	spin_lock_irqsave(&ctxp->ctxlock, flags);
-	ctxp->state = LPFC_NVMET_STE_DONE;
+	ctxp->state = LPFC_NVME_STE_DONE;
 
 	/* Check if we already received a free context call
 	 * and we have completed processing an abort situation.
 	 */
-	if ((ctxp->flag & LPFC_NVMET_CTX_RLS) &&
-	    !(ctxp->flag & LPFC_NVMET_XBUSY)) {
+	if ((ctxp->flag & LPFC_NVME_CTX_RLS) &&
+	    !(ctxp->flag & LPFC_NVME_XBUSY)) {
 		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		list_del_init(&ctxp->list);
 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		released = true;
 	}
-	ctxp->flag &= ~LPFC_NVMET_ABORT_OP;
+	ctxp->flag &= ~LPFC_NVME_ABORT_OP;
 	spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 	atomic_inc(&tgtp->xmt_abort_rsp);
 
@@ -2979,7 +2979,7 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	lpfc_sli_release_iocbq(phba, cmdwqe);
 
 	/* Since iaab/iaar are NOT set, there is no work left.
-	 * For LPFC_NVMET_XBUSY, lpfc_sli4_nvmet_xri_aborted
+	 * For LPFC_NVME_XBUSY, lpfc_sli4_nvmet_xri_aborted
 	 * should have been called already.
 	 */
 }
@@ -3018,11 +3018,11 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
 	spin_lock_irqsave(&ctxp->ctxlock, flags);
-	if (ctxp->flag & LPFC_NVMET_ABORT_OP)
+	if (ctxp->flag & LPFC_NVME_ABORT_OP)
 		atomic_inc(&tgtp->xmt_fcp_abort_cmpl);
 
 	/* Sanity check */
-	if (ctxp->state != LPFC_NVMET_STE_ABORT) {
+	if (ctxp->state != LPFC_NVME_STE_ABORT) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_ABTS,
 				"6112 ABTS Wrong state:%d oxid x%x\n",
 				ctxp->state, ctxp->oxid);
@@ -3031,15 +3031,15 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	/* Check if we already received a free context call
 	 * and we have completed processing an abort situation.
 	 */
-	ctxp->state = LPFC_NVMET_STE_DONE;
-	if ((ctxp->flag & LPFC_NVMET_CTX_RLS) &&
-	    !(ctxp->flag & LPFC_NVMET_XBUSY)) {
+	ctxp->state = LPFC_NVME_STE_DONE;
+	if ((ctxp->flag & LPFC_NVME_CTX_RLS) &&
+	    !(ctxp->flag & LPFC_NVME_XBUSY)) {
 		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		list_del_init(&ctxp->list);
 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		released = true;
 	}
-	ctxp->flag &= ~LPFC_NVMET_ABORT_OP;
+	ctxp->flag &= ~LPFC_NVME_ABORT_OP;
 	spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 	atomic_inc(&tgtp->xmt_abort_rsp);
 
@@ -3060,7 +3060,7 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 		lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
 
 	/* Since iaab/iaar are NOT set, there is no work left.
-	 * For LPFC_NVMET_XBUSY, lpfc_sli4_nvmet_xri_aborted
+	 * For LPFC_NVME_XBUSY, lpfc_sli4_nvmet_xri_aborted
 	 * should have been called already.
 	 */
 }
@@ -3105,7 +3105,7 @@ lpfc_nvmet_xmt_ls_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 		return;
 	}
 
-	if (ctxp->state != LPFC_NVMET_STE_LS_ABORT) {
+	if (ctxp->state != LPFC_NVME_STE_LS_ABORT) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6416 NVMET LS abort cmpl state mismatch: "
 				"oxid x%x: %d %d\n",
@@ -3242,7 +3242,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 
 		/* No failure to an ABTS request. */
 		spin_lock_irqsave(&ctxp->ctxlock, flags);
-		ctxp->flag &= ~LPFC_NVMET_ABORT_OP;
+		ctxp->flag &= ~LPFC_NVME_ABORT_OP;
 		spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 		return 0;
 	}
@@ -3256,13 +3256,13 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 				"6161 ABORT failed: No wqeqs: "
 				"xri: x%x\n", ctxp->oxid);
 		/* No failure to an ABTS request. */
-		ctxp->flag &= ~LPFC_NVMET_ABORT_OP;
+		ctxp->flag &= ~LPFC_NVME_ABORT_OP;
 		spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 		return 0;
 	}
 	abts_wqeq = ctxp->abort_wqeq;
-	ctxp->state = LPFC_NVMET_STE_ABORT;
-	opt = (ctxp->flag & LPFC_NVMET_ABTS_RCV) ? INHIBIT_ABORT : 0;
+	ctxp->state = LPFC_NVME_STE_ABORT;
+	opt = (ctxp->flag & LPFC_NVME_ABTS_RCV) ? INHIBIT_ABORT : 0;
 	spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 
 	/* Announce entry to new IO submit field. */
@@ -3285,7 +3285,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 				phba->hba_flag, ctxp->oxid);
 		lpfc_sli_release_iocbq(phba, abts_wqeq);
 		spin_lock_irqsave(&ctxp->ctxlock, flags);
-		ctxp->flag &= ~LPFC_NVMET_ABORT_OP;
+		ctxp->flag &= ~LPFC_NVME_ABORT_OP;
 		spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 		return 0;
 	}
@@ -3300,7 +3300,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 				ctxp->oxid);
 		lpfc_sli_release_iocbq(phba, abts_wqeq);
 		spin_lock_irqsave(&ctxp->ctxlock, flags);
-		ctxp->flag &= ~LPFC_NVMET_ABORT_OP;
+		ctxp->flag &= ~LPFC_NVME_ABORT_OP;
 		spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 		return 0;
 	}
@@ -3329,7 +3329,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 
 	atomic_inc(&tgtp->xmt_abort_rsp_error);
 	spin_lock_irqsave(&ctxp->ctxlock, flags);
-	ctxp->flag &= ~LPFC_NVMET_ABORT_OP;
+	ctxp->flag &= ~LPFC_NVME_ABORT_OP;
 	spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 	lpfc_sli_release_iocbq(phba, abts_wqeq);
 	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_ABTS,
@@ -3356,14 +3356,14 @@ lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *phba,
 		ctxp->wqeq->hba_wqidx = 0;
 	}
 
-	if (ctxp->state == LPFC_NVMET_STE_FREE) {
+	if (ctxp->state == LPFC_NVME_STE_FREE) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6417 NVMET ABORT ctx freed %d %d oxid x%x\n",
 				ctxp->state, ctxp->entry_cnt, ctxp->oxid);
 		rc = WQE_BUSY;
 		goto aerr;
 	}
-	ctxp->state = LPFC_NVMET_STE_ABORT;
+	ctxp->state = LPFC_NVME_STE_ABORT;
 	ctxp->entry_cnt++;
 	rc = lpfc_nvmet_unsol_issue_abort(phba, ctxp, sid, xri);
 	if (rc == 0)
@@ -3385,13 +3385,13 @@ lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *phba,
 
 aerr:
 	spin_lock_irqsave(&ctxp->ctxlock, flags);
-	if (ctxp->flag & LPFC_NVMET_CTX_RLS) {
+	if (ctxp->flag & LPFC_NVME_CTX_RLS) {
 		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		list_del_init(&ctxp->list);
 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		released = true;
 	}
-	ctxp->flag &= ~(LPFC_NVMET_ABORT_OP | LPFC_NVMET_CTX_RLS);
+	ctxp->flag &= ~(LPFC_NVME_ABORT_OP | LPFC_NVME_CTX_RLS);
 	spin_unlock_irqrestore(&ctxp->ctxlock, flags);
 
 	atomic_inc(&tgtp->xmt_abort_rsp_error);
@@ -3414,16 +3414,16 @@ lpfc_nvmet_unsol_ls_issue_abort(struct lpfc_hba *phba,
 	unsigned long flags;
 	int rc;
 
-	if ((ctxp->state == LPFC_NVMET_STE_LS_RCV && ctxp->entry_cnt == 1) ||
-	    (ctxp->state == LPFC_NVMET_STE_LS_RSP && ctxp->entry_cnt == 2)) {
-		ctxp->state = LPFC_NVMET_STE_LS_ABORT;
+	if ((ctxp->state == LPFC_NVME_STE_LS_RCV && ctxp->entry_cnt == 1) ||
+	    (ctxp->state == LPFC_NVME_STE_LS_RSP && ctxp->entry_cnt == 2)) {
+		ctxp->state = LPFC_NVME_STE_LS_ABORT;
 		ctxp->entry_cnt++;
 	} else {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6418 NVMET LS abort state mismatch "
 				"IO x%x: %d %d\n",
 				ctxp->oxid, ctxp->state, ctxp->entry_cnt);
-		ctxp->state = LPFC_NVMET_STE_LS_ABORT;
+		ctxp->state = LPFC_NVME_STE_LS_ABORT;
 	}
 
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
-- 
2.13.7



* [PATCH 23/29] lpfc: Refactor NVME LS receive handling
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (21 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 22/29] lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:20   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 24/29] lpfc: Refactor Send LS Request support James Smart
                   ` (7 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

In preparation for supporting both initiator mode and target mode
receiving NVME LS's, commonize the existing NVME LS request receive
handling found in the base driver and in the nvmet side.

Using the original lpfc_nvmet_unsol_ls_event() and
lpfc_nvme_unsol_ls_buffer() routines as templates, commonize the
reception of an NVME LS request. The common routine validates the LS
request, verifies that it was received from a logged-in node, and
allocates an lpfc_async_xchg_ctx to manage the LS request. The role of
the port is then inspected to determine which handler should receive
the LS - nvme or nvmet. As such, the nvmet handler is tied back in, and
a corresponding handler is created on the nvme side and stubbed out.
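
A rough sketch of the resulting dispatch (simplified; the common receive
code lands in lpfc_sli.c, and the nvmet_support check below is only an
illustration of the role test described above, not the verbatim code):

  /* after validating the LS, checking the node login state, and
   * allocating the lpfc_async_xchg_ctx, hand the LS to the handler
   * that matches the port's role
   */
  if (phba->nvmet_support)
          rc = lpfc_nvmet_handle_lsreq(phba, axchg);  /* target side */
  else
          rc = lpfc_nvme_handle_lsreq(phba, axchg);   /* host side; stub for now */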

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_crtn.h  |   6 +-
 drivers/scsi/lpfc/lpfc_nvme.c  |  19 +++++
 drivers/scsi/lpfc/lpfc_nvme.h  |   5 ++
 drivers/scsi/lpfc/lpfc_nvmet.c | 163 ++++++++++-------------------------------
 drivers/scsi/lpfc/lpfc_sli.c   | 121 +++++++++++++++++++++++++++++-
 5 files changed, 184 insertions(+), 130 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index 9cd7767636d3..928e40fcf544 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -564,8 +564,10 @@ void lpfc_nvme_update_localport(struct lpfc_vport *vport);
 int lpfc_nvmet_create_targetport(struct lpfc_hba *phba);
 int lpfc_nvmet_update_targetport(struct lpfc_hba *phba);
 void lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba);
-void lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba,
-			struct lpfc_sli_ring *pring, struct lpfc_iocbq *piocb);
+int lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
+			struct lpfc_async_xchg_ctx *axchg);
+int lpfc_nvmet_handle_lsreq(struct lpfc_hba *phba,
+			struct lpfc_async_xchg_ctx *axchg);
 void lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba, uint32_t idx,
 				struct rqb_dmabuf *nvmebuf, uint64_t isr_ts,
 				uint8_t cqflag);
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 21f2282b26ba..daded70ce7b6 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -391,6 +391,25 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
 	return;
 }
 
+/**
+ * lpfc_nvme_handle_lsreq - Process an unsolicited NVME LS request
+ * @phba: pointer to lpfc hba data structure.
+ * @axchg: pointer to exchange context for the NVME LS request
+ *
+ * This routine is used for processing an asynchronously received NVME LS
+ * request. Any remaining validation is done and the LS is then forwarded
+ * to the nvme-fc transport via nvme_fc_rcv_ls_req().
+ *
+ * Returns 0 if LS was handled and delivered to the transport
+ * Returns 1 if LS failed to be handled and should be dropped
+ */
+int
+lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
+			struct lpfc_async_xchg_ctx *axchg)
+{
+	return 1;
+}
+
 static void
 lpfc_nvme_cmpl_gen_req(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 		       struct lpfc_wcqe_complete *wcqe)
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index c5706c950625..7525b12b06c8 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -189,6 +189,7 @@ struct lpfc_async_xchg_ctx {
 	} hdlrctx;
 	struct list_head list;
 	struct lpfc_hba *phba;
+	struct lpfc_nodelist *ndlp;
 	struct nvmefc_ls_req *ls_req;
 	struct nvmefc_ls_rsp ls_rsp;
 	struct lpfc_iocbq *wqeq;
@@ -203,6 +204,7 @@ struct lpfc_async_xchg_ctx {
 	uint16_t idx;
 	uint16_t state;
 	uint16_t flag;
+	void *payload;
 	struct rqb_dmabuf *rqb_buffer;
 	struct lpfc_nvmet_ctxbuf *ctxbuf;
 	struct lpfc_sli4_hdw_queue *hdwq;
@@ -225,3 +227,6 @@ struct lpfc_async_xchg_ctx {
 /* routines found in lpfc_nvme.c */
 
 /* routines found in lpfc_nvmet.c */
+int lpfc_nvme_unsol_ls_issue_abort(struct lpfc_hba *phba,
+			struct lpfc_async_xchg_ctx *ctxp, uint32_t sid,
+			uint16_t xri);
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 28db056cf5af..e6895c719683 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -63,9 +63,6 @@ static int lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *,
 static int lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *,
 					    struct lpfc_async_xchg_ctx *,
 					    uint32_t, uint16_t);
-static int lpfc_nvmet_unsol_ls_issue_abort(struct lpfc_hba *,
-					   struct lpfc_async_xchg_ctx *,
-					   uint32_t, uint16_t);
 static void lpfc_nvmet_wqfull_flush(struct lpfc_hba *, struct lpfc_queue *,
 				    struct lpfc_async_xchg_ctx *);
 static void lpfc_nvmet_fcp_rqst_defer_work(struct work_struct *);
@@ -867,7 +864,7 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
 				ctxp->oxid);
 		lpfc_in_buf_free(phba, &nvmebuf->dbuf);
 		atomic_inc(&nvmep->xmt_ls_abort);
-		lpfc_nvmet_unsol_ls_issue_abort(phba, ctxp,
+		lpfc_nvme_unsol_ls_issue_abort(phba, ctxp,
 						ctxp->sid, ctxp->oxid);
 		return -ENOMEM;
 	}
@@ -910,7 +907,7 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
 
 	lpfc_in_buf_free(phba, &nvmebuf->dbuf);
 	atomic_inc(&nvmep->xmt_ls_abort);
-	lpfc_nvmet_unsol_ls_issue_abort(phba, ctxp, ctxp->sid, ctxp->oxid);
+	lpfc_nvme_unsol_ls_issue_abort(phba, ctxp, ctxp->sid, ctxp->oxid);
 	return -ENXIO;
 }
 
@@ -1923,107 +1920,49 @@ lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba)
 }
 
 /**
- * lpfc_nvmet_unsol_ls_buffer - Process an unsolicited event data buffer
+ * lpfc_nvmet_handle_lsreq - Process an NVME LS request
  * @phba: pointer to lpfc hba data structure.
- * @pring: pointer to a SLI ring.
- * @nvmebuf: pointer to lpfc nvme command HBQ data structure.
+ * @axchg: pointer to exchange context for the NVME LS request
  *
- * This routine is used for processing the WQE associated with a unsolicited
- * event. It first determines whether there is an existing ndlp that matches
- * the DID from the unsolicited WQE. If not, it will create a new one with
- * the DID from the unsolicited WQE. The ELS command from the unsolicited
- * WQE is then used to invoke the proper routine and to set up proper state
- * of the discovery state machine.
- **/
-static void
-lpfc_nvmet_unsol_ls_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-			   struct hbq_dmabuf *nvmebuf)
+ * This routine is used for processing an asynchronously received NVME LS
+ * request. Any remaining validation is done and the LS is then forwarded
+ * to the nvmet-fc transport via nvmet_fc_rcv_ls_req().
+ *
+ * The calling sequence should be: nvmet_fc_rcv_ls_req() -> (processing)
+ * -> lpfc_nvmet_xmt_ls_rsp/cmp -> req->done.
+ * lpfc_nvme_xmt_ls_rsp_cmp should free the allocated axchg.
+ *
+ * Returns 0 if LS was handled and delivered to the transport
+ * Returns 1 if LS failed to be handled and should be dropped
+ */
+int
+lpfc_nvmet_handle_lsreq(struct lpfc_hba *phba,
+			struct lpfc_async_xchg_ctx *axchg)
 {
 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
-	struct lpfc_nvmet_tgtport *tgtp;
-	struct fc_frame_header *fc_hdr;
-	struct lpfc_async_xchg_ctx *ctxp;
-	uint32_t *payload;
-	uint32_t size, oxid, sid, rc;
-
-
-	if (!nvmebuf || !phba->targetport) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-				"6154 LS Drop IO\n");
-		oxid = 0;
-		size = 0;
-		sid = 0;
-		ctxp = NULL;
-		goto dropit;
-	}
-
-	fc_hdr = (struct fc_frame_header *)(nvmebuf->hbuf.virt);
-	oxid = be16_to_cpu(fc_hdr->fh_ox_id);
-
-	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
-	payload = (uint32_t *)(nvmebuf->dbuf.virt);
-	size = bf_get(lpfc_rcqe_length,  &nvmebuf->cq_event.cqe.rcqe_cmpl);
-	sid = sli4_sid_from_fc_hdr(fc_hdr);
-
-	ctxp = kzalloc(sizeof(struct lpfc_async_xchg_ctx), GFP_ATOMIC);
-	if (ctxp == NULL) {
-		atomic_inc(&tgtp->rcv_ls_req_drop);
-		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-				"6155 LS Drop IO x%x: Alloc\n",
-				oxid);
-dropit:
-		lpfc_nvmeio_data(phba, "NVMET LS  DROP: "
-				 "xri x%x sz %d from %06x\n",
-				 oxid, size, sid);
-		lpfc_in_buf_free(phba, &nvmebuf->dbuf);
-		return;
-	}
-	ctxp->phba = phba;
-	ctxp->size = size;
-	ctxp->oxid = oxid;
-	ctxp->sid = sid;
-	ctxp->wqeq = NULL;
-	ctxp->state = LPFC_NVME_STE_LS_RCV;
-	ctxp->entry_cnt = 1;
-	ctxp->rqb_buffer = (void *)nvmebuf;
-	ctxp->hdwq = &phba->sli4_hba.hdwq[0];
+	struct lpfc_nvmet_tgtport *tgtp = phba->targetport->private;
+	uint32_t *payload = axchg->payload;
+	int rc;
 
-	lpfc_nvmeio_data(phba, "NVMET LS   RCV: xri x%x sz %d from %06x\n",
-			 oxid, size, sid);
-	/*
-	 * The calling sequence should be:
-	 * nvmet_fc_rcv_ls_req -> lpfc_nvmet_xmt_ls_rsp/cmp ->_req->done
-	 * lpfc_nvmet_xmt_ls_rsp_cmp should free the allocated ctxp.
-	 */
 	atomic_inc(&tgtp->rcv_ls_req_in);
-	rc = nvmet_fc_rcv_ls_req(phba->targetport, NULL, &ctxp->ls_rsp,
-				 payload, size);
+
+	rc = nvmet_fc_rcv_ls_req(phba->targetport, NULL, &axchg->ls_rsp,
+				 axchg->payload, axchg->size);
 
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
 			"6037 NVMET Unsol rcv: sz %d rc %d: %08x %08x %08x "
-			"%08x %08x %08x\n", size, rc,
+			"%08x %08x %08x\n", axchg->size, rc,
 			*payload, *(payload+1), *(payload+2),
 			*(payload+3), *(payload+4), *(payload+5));
 
-	if (rc == 0) {
+	if (!rc) {
 		atomic_inc(&tgtp->rcv_ls_req_out);
-		return;
+		return 0;
 	}
 
-	lpfc_nvmeio_data(phba, "NVMET LS  DROP: xri x%x sz %d from %06x\n",
-			 oxid, size, sid);
-
 	atomic_inc(&tgtp->rcv_ls_req_drop);
-	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-			"6156 LS Drop IO x%x: nvmet_fc_rcv_ls_req %d\n",
-			ctxp->oxid, rc);
-
-	/* We assume a rcv'ed cmd ALWAYs fits into 1 buffer */
-	lpfc_in_buf_free(phba, &nvmebuf->dbuf);
-
-	atomic_inc(&tgtp->xmt_ls_abort);
-	lpfc_nvmet_unsol_ls_issue_abort(phba, ctxp, sid, oxid);
 #endif
+	return 1;
 }
 
 static void
@@ -2368,40 +2307,6 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 }
 
 /**
- * lpfc_nvmet_unsol_ls_event - Process an unsolicited event from an nvme nport
- * @phba: pointer to lpfc hba data structure.
- * @pring: pointer to a SLI ring.
- * @nvmebuf: pointer to received nvme data structure.
- *
- * This routine is used to process an unsolicited event received from a SLI
- * (Service Level Interface) ring. The actual processing of the data buffer
- * associated with the unsolicited event is done by invoking the routine
- * lpfc_nvmet_unsol_ls_buffer() after properly set up the buffer from the
- * SLI RQ on which the unsolicited event was received.
- **/
-void
-lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-			  struct lpfc_iocbq *piocb)
-{
-	struct lpfc_dmabuf *d_buf;
-	struct hbq_dmabuf *nvmebuf;
-
-	d_buf = piocb->context2;
-	nvmebuf = container_of(d_buf, struct hbq_dmabuf, dbuf);
-
-	if (!nvmebuf) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-				"3015 LS Drop IO\n");
-		return;
-	}
-	if (phba->nvmet_support == 0) {
-		lpfc_in_buf_free(phba, &nvmebuf->dbuf);
-		return;
-	}
-	lpfc_nvmet_unsol_ls_buffer(phba, pring, nvmebuf);
-}
-
-/**
  * lpfc_nvmet_unsol_fcp_event - Process an unsolicited event from an nvme nport
  * @phba: pointer to lpfc hba data structure.
  * @idx: relative index of MRQ vector
@@ -3404,8 +3309,16 @@ lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *phba,
 	return 1;
 }
 
-static int
-lpfc_nvmet_unsol_ls_issue_abort(struct lpfc_hba *phba,
+/**
+ * lpfc_nvme_unsol_ls_issue_abort - issue ABTS on an exchange received
+ *        via async frame receive where the frame is not handled.
+ * @phba: pointer to adapter structure
+ * @ctxp: pointer to the asynchronously received sequence
+ * @sid: address of the remote port to send the ABTS to
+ * @xri: oxid value for the ABTS (other side's exchange id).
+ **/
+int
+lpfc_nvme_unsol_ls_issue_abort(struct lpfc_hba *phba,
 				struct lpfc_async_xchg_ctx *ctxp,
 				uint32_t sid, uint16_t xri)
 {
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index 23f034dfd3e2..0d167e200d8f 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -2800,6 +2800,121 @@ lpfc_sli_get_buff(struct lpfc_hba *phba,
 }
 
 /**
+ * lpfc_nvme_unsol_ls_handler - Process an unsolicited event data buffer
+ *                              containing a NVME LS request.
+ * @phba: pointer to lpfc hba data structure.
+ * @piocb: pointer to the iocbq struct representing the sequence starting
+ *        frame.
+ *
+ * This routine initially validates the NVME LS, validates there is a login
+ * with the port that sent the LS, and then calls the appropriate nvme host
+ * or target LS request handler.
+ **/
+static void
+lpfc_nvme_unsol_ls_handler(struct lpfc_hba *phba, struct lpfc_iocbq *piocb)
+{
+	struct lpfc_nodelist *ndlp;
+	struct lpfc_dmabuf *d_buf;
+	struct hbq_dmabuf *nvmebuf;
+	struct fc_frame_header *fc_hdr;
+	struct lpfc_async_xchg_ctx *axchg = NULL;
+	char *failwhy = NULL;
+	uint32_t oxid, sid, did, fctl, size;
+	int ret;
+
+	d_buf = piocb->context2;
+
+	nvmebuf = container_of(d_buf, struct hbq_dmabuf, dbuf);
+	fc_hdr = nvmebuf->hbuf.virt;
+	oxid = be16_to_cpu(fc_hdr->fh_ox_id);
+	sid = sli4_sid_from_fc_hdr(fc_hdr);
+	did = sli4_did_from_fc_hdr(fc_hdr);
+	fctl = (fc_hdr->fh_f_ctl[0] << 16 |
+		fc_hdr->fh_f_ctl[1] << 8 |
+		fc_hdr->fh_f_ctl[2]);
+	size = bf_get(lpfc_rcqe_length, &nvmebuf->cq_event.cqe.rcqe_cmpl);
+
+	lpfc_nvmeio_data(phba, "NVME LS    RCV: xri x%x sz %d from %06x\n",
+			 oxid, size, sid);
+
+	if (phba->pport->load_flag & FC_UNLOADING) {
+		failwhy = "Driver Unloading";
+	} else if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)) {
+		failwhy = "NVME FC4 Disabled";
+	} else if (!phba->nvmet_support && !phba->pport->localport) {
+		failwhy = "No Localport";
+	} else if (phba->nvmet_support && !phba->targetport) {
+		failwhy = "No Targetport";
+	} else if (unlikely(fc_hdr->fh_r_ctl != FC_RCTL_ELS4_REQ)) {
+		failwhy = "Bad NVME LS R_CTL";
+	} else if (unlikely((fctl & 0x00FF0000) !=
+			(FC_FC_FIRST_SEQ | FC_FC_END_SEQ | FC_FC_SEQ_INIT))) {
+		failwhy = "Bad NVME LS F_CTL";
+	} else {
+		axchg = kzalloc(sizeof(*axchg), GFP_ATOMIC);
+		if (!axchg)
+			failwhy = "No CTX memory";
+	}
+
+	if (unlikely(failwhy)) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC | LOG_NVME_IOERR,
+				"6154 Drop NVME LS: SID %06X OXID x%X: %s\n",
+				sid, oxid, failwhy);
+		goto out_fail;
+	}
+
+	/* validate the source of the LS is logged in */
+	ndlp = lpfc_findnode_did(phba->pport, sid);
+	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) ||
+	    ((ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) &&
+	     (ndlp->nlp_state != NLP_STE_MAPPED_NODE))) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC,
+				"6216 NVME Unsol rcv: No ndlp: "
+				"NPort_ID x%x oxid x%x\n",
+				sid, oxid);
+		goto out_fail;
+	}
+
+	axchg->phba = phba;
+	axchg->ndlp = ndlp;
+	axchg->size = size;
+	axchg->oxid = oxid;
+	axchg->sid = sid;
+	axchg->wqeq = NULL;
+	axchg->state = LPFC_NVME_STE_LS_RCV;
+	axchg->entry_cnt = 1;
+	axchg->rqb_buffer = (void *)nvmebuf;
+	axchg->hdwq = &phba->sli4_hba.hdwq[0];
+	axchg->payload = nvmebuf->dbuf.virt;
+	INIT_LIST_HEAD(&axchg->list);
+
+	if (phba->nvmet_support)
+		ret = lpfc_nvmet_handle_lsreq(phba, axchg);
+	else
+		ret = lpfc_nvme_handle_lsreq(phba, axchg);
+
+	/* if zero, LS was successfully handled. If non-zero, LS not handled */
+	if (!ret)
+		return;
+
+	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC | LOG_NVME_IOERR,
+			"6155 Drop NVME LS from DID %06X: SID %06X OXID x%X "
+			"NVMe%s handler failed %d\n",
+			did, sid, oxid,
+			(phba->nvmet_support) ? "T" : "I", ret);
+
+out_fail:
+	kfree(axchg);
+
+	/* recycle receive buffer */
+	lpfc_in_buf_free(phba, &nvmebuf->dbuf);
+
+	/* If start of new exchange, abort it */
+	if (fctl & FC_FC_FIRST_SEQ && !(fctl & FC_FC_EX_CTX))
+		lpfc_nvme_unsol_ls_issue_abort(phba, axchg, sid, oxid);
+}
+
+/**
  * lpfc_complete_unsol_iocb - Complete an unsolicited sequence
  * @phba: Pointer to HBA context object.
  * @pring: Pointer to driver SLI ring object.
@@ -2820,7 +2935,7 @@ lpfc_complete_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 
 	switch (fch_type) {
 	case FC_TYPE_NVME:
-		lpfc_nvmet_unsol_ls_event(phba, pring, saveq);
+		lpfc_nvme_unsol_ls_handler(phba, saveq);
 		return 1;
 	default:
 		break;
@@ -13996,8 +14111,8 @@ lpfc_sli4_nvmet_handle_rcqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
 
 		/* Just some basic sanity checks on FCP Command frame */
 		fctl = (fc_hdr->fh_f_ctl[0] << 16 |
-		fc_hdr->fh_f_ctl[1] << 8 |
-		fc_hdr->fh_f_ctl[2]);
+			fc_hdr->fh_f_ctl[1] << 8 |
+			fc_hdr->fh_f_ctl[2]);
 		if (((fctl &
 		    (FC_FC_FIRST_SEQ | FC_FC_END_SEQ | FC_FC_SEQ_INIT)) !=
 		    (FC_FC_FIRST_SEQ | FC_FC_END_SEQ | FC_FC_SEQ_INIT)) ||
-- 
2.13.7



* [PATCH 24/29] lpfc: Refactor Send LS Request support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (22 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 23/29] lpfc: Refactor NVME LS receive handling James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:20   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 25/29] lpfc: Refactor Send LS Abort support James Smart
                   ` (6 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

Currently, the ability to send an NVME LS request is limited to the nvme
(host) side of the driver.  In preparation for both the nvme and nvmet
sides supporting Send LS Request, rework the existing send ls_req and
ls_req completion routines such that there is common code that can be
used by both sides.
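
A minimal, standalone C sketch of the split - a generic issue routine that
takes the completion handler as a parameter, wrapped by a thin host-side
routine that keeps the statistics; all names below are illustrative
stand-ins, not the lpfc symbols:

/* Hypothetical sketch: a generic send routine parameterized by completion
 * handler, plus a thin host wrapper that maintains its own counters.
 */
#include <stdio.h>

struct ls_req { const char *name; };

typedef void (*ls_done_t)(struct ls_req *req, int status);

/* generic: build and issue the request, arm the supplied completion */
static int __ls_req_send(struct ls_req *req, ls_done_t done)
{
	printf("issue LS %s\n", req->name);
	done(req, 0);			/* pretend the WQE completed cleanly */
	return 0;
}

/* generic completion: common teardown, shared by host and target callers */
static void __ls_req_cmp(struct ls_req *req, int status)
{
	printf("LS %s complete, status %d\n", req->name, status);
}

static int host_ls_reqs, host_ls_errs;	/* stand-ins for lport counters */

/* host-specific completion: update counters, then defer to common code */
static void host_ls_req_cmp(struct ls_req *req, int status)
{
	if (status)
		host_ls_errs++;
	__ls_req_cmp(req, status);
}

static int host_ls_req(struct ls_req *req)
{
	host_ls_reqs++;
	return __ls_req_send(req, host_ls_req_cmp);
}

int main(void)
{
	struct ls_req req = { "Create Association" };
	int rc = host_ls_req(&req);

	printf("reqs %d errs %d\n", host_ls_reqs, host_ls_errs);
	return rc;
}

This mirrors how lpfc_nvme_ls_req() becomes a thin wrapper around
__lpfc_nvme_ls_req() in the diff below.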

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_nvme.c | 289 +++++++++++++++++++++++++-----------------
 drivers/scsi/lpfc/lpfc_nvme.h |  13 ++
 2 files changed, 184 insertions(+), 118 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index daded70ce7b6..e93636986c6f 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -410,43 +410,43 @@ lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
 	return 1;
 }
 
-static void
-lpfc_nvme_cmpl_gen_req(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-		       struct lpfc_wcqe_complete *wcqe)
+/**
+ * __lpfc_nvme_ls_req_cmp - Generic completion handler for a NVME
+ *        LS request.
+ * @phba: Pointer to HBA context object
+ * @vport: The local port that issued the LS
+ * @cmdwqe: Pointer to driver command WQE object.
+ * @wcqe: Pointer to driver response CQE object.
+ *
+ * This function is the generic completion handler for NVME LS requests.
+ * The function updates any states and statistics, calls the transport
+ * ls_req done() routine, then tears down the command and buffers used
+ * for the LS request.
+ **/
+void
+__lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba,  struct lpfc_vport *vport,
+			struct lpfc_iocbq *cmdwqe,
+			struct lpfc_wcqe_complete *wcqe)
 {
-	struct lpfc_vport *vport = cmdwqe->vport;
-	struct lpfc_nvme_lport *lport;
-	uint32_t status;
 	struct nvmefc_ls_req *pnvme_lsreq;
 	struct lpfc_dmabuf *buf_ptr;
 	struct lpfc_nodelist *ndlp;
+	uint32_t status;
 
 	pnvme_lsreq = (struct nvmefc_ls_req *)cmdwqe->context2;
+	ndlp = (struct lpfc_nodelist *)cmdwqe->context1;
 	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
 
-	if (vport->localport) {
-		lport = (struct lpfc_nvme_lport *)vport->localport->private;
-		if (lport) {
-			atomic_inc(&lport->fc4NvmeLsCmpls);
-			if (status) {
-				if (bf_get(lpfc_wcqe_c_xb, wcqe))
-					atomic_inc(&lport->cmpl_ls_xb);
-				atomic_inc(&lport->cmpl_ls_err);
-			}
-		}
-	}
-
-	ndlp = (struct lpfc_nodelist *)cmdwqe->context1;
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
-			 "6047 nvme cmpl Enter "
-			 "Data %px DID %x Xri: %x status %x reason x%x "
-			 "cmd:x%px lsreg:x%px bmp:x%px ndlp:x%px\n",
+			 "6047 NVMEx LS REQ %px cmpl DID %x Xri: %x "
+			 "status %x reason x%x cmd:x%px lsreg:x%px bmp:x%px "
+			 "ndlp:x%px\n",
 			 pnvme_lsreq, ndlp ? ndlp->nlp_DID : 0,
 			 cmdwqe->sli4_xritag, status,
 			 (wcqe->parameter & 0xffff),
 			 cmdwqe, pnvme_lsreq, cmdwqe->context3, ndlp);
 
-	lpfc_nvmeio_data(phba, "NVME LS  CMPL: xri x%x stat x%x parm x%x\n",
+	lpfc_nvmeio_data(phba, "NVMEx LS CMPL: xri x%x stat x%x parm x%x\n",
 			 cmdwqe->sli4_xritag, status, wcqe->parameter);
 
 	if (cmdwqe->context3) {
@@ -459,7 +459,7 @@ lpfc_nvme_cmpl_gen_req(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 		pnvme_lsreq->done(pnvme_lsreq, status);
 	else
 		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
-				 "6046 nvme cmpl without done call back? "
+				 "6046 NVMEx cmpl without done call back? "
 				 "Data %px DID %x Xri: %x status %x\n",
 				pnvme_lsreq, ndlp ? ndlp->nlp_DID : 0,
 				cmdwqe->sli4_xritag, status);
@@ -470,6 +470,31 @@ lpfc_nvme_cmpl_gen_req(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	lpfc_sli_release_iocbq(phba, cmdwqe);
 }
 
+static void
+lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
+		       struct lpfc_wcqe_complete *wcqe)
+{
+	struct lpfc_vport *vport = cmdwqe->vport;
+	struct lpfc_nvme_lport *lport;
+	uint32_t status;
+
+	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
+
+	if (vport->localport) {
+		lport = (struct lpfc_nvme_lport *)vport->localport->private;
+		if (lport) {
+			atomic_inc(&lport->fc4NvmeLsCmpls);
+			if (status) {
+				if (bf_get(lpfc_wcqe_c_xb, wcqe))
+					atomic_inc(&lport->cmpl_ls_xb);
+				atomic_inc(&lport->cmpl_ls_err);
+			}
+		}
+	}
+
+	__lpfc_nvme_ls_req_cmp(phba, vport, cmdwqe, wcqe);
+}
+
 static int
 lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
 		  struct lpfc_dmabuf *inp,
@@ -571,13 +596,6 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
 
 
 	/* Issue GEN REQ WQE for NPORT <did> */
-	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
-			 "6050 Issue GEN REQ WQE to NPORT x%x "
-			 "Data: x%x x%x wq:x%px lsreq:x%px bmp:x%px "
-			 "xmit:%d 1st:%d\n",
-			 ndlp->nlp_DID, genwqe->iotag,
-			 vport->port_state,
-			genwqe, pnvme_lsreq, bmp, xmit_len, first_len);
 	genwqe->wqe_cmpl = cmpl;
 	genwqe->iocb_cmpl = NULL;
 	genwqe->drvrTimeout = tmo + LPFC_DRVR_TIMEOUT;
@@ -589,105 +607,108 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
 
 	rc = lpfc_sli4_issue_wqe(phba, &phba->sli4_hba.hdwq[0], genwqe);
 	if (rc) {
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC | LOG_ELS,
 				 "6045 Issue GEN REQ WQE to NPORT x%x "
-				 "Data: x%x x%x\n",
+				 "Data: x%x x%x  rc x%x\n",
 				 ndlp->nlp_DID, genwqe->iotag,
-				 vport->port_state);
+				 vport->port_state, rc);
 		lpfc_sli_release_iocbq(phba, genwqe);
 		return 1;
 	}
+
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC | LOG_ELS,
+			 "6050 Issue GEN REQ WQE to NPORT x%x "
+			 "Data: oxid: x%x state: x%x wq:x%px lsreq:x%px "
+			 "bmp:x%px xmit:%d 1st:%d\n",
+			 ndlp->nlp_DID, genwqe->sli4_xritag,
+			 vport->port_state,
+			 genwqe, pnvme_lsreq, bmp, xmit_len, first_len);
 	return 0;
 }
 
+
 /**
- * lpfc_nvme_ls_req - Issue an Link Service request
- * @lpfc_pnvme: Pointer to the driver's nvme instance data
- * @lpfc_nvme_lport: Pointer to the driver's local port data
- * @lpfc_nvme_rport: Pointer to the rport getting the @lpfc_nvme_ereq
+ * __lpfc_nvme_ls_req - Generic service routine to issue an NVME LS request
+ * @vport: The local port issuing the LS
+ * @ndlp: The remote port to send the LS to
+ * @pnvme_lsreq: Pointer to LS request structure from the transport
  *
- * Driver registers this routine to handle any link service request
- * from the nvme_fc transport to a remote nvme-aware port.
+ * Routine validates the ndlp, builds buffers and sends a GEN_REQUEST
+ * WQE to perform the LS operation.
  *
  * Return value :
  *   0 - Success
- *   TODO: What are the failure codes.
+ *   non-zero: various error codes, in form of -Exxx
  **/
-static int
-lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
-		 struct nvme_fc_remote_port *pnvme_rport,
-		 struct nvmefc_ls_req *pnvme_lsreq)
+int
+__lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		      struct nvmefc_ls_req *pnvme_lsreq,
+		      void (*gen_req_cmp)(struct lpfc_hba *phba,
+				struct lpfc_iocbq *cmdwqe,
+				struct lpfc_wcqe_complete *wcqe))
 {
-	int ret = 0;
-	struct lpfc_nvme_lport *lport;
-	struct lpfc_nvme_rport *rport;
-	struct lpfc_vport *vport;
-	struct lpfc_nodelist *ndlp;
-	struct ulp_bde64 *bpl;
 	struct lpfc_dmabuf *bmp;
+	struct ulp_bde64 *bpl;
+	int ret;
 	uint16_t ntype, nstate;
 
-	/* there are two dma buf in the request, actually there is one and
-	 * the second one is just the start address + cmd size.
-	 * Before calling lpfc_nvme_gen_req these buffers need to be wrapped
-	 * in a lpfc_dmabuf struct. When freeing we just free the wrapper
-	 * because the nvem layer owns the data bufs.
-	 * We do not have to break these packets open, we don't care what is in
-	 * them. And we do not have to look at the resonse data, we only care
-	 * that we got a response. All of the caring is going to happen in the
-	 * nvme-fc layer.
-	 */
-
-	lport = (struct lpfc_nvme_lport *)pnvme_lport->private;
-	rport = (struct lpfc_nvme_rport *)pnvme_rport->private;
-	if (unlikely(!lport) || unlikely(!rport))
-		return -EINVAL;
-
-	vport = lport->vport;
-
-	if (vport->load_flag & FC_UNLOADING)
-		return -ENODEV;
-
-	/* Need the ndlp.  It is stored in the driver's rport. */
-	ndlp = rport->ndlp;
 	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) {
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE | LOG_NVME_IOERR,
-				 "6051 Remoteport x%px, rport has invalid ndlp. "
-				 "Failing LS Req\n", pnvme_rport);
+		lpfc_printf_vlog(vport, KERN_ERR,
+				 LOG_NVME_DISC | LOG_NODE | LOG_NVME_IOERR,
+				 "6051 NVMEx LS REQ: Bad NDLP x%px, Failing "
+				 "LS Req\n",
+				 ndlp);
 		return -ENODEV;
 	}
 
-	/* The remote node has to be a mapped nvme target or an
-	 * unmapped nvme initiator or it's an error.
-	 */
 	ntype = ndlp->nlp_type;
 	nstate = ndlp->nlp_state;
 	if ((ntype & NLP_NVME_TARGET && nstate != NLP_STE_MAPPED_NODE) ||
 	    (ntype & NLP_NVME_INITIATOR && nstate != NLP_STE_UNMAPPED_NODE)) {
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE | LOG_NVME_IOERR,
-				 "6088 DID x%06x not ready for "
-				 "IO. State x%x, Type x%x\n",
-				 pnvme_rport->port_id,
-				 ndlp->nlp_state, ndlp->nlp_type);
+		lpfc_printf_vlog(vport, KERN_ERR,
+				 LOG_NVME_DISC | LOG_NODE | LOG_NVME_IOERR,
+				 "6088 NVMEx LS REQ: Fail DID x%06x not "
+				 "ready for IO. Type x%x, State x%x\n",
+				 ndlp->nlp_DID, ntype, nstate);
 		return -ENODEV;
 	}
-	bmp = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
+
+	/*
+	 * there are two dma buf in the request, actually there is one and
+	 * the second one is just the start address + cmd size.
+	 * Before calling lpfc_nvme_gen_req these buffers need to be wrapped
+	 * in a lpfc_dmabuf struct. When freeing we just free the wrapper
+	 * because the nvme layer owns the data bufs.
+	 * We do not have to break these packets open, we don't care what is
+	 * in them. And we do not have to look at the response data, we only
+	 * care that we got a response. All of the caring is going to happen
+	 * in the nvme-fc layer.
+	 */
+
+	bmp = kmalloc(sizeof(*bmp), GFP_KERNEL);
 	if (!bmp) {
 
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
-				 "6044 Could not find node for DID %x\n",
-				 pnvme_rport->port_id);
-		return 2;
+		lpfc_printf_vlog(vport, KERN_ERR,
+				 LOG_NVME_DISC | LOG_NVME_IOERR,
+				 "6044 NVMEx LS REQ: Could not alloc LS buf "
+				 "for DID %x\n",
+				 ndlp->nlp_DID);
+		return -ENOMEM;
 	}
-	INIT_LIST_HEAD(&bmp->list);
+
 	bmp->virt = lpfc_mbuf_alloc(vport->phba, MEM_PRI, &(bmp->phys));
 	if (!bmp->virt) {
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
-				 "6042 Could not find node for DID %x\n",
-				 pnvme_rport->port_id);
+		lpfc_printf_vlog(vport, KERN_ERR,
+				 LOG_NVME_DISC | LOG_NVME_IOERR,
+				 "6042 NVMEx LS REQ: Could not alloc mbuf "
+				 "for DID %x\n",
+				 ndlp->nlp_DID);
 		kfree(bmp);
-		return 3;
+		return -ENOMEM;
 	}
+
+	INIT_LIST_HEAD(&bmp->list);
+
 	bpl = (struct ulp_bde64 *)bmp->virt;
 	bpl->addrHigh = le32_to_cpu(putPaddrHigh(pnvme_lsreq->rqstdma));
 	bpl->addrLow = le32_to_cpu(putPaddrLow(pnvme_lsreq->rqstdma));
@@ -702,37 +723,69 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
 	bpl->tus.f.bdeSize = pnvme_lsreq->rsplen;
 	bpl->tus.w = le32_to_cpu(bpl->tus.w);
 
-	/* Expand print to include key fields. */
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
-			 "6149 Issue LS Req to DID 0x%06x lport x%px, "
-			 "rport x%px lsreq x%px rqstlen:%d rsplen:%d "
-			 "%pad %pad\n",
-			 ndlp->nlp_DID, pnvme_lport, pnvme_rport,
-			 pnvme_lsreq, pnvme_lsreq->rqstlen,
-			 pnvme_lsreq->rsplen, &pnvme_lsreq->rqstdma,
-			 &pnvme_lsreq->rspdma);
-
-	atomic_inc(&lport->fc4NvmeLsRequests);
+			"6149 NVMEx LS REQ: Issue to DID 0x%06x lsreq x%px, "
+			"rqstlen:%d rsplen:%d %pad %pad\n",
+			ndlp->nlp_DID, pnvme_lsreq, pnvme_lsreq->rqstlen,
+			pnvme_lsreq->rsplen, &pnvme_lsreq->rqstdma,
+			&pnvme_lsreq->rspdma);
 
-	/* Hardcode the wait to 30 seconds.  Connections are failing otherwise.
-	 * This code allows it all to work.
-	 */
 	ret = lpfc_nvme_gen_req(vport, bmp, pnvme_lsreq->rqstaddr,
-				pnvme_lsreq, lpfc_nvme_cmpl_gen_req,
-				ndlp, 2, 30, 0);
+				pnvme_lsreq, gen_req_cmp, ndlp, 2,
+				LPFC_NVME_LS_TIMEOUT, 0);
 	if (ret != WQE_SUCCESS) {
-		atomic_inc(&lport->xmt_ls_err);
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
-				 "6052 EXIT. issue ls wqe failed lport x%px, "
-				 "rport x%px lsreq x%px Status %x DID %x\n",
-				 pnvme_lport, pnvme_rport, pnvme_lsreq,
-				 ret, ndlp->nlp_DID);
+		lpfc_printf_vlog(vport, KERN_ERR,
+				 LOG_NVME_DISC | LOG_NVME_IOERR,
+				 "6052 NVMEx REQ: EXIT. issue ls wqe failed "
+				 "lsreq x%px Status %x DID %x\n",
+				 pnvme_lsreq, ret, ndlp->nlp_DID);
 		lpfc_mbuf_free(vport->phba, bmp->virt, bmp->phys);
 		kfree(bmp);
-		return ret;
+		return -EIO;
 	}
 
-	/* Stub in routine and return 0 for now. */
+	return 0;
+}
+
+/**
+ * lpfc_nvme_ls_req - Issue an NVME Link Service request
+ * @lpfc_nvme_lport: Transport localport that LS is to be issued from.
+ * @lpfc_nvme_rport: Transport remoteport that LS is to be sent to.
+ * @pnvme_lsreq - the transport nvme_ls_req structure for the LS
+ *
+ * Driver registers this routine to handle any link service request
+ * from the nvme_fc transport to a remote nvme-aware port.
+ *
+ * Return value :
+ *   0 - Success
+ *   non-zero: various error codes, in form of -Exxx
+ **/
+static int
+lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
+		 struct nvme_fc_remote_port *pnvme_rport,
+		 struct nvmefc_ls_req *pnvme_lsreq)
+{
+	struct lpfc_nvme_lport *lport;
+	struct lpfc_nvme_rport *rport;
+	struct lpfc_vport *vport;
+	int ret;
+
+	lport = (struct lpfc_nvme_lport *)pnvme_lport->private;
+	rport = (struct lpfc_nvme_rport *)pnvme_rport->private;
+	if (unlikely(!lport) || unlikely(!rport))
+		return -EINVAL;
+
+	vport = lport->vport;
+	if (vport->load_flag & FC_UNLOADING)
+		return -ENODEV;
+
+	atomic_inc(&lport->fc4NvmeLsRequests);
+
+	ret = __lpfc_nvme_ls_req(vport, rport->ndlp, pnvme_lsreq,
+				 lpfc_nvme_ls_req_cmp);
+	if (ret)
+		atomic_inc(&lport->xmt_ls_err);
+
 	return ret;
 }
 
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 7525b12b06c8..65df27bbb7bb 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -80,6 +80,12 @@ struct lpfc_nvme_fcpreq_priv {
 	struct lpfc_io_buf *nvme_buf;
 };
 
+/*
+ * set NVME LS request timeouts to 30s. It is larger than the 2*R_A_TOV
+ * set by the spec, which appears to have issues with some devices.
+ */
+#define LPFC_NVME_LS_TIMEOUT		30
+
 
 #define LPFC_NVMET_DEFAULT_SEGS		(64 + 1)	/* 256K IOs */
 #define LPFC_NVMET_RQE_MIN_POST		128
@@ -225,6 +231,13 @@ struct lpfc_async_xchg_ctx {
 
 
 /* routines found in lpfc_nvme.c */
+int __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		struct nvmefc_ls_req *pnvme_lsreq,
+		void (*gen_req_cmp)(struct lpfc_hba *phba,
+				struct lpfc_iocbq *cmdwqe,
+				struct lpfc_wcqe_complete *wcqe));
+void __lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba,  struct lpfc_vport *vport,
+		struct lpfc_iocbq *cmdwqe, struct lpfc_wcqe_complete *wcqe);
 
 /* routines found in lpfc_nvmet.c */
 int lpfc_nvme_unsol_ls_issue_abort(struct lpfc_hba *phba,
-- 
2.13.7



* [PATCH 25/29] lpfc: Refactor Send LS Abort support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (23 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 24/29] lpfc: Refactor Send LS Request support James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:21   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 26/29] lpfc: Refactor Send LS Response support James Smart
                   ` (5 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

Send LS Abort support is needed when Send LS Request is supported.

Currently, the ability to abort an NVME LS request is limited to the nvme
(host) side of the driver.  In preparation for both the nvme and nvmet
sides supporting Send LS Abort, rework the existing ls_req abort routines
such that there is common code that can be used by both sides.

While refactoring, it was seen that the logic in the abort routine was
incorrect: it attempted to abort all NVME LS's on the indicated port. The
routine was reworked to abort only the NVME LS request that was specified.
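
A small standalone C sketch of the corrected behavior - match the one
pending entry that carries the given LS request and abort only it; the
queue, types, and names are simplified stand-ins for illustration only:

/* Hypothetical sketch of aborting only the matching LS: walk the pending
 * queue, find the entry carrying the given request, and abort just it.
 */
#include <stdbool.h>
#include <stdio.h>

struct pending_wqe {
	const void *ls_req;		/* transport request this WQE carries */
	bool aborted;
};

static int ls_abort(struct pending_wqe *q, int n, const void *ls_req)
{
	for (int i = 0; i < n; i++) {
		if (q[i].ls_req == ls_req) {	/* abort this LS only */
			q[i].aborted = true;
			printf("aborted entry %d\n", i);
			return 0;
		}
	}
	printf("LS request not found\n");
	return 1;
}

int main(void)
{
	int a, b;
	struct pending_wqe q[2] = { { &a, false }, { &b, false } };

	return ls_abort(q, 2, &b);	/* only q[1] is marked aborted */
}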

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_nvme.c | 125 +++++++++++++++++++++++++-----------------
 drivers/scsi/lpfc/lpfc_nvme.h |   2 +
 2 files changed, 77 insertions(+), 50 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index e93636986c6f..c6082c65d902 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -790,83 +790,108 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
 }
 
 /**
- * lpfc_nvme_ls_abort - Issue an Link Service request
- * @lpfc_pnvme: Pointer to the driver's nvme instance data
- * @lpfc_nvme_lport: Pointer to the driver's local port data
- * @lpfc_nvme_rport: Pointer to the rport getting the @lpfc_nvme_ereq
+ * __lpfc_nvme_ls_abort - Generic service routine to abort a prior
+ *         NVME LS request
+ * @vport: The local port that issued the LS
+ * @ndlp: The remote port the LS was sent to
+ * @pnvme_lsreq: Pointer to LS request structure from the transport
  *
- * Driver registers this routine to handle any link service request
- * from the nvme_fc transport to a remote nvme-aware port.
+ * The driver validates the ndlp, looks for the LS, and aborts the
+ * LS if found.
  *
- * Return value :
- *   0 - Success
- *   TODO: What are the failure codes.
+ * Returns:
+ * 0 : if LS found and aborted
+ * non-zero: various error conditions in form -Exxx
  **/
-static void
-lpfc_nvme_ls_abort(struct nvme_fc_local_port *pnvme_lport,
-		   struct nvme_fc_remote_port *pnvme_rport,
-		   struct nvmefc_ls_req *pnvme_lsreq)
+int
+__lpfc_nvme_ls_abort(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			struct nvmefc_ls_req *pnvme_lsreq)
 {
-	struct lpfc_nvme_lport *lport;
-	struct lpfc_vport *vport;
-	struct lpfc_hba *phba;
-	struct lpfc_nodelist *ndlp;
-	LIST_HEAD(abort_list);
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_iocbq *wqe, *next_wqe;
+	bool foundit = false;
 
-	lport = (struct lpfc_nvme_lport *)pnvme_lport->private;
-	if (unlikely(!lport))
-		return;
-	vport = lport->vport;
-	phba = vport->phba;
-
-	if (vport->load_flag & FC_UNLOADING)
-		return;
-
-	ndlp = lpfc_findnode_did(vport, pnvme_rport->port_id);
 	if (!ndlp) {
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_ABTS,
-				 "6049 Could not find node for DID %x\n",
-				 pnvme_rport->port_id);
-		return;
+		lpfc_printf_log(phba, KERN_ERR,
+				LOG_NVME_DISC | LOG_NODE |
+					LOG_NVME_IOERR | LOG_NVME_ABTS,
+				"6049 NVMEx LS REQ Abort: Bad NDLP x%px DID "
+				"x%06x, Failing LS Req\n",
+				ndlp, ndlp ? ndlp->nlp_DID : 0);
+		return -EINVAL;
 	}
 
-	/* Expand print to include key fields. */
-	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_ABTS,
-			 "6040 ENTER.  lport x%px, rport x%px lsreq x%px rqstlen:%d "
-			 "rsplen:%d %pad %pad\n",
-			 pnvme_lport, pnvme_rport,
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC | LOG_NVME_ABTS,
+			 "6040 NVMEx LS REQ Abort: Issue LS_ABORT for lsreq "
+			 "x%p rqstlen:%d rsplen:%d %pad %pad\n",
 			 pnvme_lsreq, pnvme_lsreq->rqstlen,
 			 pnvme_lsreq->rsplen, &pnvme_lsreq->rqstdma,
 			 &pnvme_lsreq->rspdma);
 
 	/*
-	 * Lock the ELS ring txcmplq and build a local list of all ELS IOs
-	 * that need an ABTS.  The IOs need to stay on the txcmplq so that
-	 * the abort operation completes them successfully.
+	 * Lock the ELS ring txcmplq and look for the wqe that matches
+	 * this ELS. If found, issue an abort on the wqe.
 	 */
 	pring = phba->sli4_hba.nvmels_wq->pring;
 	spin_lock_irq(&phba->hbalock);
 	spin_lock(&pring->ring_lock);
 	list_for_each_entry_safe(wqe, next_wqe, &pring->txcmplq, list) {
-		/* Add to abort_list on on NDLP match. */
-		if (lpfc_check_sli_ndlp(phba, pring, wqe, ndlp)) {
+		if (wqe->context2 == pnvme_lsreq) {
 			wqe->iocb_flag |= LPFC_DRIVER_ABORTED;
-			list_add_tail(&wqe->dlist, &abort_list);
+			foundit = true;
+			break;
 		}
 	}
 	spin_unlock(&pring->ring_lock);
+
+	if (foundit)
+		lpfc_sli_issue_abort_iotag(phba, pring, wqe);
 	spin_unlock_irq(&phba->hbalock);
 
-	/* Abort the targeted IOs and remove them from the abort list. */
-	list_for_each_entry_safe(wqe, next_wqe, &abort_list, dlist) {
+	if (foundit)
+		return 0;
+
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC | LOG_NVME_ABTS,
+			 "6213 NVMEx LS REQ Abort: Unable to locate req x%p\n",
+			 pnvme_lsreq);
+	return 1;
+}
+
+/**
+ * lpfc_nvme_ls_abort - Abort a prior NVME LS request
+ * @lpfc_nvme_lport: Transport localport that LS is to be issued from.
+ * @lpfc_nvme_rport: Transport remoteport that LS is to be sent to.
+ * @pnvme_lsreq - the transport nvme_ls_req structure for the LS
+ *
+ * Driver registers this routine to abort a NVME LS request that is
+ * in progress (from the transport's perspective).
+ **/
+static void
+lpfc_nvme_ls_abort(struct nvme_fc_local_port *pnvme_lport,
+		   struct nvme_fc_remote_port *pnvme_rport,
+		   struct nvmefc_ls_req *pnvme_lsreq)
+{
+	struct lpfc_nvme_lport *lport;
+	struct lpfc_vport *vport;
+	struct lpfc_hba *phba;
+	struct lpfc_nodelist *ndlp;
+	int ret;
+
+	lport = (struct lpfc_nvme_lport *)pnvme_lport->private;
+	if (unlikely(!lport))
+		return;
+	vport = lport->vport;
+	phba = vport->phba;
+
+	if (vport->load_flag & FC_UNLOADING)
+		return;
+
+	ndlp = lpfc_findnode_did(vport, pnvme_rport->port_id);
+
+	ret = __lpfc_nvme_ls_abort(vport, ndlp, pnvme_lsreq);
+	if (!ret)
 		atomic_inc(&lport->xmt_ls_abort);
-		spin_lock_irq(&phba->hbalock);
-		list_del_init(&wqe->dlist);
-		lpfc_sli_issue_abort_iotag(phba, pring, wqe);
-		spin_unlock_irq(&phba->hbalock);
-	}
 }
 
 /* Fix up the existing sgls for NVME IO. */
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 65df27bbb7bb..3ebcf885cac5 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -238,6 +238,8 @@ int __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 				struct lpfc_wcqe_complete *wcqe));
 void __lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba,  struct lpfc_vport *vport,
 		struct lpfc_iocbq *cmdwqe, struct lpfc_wcqe_complete *wcqe);
+int __lpfc_nvme_ls_abort(struct lpfc_vport *vport,
+		struct lpfc_nodelist *ndlp, struct nvmefc_ls_req *pnvme_lsreq);
 
 /* routines found in lpfc_nvmet.c */
 int lpfc_nvme_unsol_ls_issue_abort(struct lpfc_hba *phba,
-- 
2.13.7



* [PATCH 26/29] lpfc: Refactor Send LS Response support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (24 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 25/29] lpfc: Refactor Send LS Abort support James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:21   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 27/29] lpfc: nvme: Add Receive LS Request and Send LS Response support to nvme James Smart
                   ` (4 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

Currently, the ability to send an NVME LS response is limited to the nvmet
(controller/target) side of the driver.  In preparation for both the nvme
and nvmet sides supporting Send LS Response, rework the existing send
ls_rsp and ls_rsp completion routines such that there is common code that
can be used by both sides.
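
A minimal, standalone C sketch of the same shape for responses - a generic
transmit routine that checks the exchange state and takes the completion
handler as a parameter, with a thin nvmet wrapper that only updates its
counters; the names and types are illustrative stand-ins, not the lpfc
definitions:

/* Hypothetical sketch of a common response-transmit helper that checks
 * the exchange state and is parameterized by completion handler.
 */
#include <stdio.h>

enum ls_state { LS_RCV = 1, LS_RSP = 2 };

struct ls_rsp_ctx {
	enum ls_state state;
	unsigned int oxid;
};

typedef void (*rsp_done_t)(struct ls_rsp_ctx *ctx, int status);

static int __xmt_ls_rsp(struct ls_rsp_ctx *ctx, rsp_done_t done)
{
	if (ctx->state != LS_RCV)
		return -1;		/* already responded, like -EALREADY */
	ctx->state = LS_RSP;
	done(ctx, 0);			/* pretend WQE issue and completion */
	return 0;
}

static int tgt_rsp_ok, tgt_rsp_drop;	/* stand-ins for tgtport counters */

static void tgt_xmt_ls_rsp_cmp(struct ls_rsp_ctx *ctx, int status)
{
	printf("rsp cmpl oxid x%x status %d\n", ctx->oxid, status);
}

static int tgt_xmt_ls_rsp(struct ls_rsp_ctx *ctx)
{
	int rc = __xmt_ls_rsp(ctx, tgt_xmt_ls_rsp_cmp);

	if (rc)
		tgt_rsp_drop++;
	else
		tgt_rsp_ok++;
	return rc;
}

int main(void)
{
	struct ls_rsp_ctx ctx = { LS_RCV, 0x10 };

	tgt_xmt_ls_rsp(&ctx);		/* first response is sent */
	tgt_xmt_ls_rsp(&ctx);		/* second attempt is rejected */
	printf("ok %d drop %d\n", tgt_rsp_ok, tgt_rsp_drop);
	return 0;
}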

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_nvme.h  |   7 ++
 drivers/scsi/lpfc/lpfc_nvmet.c | 255 ++++++++++++++++++++++++++++-------------
 2 files changed, 184 insertions(+), 78 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 3ebcf885cac5..2ce29dfeedda 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -245,3 +245,10 @@ int __lpfc_nvme_ls_abort(struct lpfc_vport *vport,
 int lpfc_nvme_unsol_ls_issue_abort(struct lpfc_hba *phba,
 			struct lpfc_async_xchg_ctx *ctxp, uint32_t sid,
 			uint16_t xri);
+int __lpfc_nvme_xmt_ls_rsp(struct lpfc_async_xchg_ctx *axchg,
+			struct nvmefc_ls_rsp *ls_rsp,
+			void (*xmt_ls_rsp_cmp)(struct lpfc_hba *phba,
+				struct lpfc_iocbq *cmdwqe,
+				struct lpfc_wcqe_complete *wcqe));
+void __lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba,
+		struct lpfc_iocbq *cmdwqe, struct lpfc_wcqe_complete *wcqe);
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index e6895c719683..edec7c3ffab1 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -281,6 +281,53 @@ lpfc_nvmet_defer_release(struct lpfc_hba *phba,
 }
 
 /**
+ * __lpfc_nvme_xmt_ls_rsp_cmp - Generic completion handler for the
+ *         transmission of an NVME LS response.
+ * @phba: Pointer to HBA context object.
+ * @cmdwqe: Pointer to driver command WQE object.
+ * @wcqe: Pointer to driver response CQE object.
+ *
+ * The function is called from SLI ring event handler with no
+ * lock held. The function frees memory resources used for the command
+ * used to send the NVME LS RSP.
+ **/
+void
+__lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
+			   struct lpfc_wcqe_complete *wcqe)
+{
+	struct lpfc_async_xchg_ctx *axchg = cmdwqe->context2;
+	struct nvmefc_ls_rsp *ls_rsp = &axchg->ls_rsp;
+	uint32_t status, result;
+
+	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
+	result = wcqe->parameter;
+
+	if (axchg->state != LPFC_NVME_STE_LS_RSP || axchg->entry_cnt != 2) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC | LOG_NVME_IOERR,
+				"6410 NVMEx LS cmpl state mismatch IO x%x: "
+				"%d %d\n",
+				axchg->oxid, axchg->state, axchg->entry_cnt);
+	}
+
+	lpfc_nvmeio_data(phba, "NVMEx LS  CMPL: xri x%x stat x%x result x%x\n",
+			 axchg->oxid, status, result);
+
+	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
+			"6038 NVMEx LS rsp cmpl: %d %d oxid x%x\n",
+			status, result, axchg->oxid);
+
+	lpfc_nlp_put(cmdwqe->context1);
+	cmdwqe->context2 = NULL;
+	cmdwqe->context3 = NULL;
+	lpfc_sli_release_iocbq(phba, cmdwqe);
+	ls_rsp->done(ls_rsp);
+	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
+			"6200 NVMEx LS rsp cmpl done status %d oxid x%x\n",
+			status, axchg->oxid);
+	kfree(axchg);
+}
+
+/**
  * lpfc_nvmet_xmt_ls_rsp_cmp - Completion handler for LS Response
  * @phba: Pointer to HBA context object.
  * @cmdwqe: Pointer to driver command WQE object.
@@ -288,33 +335,23 @@ lpfc_nvmet_defer_release(struct lpfc_hba *phba,
  *
  * The function is called from SLI ring event handler with no
  * lock held. This function is the completion handler for NVME LS commands
- * The function frees memory resources used for the NVME commands.
+ * The function updates any states and statistics, then calls the
+ * generic completion handler to free resources.
  **/
 static void
 lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 			  struct lpfc_wcqe_complete *wcqe)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
-	struct nvmefc_ls_rsp *rsp;
-	struct lpfc_async_xchg_ctx *ctxp;
 	uint32_t status, result;
 
-	status = bf_get(lpfc_wcqe_c_status, wcqe);
-	result = wcqe->parameter;
-	ctxp = cmdwqe->context2;
-
-	if (ctxp->state != LPFC_NVME_STE_LS_RSP || ctxp->entry_cnt != 2) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-				"6410 NVMET LS cmpl state mismatch IO x%x: "
-				"%d %d\n",
-				ctxp->oxid, ctxp->state, ctxp->entry_cnt);
-	}
-
 	if (!phba->targetport)
-		goto out;
+		goto finish;
 
-	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
+	result = wcqe->parameter;
 
+	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
 	if (tgtp) {
 		if (status) {
 			atomic_inc(&tgtp->xmt_ls_rsp_error);
@@ -327,22 +364,8 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 		}
 	}
 
-out:
-	rsp = &ctxp->ls_rsp;
-
-	lpfc_nvmeio_data(phba, "NVMET LS  CMPL: xri x%x stat x%x result x%x\n",
-			 ctxp->oxid, status, result);
-
-	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
-			"6038 NVMET LS rsp cmpl: %d %d oxid x%x\n",
-			status, result, ctxp->oxid);
-
-	lpfc_nlp_put(cmdwqe->context1);
-	cmdwqe->context2 = NULL;
-	cmdwqe->context3 = NULL;
-	lpfc_sli_release_iocbq(phba, cmdwqe);
-	rsp->done(rsp);
-	kfree(ctxp);
+finish:
+	__lpfc_nvme_xmt_ls_rsp_cmp(phba, cmdwqe, wcqe);
 }
 
 /**
@@ -821,17 +844,32 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 #endif
 }
 
-static int
-lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
-		      struct nvmefc_ls_rsp *rsp)
+/**
+ * __lpfc_nvme_xmt_ls_rsp - Generic service routine to transmit
+ *         an NVME LS rsp for a prior NVME LS request that was received.
+ * @axchg: pointer to exchange context for the NVME LS request the response
+ *         is for.
+ * @ls_rsp: pointer to the transport LS RSP that is to be sent
+ * @xmt_ls_rsp_cmp: completion routine to call upon RSP transmit done
+ *
+ * This routine is used to format and send a WQE to transmit a NVME LS
+ * Response.  The response is for a prior NVME LS request that was
+ * received and posted to the transport.
+ *
+ * Returns:
+ *  0 : if response successfully transmit
+ *  non-zero : if response failed to transmit, of the form -Exxx.
+ **/
+int
+__lpfc_nvme_xmt_ls_rsp(struct lpfc_async_xchg_ctx *axchg,
+			struct nvmefc_ls_rsp *ls_rsp,
+			void (*xmt_ls_rsp_cmp)(struct lpfc_hba *phba,
+				struct lpfc_iocbq *cmdwqe,
+				struct lpfc_wcqe_complete *wcqe))
 {
-	struct lpfc_async_xchg_ctx *ctxp =
-		container_of(rsp, struct lpfc_async_xchg_ctx, ls_rsp);
-	struct lpfc_hba *phba = ctxp->phba;
-	struct hbq_dmabuf *nvmebuf =
-		(struct hbq_dmabuf *)ctxp->rqb_buffer;
+	struct lpfc_hba *phba = axchg->phba;
+	struct hbq_dmabuf *nvmebuf = (struct hbq_dmabuf *)axchg->rqb_buffer;
 	struct lpfc_iocbq *nvmewqeq;
-	struct lpfc_nvmet_tgtport *nvmep = tgtport->private;
 	struct lpfc_dmabuf dmabuf;
 	struct ulp_bde64 bpl;
 	int rc;
@@ -839,34 +877,28 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
 	if (phba->pport->load_flag & FC_UNLOADING)
 		return -ENODEV;
 
-	if (phba->pport->load_flag & FC_UNLOADING)
-		return -ENODEV;
-
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
-			"6023 NVMET LS rsp oxid x%x\n", ctxp->oxid);
+			"6023 NVMEx LS rsp oxid x%x\n", axchg->oxid);
 
-	if ((ctxp->state != LPFC_NVME_STE_LS_RCV) ||
-	    (ctxp->entry_cnt != 1)) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-				"6412 NVMET LS rsp state mismatch "
+	if (axchg->state != LPFC_NVME_STE_LS_RCV || axchg->entry_cnt != 1) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC | LOG_NVME_IOERR,
+				"6412 NVMEx LS rsp state mismatch "
 				"oxid x%x: %d %d\n",
-				ctxp->oxid, ctxp->state, ctxp->entry_cnt);
+				axchg->oxid, axchg->state, axchg->entry_cnt);
+		return -EALREADY;
 	}
-	ctxp->state = LPFC_NVME_STE_LS_RSP;
-	ctxp->entry_cnt++;
+	axchg->state = LPFC_NVME_STE_LS_RSP;
+	axchg->entry_cnt++;
 
-	nvmewqeq = lpfc_nvmet_prep_ls_wqe(phba, ctxp, rsp->rspdma,
-				      rsp->rsplen);
+	nvmewqeq = lpfc_nvmet_prep_ls_wqe(phba, axchg, ls_rsp->rspdma,
+					 ls_rsp->rsplen);
 	if (nvmewqeq == NULL) {
-		atomic_inc(&nvmep->xmt_ls_drop);
-		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-				"6150 LS Drop IO x%x: Prep\n",
-				ctxp->oxid);
-		lpfc_in_buf_free(phba, &nvmebuf->dbuf);
-		atomic_inc(&nvmep->xmt_ls_abort);
-		lpfc_nvme_unsol_ls_issue_abort(phba, ctxp,
-						ctxp->sid, ctxp->oxid);
-		return -ENOMEM;
+		lpfc_printf_log(phba, KERN_ERR,
+				LOG_NVME_DISC | LOG_NVME_IOERR | LOG_NVME_ABTS,
+				"6150 NVMEx LS Drop Rsp x%x: Prep\n",
+				axchg->oxid);
+		rc = -ENOMEM;
+		goto out_free_buf;
 	}
 
 	/* Save numBdes for bpl2sgl */
@@ -876,39 +908,106 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
 	dmabuf.virt = &bpl;
 	bpl.addrLow = nvmewqeq->wqe.xmit_sequence.bde.addrLow;
 	bpl.addrHigh = nvmewqeq->wqe.xmit_sequence.bde.addrHigh;
-	bpl.tus.f.bdeSize = rsp->rsplen;
+	bpl.tus.f.bdeSize = ls_rsp->rsplen;
 	bpl.tus.f.bdeFlags = 0;
 	bpl.tus.w = le32_to_cpu(bpl.tus.w);
+	/*
+	 * Note: although we're using stack space for the dmabuf, the
+	 * call to lpfc_sli4_issue_wqe is synchronous, so it will not
+	 * be referenced after it returns back to this routine.
+	 */
 
-	nvmewqeq->wqe_cmpl = lpfc_nvmet_xmt_ls_rsp_cmp;
+	nvmewqeq->wqe_cmpl = xmt_ls_rsp_cmp;
 	nvmewqeq->iocb_cmpl = NULL;
-	nvmewqeq->context2 = ctxp;
+	nvmewqeq->context2 = axchg;
 
-	lpfc_nvmeio_data(phba, "NVMET LS  RESP: xri x%x wqidx x%x len x%x\n",
-			 ctxp->oxid, nvmewqeq->hba_wqidx, rsp->rsplen);
+	lpfc_nvmeio_data(phba, "NVMEx LS RSP: xri x%x wqidx x%x len x%x\n",
+			 axchg->oxid, nvmewqeq->hba_wqidx, ls_rsp->rsplen);
+
+	rc = lpfc_sli4_issue_wqe(phba, axchg->hdwq, nvmewqeq);
+
+	/* clear to be sure there's no reference */
+	nvmewqeq->context3 = NULL;
 
-	rc = lpfc_sli4_issue_wqe(phba, ctxp->hdwq, nvmewqeq);
 	if (rc == WQE_SUCCESS) {
 		/*
 		 * Okay to repost buffer here, but wait till cmpl
 		 * before freeing ctxp and iocbq.
 		 */
 		lpfc_in_buf_free(phba, &nvmebuf->dbuf);
-		atomic_inc(&nvmep->xmt_ls_rsp);
 		return 0;
 	}
-	/* Give back resources */
-	atomic_inc(&nvmep->xmt_ls_drop);
-	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
-			"6151 LS Drop IO x%x: Issue %d\n",
-			ctxp->oxid, rc);
+
+	lpfc_printf_log(phba, KERN_ERR,
+			LOG_NVME_DISC | LOG_NVME_IOERR | LOG_NVME_ABTS,
+			"6151 NVMEx LS RSP x%x: failed to transmit %d\n",
+			axchg->oxid, rc);
+
+	rc = -ENXIO;
 
 	lpfc_nlp_put(nvmewqeq->context1);
 
+out_free_buf:
+	/* Give back resources */
 	lpfc_in_buf_free(phba, &nvmebuf->dbuf);
-	atomic_inc(&nvmep->xmt_ls_abort);
-	lpfc_nvme_unsol_ls_issue_abort(phba, ctxp, ctxp->sid, ctxp->oxid);
-	return -ENXIO;
+
+	/*
+	 * As transport doesn't track completions of responses, if the rsp
+	 * fails to send, the transport will effectively ignore the rsp
+	 * and consider the LS done. However, the driver has an active
+	 * exchange open for the LS - so be sure to abort the exchange
+	 * if the response isn't sent.
+	 */
+	lpfc_nvme_unsol_ls_issue_abort(phba, axchg, axchg->sid, axchg->oxid);
+	return rc;
+}
+
+/**
+ * lpfc_nvmet_xmt_ls_rsp - Transmit NVME LS response
+ * @tgtport: pointer to target port that NVME LS is to be transmit from.
+ * @ls_rsp: pointer to the transport LS RSP that is to be sent
+ *
+ * Driver registers this routine to transmit responses for received NVME
+ * LS requests.
+ *
+ * This routine is used to format and send a WQE to transmit a NVME LS
+ * Response. The ls_rsp is used to reverse-map the LS to the original
+ * NVME LS request sequence, which provides addressing information for
+ * the remote port the LS is to be sent to, as well as the exchange id
+ * that the LS is bound to.
+ *
+ * Returns:
+ *  0 : if response successfully transmit
+ *  non-zero : if response failed to transmit, of the form -Exxx.
+ **/
+static int
+lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
+		      struct nvmefc_ls_rsp *ls_rsp)
+{
+	struct lpfc_async_xchg_ctx *axchg =
+		container_of(ls_rsp, struct lpfc_async_xchg_ctx, ls_rsp);
+	struct lpfc_nvmet_tgtport *nvmep = tgtport->private;
+	int rc;
+
+	if (axchg->phba->pport->load_flag & FC_UNLOADING)
+		return -ENODEV;
+
+	rc = __lpfc_nvme_xmt_ls_rsp(axchg, ls_rsp, lpfc_nvmet_xmt_ls_rsp_cmp);
+
+	if (rc) {
+		atomic_inc(&nvmep->xmt_ls_drop);
+		/*
+		 * unless the failure is due to having already sent
+		 * the response, an abort will be generated for the
+		 * exchange if the rsp can't be sent.
+		 */
+		if (rc != -EALREADY)
+			atomic_inc(&nvmep->xmt_ls_abort);
+		return rc;
+	}
+
+	atomic_inc(&nvmep->xmt_ls_rsp);
+	return 0;
 }
 
 static int
-- 
2.13.7



* [PATCH 27/29] lpfc: nvme: Add Receive LS Request and Send LS Response support to nvme
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (25 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 26/29] lpfc: Refactor Send LS Response support James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:23   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 28/29] lpfc: nvmet: Add support for NVME LS request hosthandle James Smart
                   ` (3 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

Now that common helpers exist, add the ability to receive NVME LS requests
to the driver. New requests will be delivered to the transport by
nvme_fc_rcv_ls_req().

In order to complete the LS, add support for Send LS Response and send
LS response completion handling to the driver.
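
A minimal, standalone C sketch of the host-side receive handler this patch
fills in - validate the port bindings, count the request, hand it to the
transport, and count a drop on failure; transport_rcv_ls() and the other
names below are stand-ins invented for illustration, not the real
nvme_fc_rcv_ls_req() signature:

/* Hypothetical, self-contained sketch of the host-side LS receive path. */
#include <stdio.h>

struct lport_stats { int rcv_in, rcv_out, rcv_drop; };

static int transport_rcv_ls(const void *payload, unsigned int size)
{
	return size ? 0 : -1;		/* pretend the transport accepted it */
}

static int handle_lsreq(struct lport_stats *lport, int have_rport,
			const void *payload, unsigned int size)
{
	int rc;

	if (!lport || !have_rport)
		return 1;		/* no binding: caller drops the LS */

	lport->rcv_in++;
	rc = transport_rcv_ls(payload, size);
	if (!rc) {
		lport->rcv_out++;
		return 0;
	}
	lport->rcv_drop++;
	return 1;
}

int main(void)
{
	struct lport_stats stats = { 0, 0, 0 };
	unsigned int ls[6] = { 0 };

	handle_lsreq(&stats, 1, ls, sizeof(ls));
	printf("in %d out %d drop %d\n",
	       stats.rcv_in, stats.rcv_out, stats.rcv_drop);
	return 0;
}

The counters correspond to the new rcv_ls_req_in/out/drop statistics added
to the lport in the diff below.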

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_nvme.c | 130 ++++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/lpfc/lpfc_nvme.h |   9 +++
 2 files changed, 139 insertions(+)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index c6082c65d902..9f5e8964f83c 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -400,6 +400,10 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
  * request. Any remaining validation is done and the LS is then forwarded
  * to the nvme-fc transport via nvme_fc_rcv_ls_req().
  *
+ * The calling sequence should be: nvme_fc_rcv_ls_req() -> (processing)
+ * -> lpfc_nvme_xmt_ls_rsp/cmp -> req->done.
+ * lpfc_nvme_xmt_ls_rsp_cmp should free the allocated axchg.
+ *
  * Returns 0 if LS was handled and delivered to the transport
  * Returns 1 if LS failed to be handled and should be dropped
  */
@@ -407,6 +411,46 @@ int
 lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
 			struct lpfc_async_xchg_ctx *axchg)
 {
+#if (IS_ENABLED(CONFIG_NVME_FC))
+	struct lpfc_vport *vport;
+	struct lpfc_nvme_rport *lpfc_rport;
+	struct nvme_fc_remote_port *remoteport;
+	struct lpfc_nvme_lport *lport;
+	uint32_t *payload = axchg->payload;
+	int rc;
+
+	vport = axchg->ndlp->vport;
+	lpfc_rport = axchg->ndlp->nrport;
+	if (!lpfc_rport)
+		return -EINVAL;
+
+	remoteport = lpfc_rport->remoteport;
+	if (!vport->localport)
+		return -EINVAL;
+
+	lport = vport->localport->private;
+	if (!lport)
+		return -EINVAL;
+
+	atomic_inc(&lport->rcv_ls_req_in);
+
+	rc = nvme_fc_rcv_ls_req(remoteport, &axchg->ls_rsp, axchg->payload,
+				axchg->size);
+
+	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
+			"6205 NVME Unsol rcv: sz %d rc %d: %08x %08x %08x "
+			"%08x %08x %08x\n",
+			axchg->size, rc,
+			*payload, *(payload+1), *(payload+2),
+			*(payload+3), *(payload+4), *(payload+5));
+
+	if (!rc) {
+		atomic_inc(&lport->rcv_ls_req_out);
+		return 0;
+	}
+
+	atomic_inc(&lport->rcv_ls_req_drop);
+#endif
 	return 1;
 }
 
@@ -859,6 +903,81 @@ __lpfc_nvme_ls_abort(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 }
 
 /**
+ * lpfc_nvme_xmt_ls_rsp_cmp - Completion handler for LS Response
+ * @phba: Pointer to HBA context object.
+ * @cmdwqe: Pointer to driver command WQE object.
+ * @wcqe: Pointer to driver response CQE object.
+ *
+ * The function is called from SLI ring event handler with no
+ * lock held. This function is the completion handler for NVME LS commands
+ * The function updates any states and statistics, then calls the
+ * generic completion handler to free resources.
+ **/
+static void
+lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
+			  struct lpfc_wcqe_complete *wcqe)
+{
+	struct lpfc_vport *vport = cmdwqe->vport;
+	struct lpfc_nvme_lport *lport;
+	uint32_t status, result;
+
+	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
+	result = wcqe->parameter;
+
+	if (!vport->localport)
+		goto finish;
+
+	lport = (struct lpfc_nvme_lport *)vport->localport->private;
+	if (lport) {
+		if (status) {
+			atomic_inc(&lport->xmt_ls_rsp_error);
+			if (result == IOERR_ABORT_REQUESTED)
+				atomic_inc(&lport->xmt_ls_rsp_aborted);
+			if (bf_get(lpfc_wcqe_c_xb, wcqe))
+				atomic_inc(&lport->xmt_ls_rsp_xb_set);
+		} else {
+			atomic_inc(&lport->xmt_ls_rsp_cmpl);
+		}
+	}
+
+finish:
+	__lpfc_nvme_xmt_ls_rsp_cmp(phba, cmdwqe, wcqe);
+}
+
+static int
+lpfc_nvme_xmt_ls_rsp(struct nvme_fc_local_port *localport,
+		     struct nvme_fc_remote_port *remoteport,
+		     struct nvmefc_ls_rsp *ls_rsp)
+{
+	struct lpfc_async_xchg_ctx *axchg =
+		container_of(ls_rsp, struct lpfc_async_xchg_ctx, ls_rsp);
+	struct lpfc_nvme_lport *lport;
+	int rc;
+
+	if (axchg->phba->pport->load_flag & FC_UNLOADING)
+		return -ENODEV;
+
+	lport = (struct lpfc_nvme_lport *)localport->private;
+
+	rc = __lpfc_nvme_xmt_ls_rsp(axchg, ls_rsp, lpfc_nvme_xmt_ls_rsp_cmp);
+
+	if (rc) {
+		atomic_inc(&lport->xmt_ls_drop);
+		/*
+		 * unless the failure is due to having already sent
+		 * the response, an abort will be generated for the
+		 * exchange if the rsp can't be sent.
+		 */
+		if (rc != -EALREADY)
+			atomic_inc(&lport->xmt_ls_abort);
+		return rc;
+	}
+
+	atomic_inc(&lport->xmt_ls_rsp);
+	return 0;
+}
+
+/**
  * lpfc_nvme_ls_abort - Abort a prior NVME LS request
  * @lpfc_nvme_lport: Transport localport that LS is to be issued from.
  * @lpfc_nvme_rport: Transport remoteport that LS is to be sent to.
@@ -2090,6 +2209,7 @@ static struct nvme_fc_port_template lpfc_nvme_template = {
 	.fcp_io       = lpfc_nvme_fcp_io_submit,
 	.ls_abort     = lpfc_nvme_ls_abort,
 	.fcp_abort    = lpfc_nvme_fcp_abort,
+	.xmt_ls_rsp   = lpfc_nvme_xmt_ls_rsp,
 
 	.max_hw_queues = 1,
 	.max_sgl_segments = LPFC_NVME_DEFAULT_SEGS,
@@ -2285,6 +2405,16 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
 		atomic_set(&lport->cmpl_fcp_err, 0);
 		atomic_set(&lport->cmpl_ls_xb, 0);
 		atomic_set(&lport->cmpl_ls_err, 0);
+		atomic_set(&lport->xmt_ls_rsp, 0);
+		atomic_set(&lport->xmt_ls_drop, 0);
+		atomic_set(&lport->xmt_ls_rsp_cmpl, 0);
+		atomic_set(&lport->xmt_ls_rsp_error, 0);
+		atomic_set(&lport->xmt_ls_rsp_aborted, 0);
+		atomic_set(&lport->xmt_ls_rsp_xb_set, 0);
+		atomic_set(&lport->rcv_ls_req_in, 0);
+		atomic_set(&lport->rcv_ls_req_out, 0);
+		atomic_set(&lport->rcv_ls_req_drop, 0);
+
 		atomic_set(&lport->fc4NvmeLsRequests, 0);
 		atomic_set(&lport->fc4NvmeLsCmpls, 0);
 	}
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 2ce29dfeedda..e4e696f12433 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -67,6 +67,15 @@ struct lpfc_nvme_lport {
 	atomic_t cmpl_fcp_err;
 	atomic_t cmpl_ls_xb;
 	atomic_t cmpl_ls_err;
+	atomic_t xmt_ls_rsp;
+	atomic_t xmt_ls_drop;
+	atomic_t xmt_ls_rsp_cmpl;
+	atomic_t xmt_ls_rsp_error;
+	atomic_t xmt_ls_rsp_aborted;
+	atomic_t xmt_ls_rsp_xb_set;
+	atomic_t rcv_ls_req_in;
+	atomic_t rcv_ls_req_out;
+	atomic_t rcv_ls_req_drop;
 };
 
 struct lpfc_nvme_rport {
-- 
2.13.7



* [PATCH 28/29] lpfc: nvmet: Add support for NVME LS request hosthandle
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (26 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 27/29] lpfc: nvme: Add Receive LS Request and Send LS Response support to nvme James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:23   ` Hannes Reinecke
  2020-02-05 18:37 ` [PATCH 29/29] lpfc: nvmet: Add Send LS Request and Abort LS Request support James Smart
                   ` (2 subsequent siblings)
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

As the nvmet layer does not have the concept of a remoteport object, which
can be used to identify the entity on the other end of the fabric that is
to receive an LS, the hosthandle was introduced.  The driver passes the
hosthandle, a value representative of the remote port, along with each
received LS request it delivers to the transport. The LS request will
create the association.  The transport will remember the hosthandle for
the association, and if there is a need to initiate an LS request to the
remote port for the association, the hosthandle will be used. When the
driver loses connectivity with the remote port, it needs to notify the
transport that the hosthandle is no longer valid, allowing the transport
to terminate associations related to the hosthandle.

This patch adds support to the driver for the hosthandle. The driver will
use the ndlp pointer of the remote port for the hosthandle in calls to
nvmet_fc_rcv_ls_req().  The discovery engine is updated to invalidate the
hosthandle whenever connectivity with the remote port is lost.
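
In code terms the handshake reduces to the sketch below (the wrapper
function is purely illustrative; the real call sites are in the diff):

    static void example_hosthandle_flow(struct lpfc_hba *phba,
                                        struct lpfc_async_xchg_ctx *axchg,
                                        struct lpfc_nodelist *ndlp)
    {
            /* LS receive: the ndlp pointer doubles as the hosthandle */
            nvmet_fc_rcv_ls_req(phba->targetport, axchg->ndlp,
                                &axchg->ls_rsp, axchg->payload, axchg->size);

            /* connectivity loss: invalidate the handle so the transport
             * can terminate the associations it maps to
             */
            nvmet_fc_invalidate_host(phba->targetport, ndlp);
    }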

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_crtn.h      |  2 ++
 drivers/scsi/lpfc/lpfc_hbadisc.c   |  6 +++++
 drivers/scsi/lpfc/lpfc_nportdisc.c | 11 ++++++++
 drivers/scsi/lpfc/lpfc_nvme.h      |  3 +++
 drivers/scsi/lpfc/lpfc_nvmet.c     | 53 +++++++++++++++++++++++++++++++++++++-
 5 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index 928e40fcf544..9ff292540072 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -572,6 +572,8 @@ void lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba, uint32_t idx,
 				struct rqb_dmabuf *nvmebuf, uint64_t isr_ts,
 				uint8_t cqflag);
 void lpfc_nvme_mod_param_dep(struct lpfc_hba *phba);
+void lpfc_nvmet_invalidate_host(struct lpfc_hba *phba,
+			struct lpfc_nodelist *ndlp);
 void lpfc_nvme_abort_fcreq_cmpl(struct lpfc_hba *phba,
 				struct lpfc_iocbq *cmdiocb,
 				struct lpfc_wcqe_complete *abts_cmpl);
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index 05d51945defd..6943943340d3 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -822,6 +822,12 @@ lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
 		if ((phba->sli_rev < LPFC_SLI_REV4) &&
 		    (!remove && ndlp->nlp_type & NLP_FABRIC))
 			continue;
+
+		/* Notify transport of connectivity loss to trigger cleanup. */
+		if (phba->nvmet_support &&
+		    ndlp->nlp_state == NLP_STE_UNMAPPED_NODE)
+			lpfc_nvmet_invalidate_host(phba, ndlp);
+
 		lpfc_disc_state_machine(vport, ndlp, NULL,
 					remove
 					? NLP_EVT_DEVICE_RM
diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
index 1324e34f2a46..400201c2e2f6 100644
--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -434,6 +434,11 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 					 (unsigned long long)
 					 wwn_to_u64(sp->portName.u.wwn));
 
+		/* Notify transport of connectivity loss to trigger cleanup. */
+		if (phba->nvmet_support &&
+		    ndlp->nlp_state == NLP_STE_UNMAPPED_NODE)
+			lpfc_nvmet_invalidate_host(phba, ndlp);
+
 		ndlp->nlp_prev_state = ndlp->nlp_state;
 		/* rport needs to be unregistered first */
 		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
@@ -749,6 +754,12 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
 	else
 		lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+
+	/* Notify transport of connectivity loss to trigger cleanup. */
+	if (phba->nvmet_support &&
+	    ndlp->nlp_state == NLP_STE_UNMAPPED_NODE)
+		lpfc_nvmet_invalidate_host(phba, ndlp);
+
 	if (ndlp->nlp_DID == Fabric_DID) {
 		if (vport->port_state <= LPFC_FDISC)
 			goto out;
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index e4e696f12433..b3c439a91482 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -108,9 +108,12 @@ struct lpfc_nvme_fcpreq_priv {
 #define LPFC_NVMET_WAIT_TMO		(5 * MSEC_PER_SEC)
 
 /* Used for NVME Target */
+#define LPFC_NVMET_INV_HOST_ACTIVE      1
+
 struct lpfc_nvmet_tgtport {
 	struct lpfc_hba *phba;
 	struct completion *tport_unreg_cmp;
+	atomic_t state;		/* tracks nvmet hosthandle invalidation */
 
 	/* Stats counters - lpfc_nvmet_unsol_ls_buffer */
 	atomic_t rcv_ls_req_in;
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index edec7c3ffab1..df0378fd4b59 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -1284,6 +1284,24 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 }
 
 static void
+lpfc_nvmet_host_release(void *hosthandle)
+{
+	struct lpfc_nodelist *ndlp = hosthandle;
+	struct lpfc_hba *phba = NULL;
+	struct lpfc_nvmet_tgtport *tgtp;
+
+	phba = ndlp->phba;
+	if (!phba->targetport || !phba->targetport->private)
+		return;
+
+	lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
+			"6202 NVMET XPT releasing hosthandle x%px\n",
+			hosthandle);
+	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+	atomic_set(&tgtp->state, 0);
+}
+
+static void
 lpfc_nvmet_discovery_event(struct nvmet_fc_target_port *tgtport)
 {
 	struct lpfc_nvmet_tgtport *tgtp;
@@ -1307,6 +1325,7 @@ static struct nvmet_fc_target_template lpfc_tgttemplate = {
 	.fcp_req_release = lpfc_nvmet_xmt_fcp_release,
 	.defer_rcv	= lpfc_nvmet_defer_rcv,
 	.discovery_event = lpfc_nvmet_discovery_event,
+	.host_release   = lpfc_nvmet_host_release,
 
 	.max_hw_queues  = 1,
 	.max_sgl_segments = LPFC_NVMET_DEFAULT_SEGS,
@@ -2045,7 +2064,12 @@ lpfc_nvmet_handle_lsreq(struct lpfc_hba *phba,
 
 	atomic_inc(&tgtp->rcv_ls_req_in);
 
-	rc = nvmet_fc_rcv_ls_req(phba->targetport, NULL, &axchg->ls_rsp,
+	/*
+	 * Driver passes the ndlp as the hosthandle argument allowing
+	 * the transport to generate LS requests for any associations
+	 * that are created.
+	 */
+	rc = nvmet_fc_rcv_ls_req(phba->targetport, axchg->ndlp, &axchg->ls_rsp,
 				 axchg->payload, axchg->size);
 
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
@@ -3478,3 +3502,30 @@ lpfc_nvme_unsol_ls_issue_abort(struct lpfc_hba *phba,
 			"6056 Failed to Issue ABTS. Status x%x\n", rc);
 	return 0;
 }
+
+/**
+ * lpfc_nvmet_invalidate_host
+ *
+ * @phba - pointer to the driver instance bound to an adapter port.
+ * @ndlp - pointer to an lpfc_nodelist type
+ *
+ * This routine upcalls the nvmet transport to invalidate an NVME
+ * host to which this target instance had active connections.
+ */
+void
+lpfc_nvmet_invalidate_host(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
+{
+	struct lpfc_nvmet_tgtport *tgtp;
+
+	lpfc_printf_log(phba, KERN_INFO, LOG_NVME | LOG_NVME_ABTS,
+			"6203 Invalidating hosthandle x%px\n",
+			ndlp);
+
+	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+	atomic_set(&tgtp->state, LPFC_NVMET_INV_HOST_ACTIVE);
+
+#if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
+	/* Need to get the nvmet_fc_target_port pointer here.*/
+	nvmet_fc_invalidate_host(phba->targetport, ndlp);
+#endif
+}
-- 
2.13.7



* [PATCH 29/29] lpfc: nvmet: Add Send LS Request and Abort LS Request support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (27 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 28/29] lpfc: nvmet: Add support for NVME LS request hosthandle James Smart
@ 2020-02-05 18:37 ` James Smart
  2020-03-06  9:24   ` Hannes Reinecke
  2020-03-06  9:26 ` [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support Hannes Reinecke
  2020-03-31 14:29 ` Christoph Hellwig
  30 siblings, 1 reply; 80+ messages in thread
From: James Smart @ 2020-02-05 18:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: James Smart, Paul Ely, martin.petersen

Now that common helpers exist, add the ability to Send an NVME LS Request
and to Abort an outstanding LS Request to the nvmet side of the driver.
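
A minimal sketch of the new target-side LS request entry (illustrative
only; the full version with statistics and unload checks is in the diff):

    static int example_nvmet_ls_req(struct nvmet_fc_target_port *targetport,
                                    void *hosthandle,
                                    struct nvmefc_ls_req *lsreq)
    {
            struct lpfc_nvmet_tgtport *tgtp = targetport->private;
            /* hosthandle is the ndlp given to nvmet_fc_rcv_ls_req() */
            struct lpfc_nodelist *ndlp = hosthandle;

            return __lpfc_nvme_ls_req(tgtp->phba->pport, ndlp, lsreq,
                                      lpfc_nvmet_ls_req_cmp);
    }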

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/lpfc/lpfc_nvme.h  |   8 +++
 drivers/scsi/lpfc/lpfc_nvmet.c | 128 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 136 insertions(+)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index b3c439a91482..60f9e87b3b1c 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -166,6 +166,14 @@ struct lpfc_nvmet_tgtport {
 	atomic_t defer_ctx;
 	atomic_t defer_fod;
 	atomic_t defer_wqfull;
+
+	/* Stats counters - ls_reqs, ls_aborts, host_invalidate */
+	atomic_t xmt_ls_reqs;
+	atomic_t xmt_ls_cmpls;
+	atomic_t xmt_ls_err;
+	atomic_t cmpl_ls_err;
+	atomic_t cmpl_ls_xb;
+	atomic_t cmpl_ls_reqs;
 };
 
 struct lpfc_nvmet_ctx_info {
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index df0378fd4b59..1182412573c3 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -1283,6 +1283,122 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 }
 
+/**
+ * lpfc_nvmet_ls_req_cmp - completion handler for an nvme ls request
+ * @phba: Pointer to HBA context object
+ * @cmdwqe: Pointer to driver command WQE object.
+ * @wcqe: Pointer to driver response CQE object.
+ *
+ * This function is the completion handler for NVME LS requests.
+ * The function updates any states and statistics, then calls the
+ * generic completion handler to finish completion of the request.
+ **/
+static void
+lpfc_nvmet_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
+		       struct lpfc_wcqe_complete *wcqe)
+{
+	struct lpfc_vport *vport = cmdwqe->vport;
+	uint32_t status;
+	struct lpfc_nvmet_tgtport *tgtp;
+
+	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
+
+	if (!phba->targetport)
+		goto finish;
+
+	tgtp = phba->targetport->private;
+	if (tgtp) {
+		atomic_inc(&tgtp->cmpl_ls_reqs);
+		if (status) {
+			if (bf_get(lpfc_wcqe_c_xb, wcqe))
+				atomic_inc(&tgtp->cmpl_ls_xb);
+			atomic_inc(&tgtp->cmpl_ls_err);
+		}
+	}
+
+finish:
+	__lpfc_nvme_ls_req_cmp(phba, vport, cmdwqe, wcqe);
+}
+
+/**
+ * lpfc_nvmet_ls_req - Issue a Link Service request
+ * @targetport - pointer to target instance registered with nvmet transport.
+ * @hosthandle - hosthandle set by the driver in a prior ls_rqst_rcv.
+ *               Driver sets this value to the ndlp pointer.
+ * @pnvme_lsreq - the transport nvme_ls_req structure for the LS
+ *
+ * Driver registers this routine to handle any link service request
+ * from the nvme_fc transport to a remote nvme-aware port.
+ *
+ * Return value :
+ *   0 - Success
+ *   non-zero: various error codes, in form of -Exxx
+ **/
+static int
+lpfc_nvmet_ls_req(struct nvmet_fc_target_port *targetport,
+		  void *hosthandle,
+		  struct nvmefc_ls_req *pnvme_lsreq)
+{
+	struct lpfc_nvmet_tgtport *lpfc_nvmet = targetport->private;
+	struct lpfc_hba *phba;
+	struct lpfc_nodelist *ndlp;
+	int ret;
+	u32 hstate;
+
+	if (!lpfc_nvmet)
+		return -EINVAL;
+
+	phba = lpfc_nvmet->phba;
+	if (phba->pport->load_flag & FC_UNLOADING)
+		return -EINVAL;
+
+	hstate = atomic_read(&lpfc_nvmet->state);
+	if (hstate == LPFC_NVMET_INV_HOST_ACTIVE)
+		return -EACCES;
+
+	ndlp = (struct lpfc_nodelist *)hosthandle;
+
+	atomic_inc(&lpfc_nvmet->xmt_ls_reqs);
+
+	ret = __lpfc_nvme_ls_req(phba->pport, ndlp, pnvme_lsreq,
+				 lpfc_nvmet_ls_req_cmp);
+	if (ret)
+		atomic_inc(&lpfc_nvmet->xmt_ls_err);
+
+	return ret;
+}
+
+/**
+ * lpfc_nvmet_ls_abort - Abort a prior NVME LS request
+ * @targetport: Transport targetport that the LS was issued from.
+ * @hosthandle - hosthandle set by the driver in a prior ls_rqst_rcv.
+ *               Driver sets this value to the ndlp pointer.
+ * @pnvme_lsreq - the transport nvme_ls_req structure for LS to be aborted
+ *
+ * Driver registers this routine to abort an NVME LS request that is
+ * in progress (from the transport's perspective).
+ **/
+static void
+lpfc_nvmet_ls_abort(struct nvmet_fc_target_port *targetport,
+		    void *hosthandle,
+		    struct nvmefc_ls_req *pnvme_lsreq)
+{
+	struct lpfc_nvmet_tgtport *lpfc_nvmet = targetport->private;
+	struct lpfc_hba *phba;
+	struct lpfc_nodelist *ndlp;
+	int ret;
+
+	phba = lpfc_nvmet->phba;
+	if (phba->pport->load_flag & FC_UNLOADING)
+		return;
+
+	ndlp = (struct lpfc_nodelist *)hosthandle;
+
+	ret = __lpfc_nvme_ls_abort(phba->pport, ndlp, pnvme_lsreq);
+	if (!ret)
+		atomic_inc(&lpfc_nvmet->xmt_ls_abort);
+}
+
 static void
 lpfc_nvmet_host_release(void *hosthandle)
 {
@@ -1325,6 +1441,8 @@ static struct nvmet_fc_target_template lpfc_tgttemplate = {
 	.fcp_req_release = lpfc_nvmet_xmt_fcp_release,
 	.defer_rcv	= lpfc_nvmet_defer_rcv,
 	.discovery_event = lpfc_nvmet_discovery_event,
+	.ls_req         = lpfc_nvmet_ls_req,
+	.ls_abort       = lpfc_nvmet_ls_abort,
 	.host_release   = lpfc_nvmet_host_release,
 
 	.max_hw_queues  = 1,
@@ -1336,6 +1454,7 @@ static struct nvmet_fc_target_template lpfc_tgttemplate = {
 	.target_features = 0,
 	/* sizes of additional private data for data structures */
 	.target_priv_sz = sizeof(struct lpfc_nvmet_tgtport),
+	.lsrqst_priv_sz = 0,
 };
 
 static void
@@ -1638,6 +1757,9 @@ lpfc_nvmet_create_targetport(struct lpfc_hba *phba)
 		atomic_set(&tgtp->xmt_fcp_xri_abort_cqe, 0);
 		atomic_set(&tgtp->xmt_fcp_abort, 0);
 		atomic_set(&tgtp->xmt_fcp_abort_cmpl, 0);
+		atomic_set(&tgtp->xmt_ls_reqs, 0);
+		atomic_set(&tgtp->xmt_ls_cmpls, 0);
+		atomic_set(&tgtp->xmt_ls_err, 0);
 		atomic_set(&tgtp->xmt_abort_unsol, 0);
 		atomic_set(&tgtp->xmt_abort_sol, 0);
 		atomic_set(&tgtp->xmt_abort_rsp, 0);
@@ -1645,6 +1767,12 @@ lpfc_nvmet_create_targetport(struct lpfc_hba *phba)
 		atomic_set(&tgtp->defer_ctx, 0);
 		atomic_set(&tgtp->defer_fod, 0);
 		atomic_set(&tgtp->defer_wqfull, 0);
+		atomic_set(&tgtp->xmt_ls_reqs, 0);
+		atomic_set(&tgtp->xmt_ls_cmpls, 0);
+		atomic_set(&tgtp->xmt_ls_err, 0);
+		atomic_set(&tgtp->cmpl_ls_err, 0);
+		atomic_set(&tgtp->cmpl_ls_xb, 0);
+		atomic_set(&tgtp->cmpl_ls_reqs, 0);
 	}
 	return error;
 }
-- 
2.13.7



* Re: [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08
  2020-02-05 18:37 ` [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
@ 2020-02-28 20:36   ` Sagi Grimberg
  2020-03-06  8:16   ` Hannes Reinecke
  2020-03-26 16:10   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 20:36 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 02/29] nvmet-fc: fix typo in comment
  2020-02-05 18:37 ` [PATCH 02/29] nvmet-fc: fix typo in comment James Smart
@ 2020-02-28 20:36   ` Sagi Grimberg
  2020-03-06  8:17   ` Hannes Reinecke
  2020-03-26 16:10   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 20:36 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request
  2020-02-05 18:37 ` [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request James Smart
@ 2020-02-28 20:38   ` Sagi Grimberg
  2020-03-06  8:19   ` Hannes Reinecke
  2020-03-26 16:16   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 20:38 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Can we have one of the FC guys have a look into this?

I'm not familiar with the details, and it'd be useful
to have a second set of eyes on this.

This is general for the whole series.


* Re: [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header
  2020-02-05 18:37 ` [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header James Smart
@ 2020-02-28 20:40   ` Sagi Grimberg
  2020-03-06  8:21   ` Hannes Reinecke
  2020-03-26 16:26   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 20:40 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 05/29] lpfc: adapt code to changed names in api header
  2020-02-05 18:37 ` [PATCH 05/29] lpfc: " James Smart
@ 2020-02-28 20:40   ` Sagi Grimberg
  2020-03-06  8:25   ` Hannes Reinecke
  2020-03-26 16:30   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 20:40 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 06/29] nvme-fcloop: Fix deallocation of working context
  2020-02-05 18:37 ` [PATCH 06/29] nvme-fcloop: Fix deallocation of working context James Smart
@ 2020-02-28 20:43   ` Sagi Grimberg
  2020-03-06  8:34   ` Hannes Reinecke
  2020-03-26 16:35   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 20:43 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Is this directly related to the series? Or should
this be a dedicated bug fix (that in turn will go to
stable etc)?

On 2/5/20 10:37 AM, James Smart wrote:
> There's been a longstanding bug of LS completions which freed ls
> op's, particularly the disconnect LS, while executing on a work
> context that is in the memory being freed. Not a good thing to do.
> 
> Rework LS handling to make callbacks in the rport context
> rather than the ls_request context.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/target/fcloop.c | 76 ++++++++++++++++++++++++++++++--------------
>   1 file changed, 52 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 130932a5db0c..6533f4196005 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -198,10 +198,13 @@ struct fcloop_lport_priv {
>   };
>   
>   struct fcloop_rport {
> -	struct nvme_fc_remote_port *remoteport;
> -	struct nvmet_fc_target_port *targetport;
> -	struct fcloop_nport *nport;
> -	struct fcloop_lport *lport;
> +	struct nvme_fc_remote_port	*remoteport;
> +	struct nvmet_fc_target_port	*targetport;
> +	struct fcloop_nport		*nport;
> +	struct fcloop_lport		*lport;
> +	spinlock_t			lock;
> +	struct list_head		ls_list;
> +	struct work_struct		ls_work;
>   };
>   
>   struct fcloop_tport {
> @@ -224,11 +227,10 @@ struct fcloop_nport {
>   };
>   
>   struct fcloop_lsreq {
> -	struct fcloop_tport		*tport;
>   	struct nvmefc_ls_req		*lsreq;
> -	struct work_struct		work;
>   	struct nvmefc_ls_rsp		ls_rsp;
>   	int				status;
> +	struct list_head		ls_list; /* fcloop_rport->ls_list */
>   };
>   
>   struct fcloop_rscn {
> @@ -292,21 +294,32 @@ fcloop_delete_queue(struct nvme_fc_local_port *localport,
>   {
>   }
>   
> -
> -/*
> - * Transmit of LS RSP done (e.g. buffers all set). call back up
> - * initiator "done" flows.
> - */
>   static void
> -fcloop_tgt_lsrqst_done_work(struct work_struct *work)
> +fcloop_rport_lsrqst_work(struct work_struct *work)
>   {
> -	struct fcloop_lsreq *tls_req =
> -		container_of(work, struct fcloop_lsreq, work);
> -	struct fcloop_tport *tport = tls_req->tport;
> -	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
> +	struct fcloop_rport *rport =
> +		container_of(work, struct fcloop_rport, ls_work);
> +	struct fcloop_lsreq *tls_req;
>   
> -	if (!tport || tport->remoteport)
> -		lsreq->done(lsreq, tls_req->status);
> +	spin_lock(&rport->lock);
> +	for (;;) {
> +		tls_req = list_first_entry_or_null(&rport->ls_list,
> +				struct fcloop_lsreq, ls_list);
> +		if (!tls_req)
> +			break;
> +
> +		list_del(&tls_req->ls_list);
> +		spin_unlock(&rport->lock);
> +
> +		tls_req->lsreq->done(tls_req->lsreq, tls_req->status);
> +		/*
> +		 * callee may free memory containing tls_req.
> +		 * do not reference lsreq after this.
> +		 */
> +
> +		spin_lock(&rport->lock);
> +	}
> +	spin_unlock(&rport->lock);

Won't it be easier to splice to a local list instead?
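
Something like the following untested sketch of the splice variant; any
LS queued after the splice would be picked up by the next schedule_work():

    static void fcloop_rport_lsrqst_work(struct work_struct *work)
    {
            struct fcloop_rport *rport =
                    container_of(work, struct fcloop_rport, ls_work);
            struct fcloop_lsreq *tls_req, *next;
            LIST_HEAD(local_list);

            spin_lock(&rport->lock);
            list_splice_init(&rport->ls_list, &local_list);
            spin_unlock(&rport->lock);

            list_for_each_entry_safe(tls_req, next, &local_list, ls_list) {
                    list_del(&tls_req->ls_list);
                    /* callee may free the memory containing tls_req */
                    tls_req->lsreq->done(tls_req->lsreq, tls_req->status);
            }
    }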


* Re: [PATCH 08/29] nvmet-fc: Better size LS buffers
  2020-02-05 18:37 ` [PATCH 08/29] nvmet-fc: Better size LS buffers James Smart
@ 2020-02-28 21:04   ` Sagi Grimberg
  2020-03-06  8:36   ` Hannes Reinecke
  1 sibling, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 21:04 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data
  2020-02-05 18:37 ` [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data James Smart
@ 2020-02-28 21:05   ` Sagi Grimberg
  2020-03-06  8:44   ` Hannes Reinecke
  2020-03-26 16:39   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 21:05 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic
  2020-02-05 18:37 ` [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic James Smart
@ 2020-02-28 21:08   ` Sagi Grimberg
  2020-03-06  8:47   ` Hannes Reinecke
  2020-03-26 19:16   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Sagi Grimberg @ 2020-02-28 21:08 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08
  2020-02-05 18:37 ` [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
  2020-02-28 20:36   ` Sagi Grimberg
@ 2020-03-06  8:16   ` Hannes Reinecke
  2020-03-26 16:10   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:16 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> A couple of minor changes occurred between 1.06 and 1.08:
> - Addition of NVME_SR_RSP opcode
> - change of SR_RSP status code 1 to Reserved
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  include/linux/nvme-fc.h | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/nvme-fc.h b/include/linux/nvme-fc.h
> index e8c30b39bb27..840fa9ac733f 100644
> --- a/include/linux/nvme-fc.h
> +++ b/include/linux/nvme-fc.h
> @@ -4,8 +4,8 @@
>   */
>  
>  /*
> - * This file contains definitions relative to FC-NVME-2 r1.06
> - * (T11-2019-00210-v001).
> + * This file contains definitions relative to FC-NVME-2 r1.08
> + * (T11-2019-00210-v004).
>   */
>  
>  #ifndef _NVME_FC_H
> @@ -81,7 +81,8 @@ struct nvme_fc_ersp_iu {
>  };
>  
>  
> -#define FCNVME_NVME_SR_OPCODE	0x01
> +#define FCNVME_NVME_SR_OPCODE		0x01
> +#define FCNVME_NVME_SR_RSP_OPCODE	0x02
>  
>  struct nvme_fc_nvme_sr_iu {
>  	__u8			fc_id;
> @@ -94,7 +95,7 @@ struct nvme_fc_nvme_sr_iu {
>  
>  enum {
>  	FCNVME_SRSTAT_ACC		= 0x0,
> -	FCNVME_SRSTAT_INV_FCID		= 0x1,
> +	/* reserved			  0x1 */
>  	/* reserved			  0x2 */
>  	FCNVME_SRSTAT_LOGICAL_ERR	= 0x3,
>  	FCNVME_SRSTAT_INV_QUALIF	= 0x4,
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 02/29] nvmet-fc: fix typo in comment
  2020-02-05 18:37 ` [PATCH 02/29] nvmet-fc: fix typo in comment James Smart
  2020-02-28 20:36   ` Sagi Grimberg
@ 2020-03-06  8:17   ` Hannes Reinecke
  2020-03-26 16:10   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:17 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Fix typo in comment: about should be abort
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index a0db6371b43e..a8ceb7721640 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -684,7 +684,7 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
>  	disconnect = atomic_xchg(&queue->connected, 0);
>  
>  	spin_lock_irqsave(&queue->qlock, flags);
> -	/* about outstanding io's */
> +	/* abort outstanding io's */
>  	for (i = 0; i < queue->sqsize; fod++, i++) {
>  		if (fod->active) {
>  			spin_lock(&fod->flock);
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request
  2020-02-05 18:37 ` [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request James Smart
  2020-02-28 20:38   ` Sagi Grimberg
@ 2020-03-06  8:19   ` Hannes Reinecke
  2020-03-26 16:16   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:19 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> The current LLDD api has:
>   nvme-fc: contains api for transport to do LS requests (and aborts of
>     them). However, there is no interface for reception of LS's and sending
>     responses for them.
>   nvmet-fc: contains api for transport to do reception of LS's and sending
>     of responses for them. However, there is no interface for doing LS
>     requests.
> 
> Revise the api's so that both nvme-fc and nvmet-fc can send LS's, as well
> as receiving LS's and sending their responses.
> 
> Change name of the rcv_ls_req struct to better reflect generic use as
> a context used to send an ls rsp.
> 
> Change nvmet_fc_rcv_ls_req() calling sequence to provide handle that
> can be used by transport in later LS request sequences for an association.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  include/linux/nvme-fc-driver.h | 368 ++++++++++++++++++++++++++++++-----------
>  1 file changed, 270 insertions(+), 98 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header
  2020-02-05 18:37 ` [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header James Smart
  2020-02-28 20:40   ` Sagi Grimberg
@ 2020-03-06  8:21   ` Hannes Reinecke
  2020-03-26 16:26   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:21 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> deal with following naming changes in the header:
>   nvmefc_tgt_ls_req -> nvmefc_ls_rsp
>   nvmefc_tgt_ls_req.nvmet_fc_private -> nvmefc_ls_rsp.nvme_fc_private
> 
> Change calling sequence to nvmet_fc_rcv_ls_req() for hosthandle.
> 
> Add stubs for new interfaces:
> host/fc.c: nvme_fc_rcv_ls_req()
> target/fc.c: nvmet_fc_invalidate_host()
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/host/fc.c       | 35 ++++++++++++++++++++
>  drivers/nvme/target/fc.c     | 77 ++++++++++++++++++++++++++++++++------------
>  drivers/nvme/target/fcloop.c | 20 ++++++------
>  3 files changed, 102 insertions(+), 30 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 05/29] lpfc: adapt code to changed names in api header
  2020-02-05 18:37 ` [PATCH 05/29] lpfc: " James Smart
  2020-02-28 20:40   ` Sagi Grimberg
@ 2020-03-06  8:25   ` Hannes Reinecke
  2020-03-26 16:30   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:25 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> deal with following naming changes in the header:
>   nvmefc_tgt_ls_req -> nvmefc_ls_rsp
>   nvmefc_tgt_ls_req.nvmet_fc_private -> nvmefc_ls_rsp.nvme_fc_private
> 
> Change calling sequence to nvmet_fc_rcv_ls_req() for hosthandle.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvmet.c | 10 +++++-----
>  drivers/scsi/lpfc/lpfc_nvmet.h |  2 +-
>  2 files changed, 6 insertions(+), 6 deletions(-)
> 
Please merge this patch with the two previous ones; we should strive to
make every patch self-contained in the sense that it allows for a clean
compilation.
Otherwise you'll break bisecting.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 06/29] nvme-fcloop: Fix deallocation of working context
  2020-02-05 18:37 ` [PATCH 06/29] nvme-fcloop: Fix deallocation of working context James Smart
  2020-02-28 20:43   ` Sagi Grimberg
@ 2020-03-06  8:34   ` Hannes Reinecke
  2020-03-26 16:35   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:34 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> There's been a longstanding bug of LS completions which freed ls
> op's, particularly the disconnect LS, while executing on a work
> context that is in the memory being freed. Not a good thing to do.
> 
> Rework LS handling to make callbacks in the rport context
> rather than the ls_request context.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fcloop.c | 76 ++++++++++++++++++++++++++++++--------------
>  1 file changed, 52 insertions(+), 24 deletions(-)
> 
[ .. ]
As a nice side effect, this is the patch which fixes the crash with
fcloop I've been seeing (and complained about) with my new fcloop blktest.

Consider sending this one separately.

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions
  2020-02-05 18:37 ` [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions James Smart
@ 2020-03-06  8:35   ` Hannes Reinecke
  2020-03-26 16:36   ` Himanshu Madhani
  1 sibling, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:35 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Routines in the target will want to be used in the host as well.
> Error definitions should now shared as both sides will process
> requests and responses to requests.
> 
> Moved common declarations to new fc.h header kept in the host
> subdirectory.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/host/fc.c   |  36 +------------
>  drivers/nvme/host/fc.h   | 133 +++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/nvme/target/fc.c | 115 ++++------------------------------------
>  3 files changed, 143 insertions(+), 141 deletions(-)
>  create mode 100644 drivers/nvme/host/fc.h
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index f8f79cd88769..2e5163600f63 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -14,6 +14,7 @@
>  #include "fabrics.h"
>  #include <linux/nvme-fc-driver.h>
>  #include <linux/nvme-fc.h>
> +#include "fc.h"
>  #include <scsi/scsi_transport_fc.h>
>  
>  /* *************************** Data Structures/Defines ****************** */

This doesn't apply cleanly (the first line should read '#include
"trace.h"'), but other than that:

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 08/29] nvmet-fc: Better size LS buffers
  2020-02-05 18:37 ` [PATCH 08/29] nvmet-fc: Better size LS buffers James Smart
  2020-02-28 21:04   ` Sagi Grimberg
@ 2020-03-06  8:36   ` Hannes Reinecke
  1 sibling, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:36 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Current code uses NVME_FC_MAX_LS_BUFFER_SIZE (2KB) when allocating
> buffers for LS requests and responses. This is considerable overkill
> for what is actually defined.
> 
> Rework code to have unions for all possible requests and responses
> and size based on the unions.  Remove NVME_FC_MAX_LS_BUFFER_SIZE.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/host/fc.h   | 15 ++++++++++++++
>  drivers/nvme/target/fc.c | 53 +++++++++++++++++++++---------------------------
>  2 files changed, 38 insertions(+), 30 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data
  2020-02-05 18:37 ` [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data James Smart
  2020-02-28 21:05   ` Sagi Grimberg
@ 2020-03-06  8:44   ` Hannes Reinecke
  2020-03-26 16:39   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:44 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Ensure that when allocations are done, and the lldd options indicate
> no private data is needed, that private pointers will be set to NULL
> (catches driver error that forgot to set private data size).
> 
> Slightly reorg the allocations so that private data follows allocations
> for LS request/response buffers. Ensures better alignments for the buffers
> as well as the private pointer.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/host/fc.c   | 81 ++++++++++++++++++++++++++++++------------------
>  drivers/nvme/target/fc.c |  5 ++-
>  2 files changed, 54 insertions(+), 32 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation
  2020-02-05 18:37 ` [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation James Smart
@ 2020-03-06  8:44   ` Hannes Reinecke
  2020-03-26 16:41   ` Himanshu Madhani
  1 sibling, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:44 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Given that both host and target now generate and receive LS's create
> a single table definition for LS names. Each tranport half will have
> a local version of the table.
> 
> As Create Association LS is issued by both sides, and received by
> both sides, create common routines to format the LS and to validate
> the LS.
> 
> Convert the host side transport to use the new common Create
> Association LS formatting routine.
> 
> Convert the target side transport to use the new common Create
> Association LS validation routine.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/host/fc.c   | 25 ++-------------
>  drivers/nvme/host/fc.h   | 79 ++++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/nvme/target/fc.c | 28 ++---------------
>  3 files changed, 83 insertions(+), 49 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic
  2020-02-05 18:37 ` [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic James Smart
  2020-02-28 21:08   ` Sagi Grimberg
@ 2020-03-06  8:47   ` Hannes Reinecke
  2020-03-26 19:16   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  8:47 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Convert the assoc_active flag to an atomic to remove any small
> race conditions on transitioning to active and back.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/host/fc.c | 23 ++++++++++++++++-------
>  1 file changed, 16 insertions(+), 7 deletions(-)
> 
As it's just a single value, wouldn't 'test_and_set_bit' work as well here?
It could even be merged with 'ioq_live' and 'err_work_active' to save some
space here ...

Might even be beneficial for performance; atomics always imply a full
barrier, and I tend to think that 'test_and_set_bit' is slightly better
in that regard.
I might be mistaken, though :-)
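
For illustration, assuming a 'flags' word were added to struct
nvme_fc_ctrl, the bit-flag variant would look roughly like this (names
here are made up):

    enum { ASSOC_ACTIVE = 0 };	/* bit number in ctrl->flags */

    /* returns false if the association was already marked active */
    static bool example_set_assoc_active(struct nvme_fc_ctrl *ctrl)
    {
            return !test_and_set_bit(ASSOC_ACTIVE, &ctrl->flags);
    }

    static void example_clear_assoc_active(struct nvme_fc_ctrl *ctrl)
    {
            clear_bit(ASSOC_ACTIVE, &ctrl->flags);
    }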

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 12/29] nvme-fc: Add Disconnect Association Rcv support
  2020-02-05 18:37 ` [PATCH 12/29] nvme-fc: Add Disconnect Association Rcv support James Smart
@ 2020-03-06  9:00   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:00 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> The nvme-fc host transport did not support the reception of a
> FC-NVME LS. Reception is necessary to implement full compliance
> with FC-NVME-2.
> 
> Populate the LS receive handler, and specifically the handling
> of a Disconnect Association LS. The response to the LS, if it
> matched a controller, must be sent after the aborts for any
> I/O on any connection have been sent.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/host/fc.c | 363 ++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 359 insertions(+), 4 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 13/29] nvmet-fc: add LS failure messages
  2020-02-05 18:37 ` [PATCH 13/29] nvmet-fc: add LS failure messages James Smart
@ 2020-03-06  9:01   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:01 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Add LS reception failure messages
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fc.c | 22 +++++++++++++++++++---
>  1 file changed, 19 insertions(+), 3 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 14/29] nvmet-fc: perform small cleanups on unneeded checks
  2020-02-05 18:37 ` [PATCH 14/29] nvmet-fc: perform small cleanups on unneeded checks James Smart
@ 2020-03-06  9:01   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:01 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> While code reviewing saw a couple of items that can be cleaned up:
> - In nvmet_fc_delete_target_queue(), the routine unlocks, then checks
>   and relocks.  Reorganize to avoid the unlock/relock.
> - In nvmet_fc_delete_target_queue(), there's a check on the disconnect
>   state that is unnecessary as the routine validates the state before
>   starting any action.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fc.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index a91c443c9098..35b5cc0d2240 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -688,20 +688,18 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
>  		if (fod->active) {
>  			spin_lock(&fod->flock);
>  			fod->abort = true;
> -			writedataactive = fod->writedataactive;
> -			spin_unlock(&fod->flock);
>  			/*
>  			 * only call lldd abort routine if waiting for
>  			 * writedata. other outstanding ops should finish
>  			 * on their own.
>  			 */
> -			if (writedataactive) {
> -				spin_lock(&fod->flock);
> +			if (fod->writedataactive) {
>  				fod->aborted = true;
>  				spin_unlock(&fod->flock);
>  				tgtport->ops->fcp_abort(
>  					&tgtport->fc_target_port, fod->fcpreq);
> -			}
> +			} else
> +				spin_unlock(&fod->flock);
>  		}
>  	}
>  
'writedataactive' is now unused, and should be removed.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 15/29] nvmet-fc: track hostport handle for associations
  2020-02-05 18:37 ` [PATCH 15/29] nvmet-fc: track hostport handle for associations James Smart
@ 2020-03-06  9:02   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:02 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> In preparation for sending LS requests for an association that
> terminates, save and track the hosthandle that is part of the
> LS's that are received to create associations.
> 
> Support consists of:
> - Create a hostport structure that will be 1:1 mapped to a
>   host port handle. The hostport structure is specific to
>   a targetport.
> - Whenever an association is created, create a host port for
>   the hosthandle the Create Association LS was received from.
>   There will be only 1 hostport structure created, with all
>   associations that have the same hosthandle sharing the
>   hostport structure.
> - When the association is terminated, the hostport reference
>   will be removed. After the last association for the host
>   port is removed, the hostport will be deleted.
> - Add support for the new nvmet_fc_invalidate_host() interface.
>   In the past, the LLDD didn't notify loss of connectivity to
>   host ports - the LLD would simply reject new requests and wait
>   for the kato timeout to kill the association. Now, when host
>   port connectivity is lost, the LLDD can notify the transport.
>   The transport will initiate the termination of all associations
>   for that host port. When the last association has been terminated
>   and the hosthandle will no longer be referenced, the new
>   host_release callback will be made to the lldd.
> - For compatibility with prior behavior which didn't report the
>   hosthandle:  the LLDD must set hosthandle to NULL. In these
>   cases, no LS request will be made, and no host_release callbacks
>   will be made either.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fc.c | 177 +++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 170 insertions(+), 7 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 16/29] nvmet-fc: rename ls_list to ls_rcv_list
  2020-02-05 18:37 ` [PATCH 16/29] nvmet-fc: rename ls_list to ls_rcv_list James Smart
@ 2020-03-06  9:03   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:03 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> In preparation to add ls request support, rename the current ls_list,
> which is RCV LS request only, to ls_rcv_list.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fc.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 17/29] nvmet-fc: Add Disconnect Association Xmt support
  2020-02-05 18:37 ` [PATCH 17/29] nvmet-fc: Add Disconnect Association Xmt support James Smart
@ 2020-03-06  9:04   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:04 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> As part of FC-NVME-2 (and an amendment to FC-NVME), the target is to
> send a Disconnect LS after an association is terminated and any
> exchanges for the association have been ABTS'd. The target is also
> not to send the response to any Disconnect Association LS, whether
> received to initiate the association termination or received while
> the association is terminating, until its own Disconnect LS has been
> transmitted.
> 
> Add support for sending the Disconnect Association LS after all I/O's
> complete (by which point all exchanges will certainly have been
> ABTS'd). Utilizes the new LLDD api to send LS requests.
> 
> There is no need to track the Disconnect LS response or to retry
> after timeout. All spec requirements will have been met by waiting
> for I/O completion to initiate the transmission.
> 
> Add support for tracking the reception of a Disconnect Association LS
> and deferring the response transmission until after the Disconnect
> Association LS has been transmitted.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fc.c | 298 +++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 287 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index d52393cd29f7..3e94c4909cf9 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -25,7 +25,7 @@
>  struct nvmet_fc_tgtport;
>  struct nvmet_fc_tgt_assoc;
>  
> -struct nvmet_fc_ls_iod {
> +struct nvmet_fc_ls_iod {		// for an LS RQST RCV
>  	struct nvmefc_ls_rsp		*lsrsp;
>  	struct nvmefc_tgt_fcp_req	*fcpreq;	/* only if RS */
>  
> @@ -45,6 +45,18 @@ struct nvmet_fc_ls_iod {
>  	struct work_struct		work;
>  } __aligned(sizeof(unsigned long long));
>  
> +struct nvmet_fc_ls_req_op {		// for an LS RQST XMT
> +	struct nvmefc_ls_req		ls_req;
> +
> +	struct nvmet_fc_tgtport		*tgtport;
> +	void				*hosthandle;
> +
> +	int				ls_error;
> +	struct list_head		lsreq_list; /* tgtport->ls_req_list */
> +	bool				req_queued;
> +};
> +
> +
>  /* desired maximum for a single sequence - if sg list allows it */
>  #define NVMET_FC_MAX_SEQ_LENGTH		(256 * 1024)
>  
Please, use normal comments, not C++-style // thingies.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 18/29] nvme-fcloop: refactor to enable target to host LS
  2020-02-05 18:37 ` [PATCH 18/29] nvme-fcloop: refactor to enable target to host LS James Smart
@ 2020-03-06  9:06   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:06 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Currently nvme-fcloop only sends LS's from host to target.
> Slightly rework the data structures and routine names to reflect this
> path. This allows a straightforward conversion to be reused for LS's
> from target to host.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fcloop.c | 19 +++++++++++++------
>  1 file changed, 13 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 6533f4196005..5293069e2769 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -226,7 +226,13 @@ struct fcloop_nport {
>  	u32 port_id;
>  };
>  
> +enum {
> +	H2T	= 0,
> +	T2H	= 1,
> +};
> +
>  struct fcloop_lsreq {
> +	int				lsdir;	/* H2T or T2H */
>  	struct nvmefc_ls_req		*lsreq;
>  	struct nvmefc_ls_rsp		ls_rsp;
>  	int				status;
Please move it after 'ls_rsp'; otherwise we'll have a misalignment on 64
bit.
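
To see the hole being pointed at, a small stand-alone program with simplified
stand-in types (ls_rsp_stub is not the real struct nvmefc_ls_rsp) comparing the
two layouts on a 64-bit build:

#include <stdio.h>

/* crude stand-in for struct nvmefc_ls_rsp, just to get realistic alignment */
struct ls_rsp_stub {
	void		*rspbuf;
	unsigned long	rspdma;
	unsigned short	rsplen;
};

struct lsreq_as_posted {		/* lsdir placed first, as in the patch */
	int			lsdir;
	struct nvmefc_ls_req	*lsreq;	/* 8-byte aligned: hole before this */
	struct ls_rsp_stub	ls_rsp;
	int			status;
};

struct lsreq_reordered {		/* lsdir moved after ls_rsp, as suggested */
	struct nvmefc_ls_req	*lsreq;
	struct ls_rsp_stub	ls_rsp;
	int			lsdir;	/* packs with status below */
	int			status;
};

int main(void)
{
	printf("as posted:  %zu bytes\n", sizeof(struct lsreq_as_posted));
	printf("reordered:  %zu bytes\n", sizeof(struct lsreq_reordered));
	return 0;
}

On x86-64 the leading int costs a 4-byte hole before the pointer; moved next to
status, the two ints share one 8-byte slot. pahole on the real fcloop object
should show the same hole.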

> @@ -323,7 +329,7 @@ fcloop_rport_lsrqst_work(struct work_struct *work)
>  }
>  
>  static int
> -fcloop_ls_req(struct nvme_fc_local_port *localport,
> +fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
>  			struct nvme_fc_remote_port *remoteport,
>  			struct nvmefc_ls_req *lsreq)
>  {
> @@ -331,6 +337,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>  	struct fcloop_rport *rport = remoteport->private;
>  	int ret = 0;
>  
> +	tls_req->lsdir = H2T;
>  	tls_req->lsreq = lsreq;
>  	INIT_LIST_HEAD(&tls_req->ls_list);
>  
> @@ -351,7 +358,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>  }
>  
>  static int
> -fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
> +fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
>  			struct nvmefc_ls_rsp *lsrsp)
>  {
>  	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
> @@ -762,7 +769,7 @@ fcloop_fcp_req_release(struct nvmet_fc_target_port *tgtport,
>  }
>  
>  static void
> -fcloop_ls_abort(struct nvme_fc_local_port *localport,
> +fcloop_h2t_ls_abort(struct nvme_fc_local_port *localport,
>  			struct nvme_fc_remote_port *remoteport,
>  				struct nvmefc_ls_req *lsreq)
>  {
> @@ -880,9 +887,9 @@ static struct nvme_fc_port_template fctemplate = {
>  	.remoteport_delete	= fcloop_remoteport_delete,
>  	.create_queue		= fcloop_create_queue,
>  	.delete_queue		= fcloop_delete_queue,
> -	.ls_req			= fcloop_ls_req,
> +	.ls_req			= fcloop_h2t_ls_req,
>  	.fcp_io			= fcloop_fcp_req,
> -	.ls_abort		= fcloop_ls_abort,
> +	.ls_abort		= fcloop_h2t_ls_abort,
>  	.fcp_abort		= fcloop_fcp_abort,
>  	.max_hw_queues		= FCLOOP_HW_QUEUES,
>  	.max_sgl_segments	= FCLOOP_SGL_SEGS,
> @@ -897,7 +904,7 @@ static struct nvme_fc_port_template fctemplate = {
>  
>  static struct nvmet_fc_target_template tgttemplate = {
>  	.targetport_delete	= fcloop_targetport_delete,
> -	.xmt_ls_rsp		= fcloop_xmt_ls_rsp,
> +	.xmt_ls_rsp		= fcloop_h2t_xmt_ls_rsp,
>  	.fcp_op			= fcloop_fcp_op,
>  	.fcp_abort		= fcloop_tgt_fcp_abort,
>  	.fcp_req_release	= fcloop_fcp_req_release,
> 
Otherwise:

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 19/29] nvme-fcloop: add target to host LS request support
  2020-02-05 18:37 ` [PATCH 19/29] nvme-fcloop: add target to host LS request support James Smart
@ 2020-03-06  9:07   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:07 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Add support for performing LS requests from target to host.
> This includes sending the request from the targetport, reception
> by the host, and the host sending the LS response.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/nvme/target/fcloop.c | 131 ++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 118 insertions(+), 13 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 20/29] lpfc: Refactor lpfc nvme headers
  2020-02-05 18:37 ` [PATCH 20/29] lpfc: Refactor lpfc nvme headers James Smart
@ 2020-03-06  9:18   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:18 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> A lot of files in lpfc include nvme headers, building up dependencies that
> force a file to be rebuilt for header changes even when nothing else about
> the file changed. It would be better to localize the nvme headers.
> 
> There is also no need for separate nvme (initiator) and nvmet (tgt)
> header files.
> 
> Refactor the inclusion of nvme headers so that all nvme items are
> included by lpfc_nvme.h
> 
> Merge lpfc_nvmet.h into lpfc_nvme.h so that there is a single header used
> by both the nvme and nvmet sides. This prepares for structure sharing
> between the two roles. Prep to add shared function prototypes for upcoming
> shared routines.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_attr.c      |   3 -
>  drivers/scsi/lpfc/lpfc_ct.c        |   1 -
>  drivers/scsi/lpfc/lpfc_debugfs.c   |   3 -
>  drivers/scsi/lpfc/lpfc_hbadisc.c   |   2 -
>  drivers/scsi/lpfc/lpfc_init.c      |   3 -
>  drivers/scsi/lpfc/lpfc_mem.c       |   4 -
>  drivers/scsi/lpfc/lpfc_nportdisc.c |   2 -
>  drivers/scsi/lpfc/lpfc_nvme.c      |   3 -
>  drivers/scsi/lpfc/lpfc_nvme.h      | 147 ++++++++++++++++++++++++++++++++++
>  drivers/scsi/lpfc/lpfc_nvmet.c     |   5 --
>  drivers/scsi/lpfc/lpfc_nvmet.h     | 158 -------------------------------------
>  drivers/scsi/lpfc/lpfc_sli.c       |   3 -
>  12 files changed, 147 insertions(+), 187 deletions(-)
>  delete mode 100644 drivers/scsi/lpfc/lpfc_nvmet.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 21/29] lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx
  2020-02-05 18:37 ` [PATCH 21/29] lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx James Smart
@ 2020-03-06  9:19   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:19 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> To add FC-NVME-2 support (actually FC-NVME (rev 1) with Amendment 1),
> both the nvme (host) and nvmet (controller/target) sides will need to be
> able to receive LS requests.  Currently, this support is in the nvmet side
> only. To prepare for both sides supporting LS receive, rename
> lpfc_nvmet_rcv_ctx to lpfc_async_xchg_ctx and commonize the definition.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc.h         |   2 +-
>  drivers/scsi/lpfc/lpfc_crtn.h    |   1 -
>  drivers/scsi/lpfc/lpfc_debugfs.c |   2 +-
>  drivers/scsi/lpfc/lpfc_init.c    |   2 +-
>  drivers/scsi/lpfc/lpfc_nvme.h    |   7 +--
>  drivers/scsi/lpfc/lpfc_nvmet.c   | 109 ++++++++++++++++++++-------------------
>  drivers/scsi/lpfc/lpfc_sli.c     |   2 +-
>  7 files changed, 63 insertions(+), 62 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 22/29] lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions
  2020-02-05 18:37 ` [PATCH 22/29] lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions James Smart
@ 2020-03-06  9:19   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:19 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> The last step of commonization is to remove the 'T' suffix from
> state and flag field definitions.  This is minor, but removes the
> mental association that it solely applies to nvmet use.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_init.c  |   2 +-
>  drivers/scsi/lpfc/lpfc_nvme.h  |  37 +++++-----
>  drivers/scsi/lpfc/lpfc_nvmet.c | 158 ++++++++++++++++++++---------------------
>  3 files changed, 100 insertions(+), 97 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 23/29] lpfc: Refactor NVME LS receive handling
  2020-02-05 18:37 ` [PATCH 23/29] lpfc: Refactor NVME LS receive handling James Smart
@ 2020-03-06  9:20   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:20 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> In preparation for supporting both initiator mode and target mode
> receiving NVME LS's, commonize the existing NVME LS request receive
> handling found in the base driver and in the nvmet side.
> 
> Using the original lpfc_nvmet_unsol_ls_event() and
> lpfc_nvme_unsol_ls_buffer() routines as templates, commonize the
> reception of an NVME LS request. The common routine will validate the LS
> request, verify that it was received from a logged-in node, and allocate an
> lpfc_async_xchg_ctx that is used to manage the LS request. The role of
> the port is then inspected to determine which handler is to receive the
> LS - nvme or nvmet. As such, the nvmet handler is tied back in. A handler
> is created in nvme and is stubbed out.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_crtn.h  |   6 +-
>  drivers/scsi/lpfc/lpfc_nvme.c  |  19 +++++
>  drivers/scsi/lpfc/lpfc_nvme.h  |   5 ++
>  drivers/scsi/lpfc/lpfc_nvmet.c | 163 ++++++++++-------------------------------
>  drivers/scsi/lpfc/lpfc_sli.c   | 121 +++++++++++++++++++++++++++++-
>  5 files changed, 184 insertions(+), 130 deletions(-)
> 
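The dispatch that description boils down to is roughly the sketch below;
lpfc_nvme_handle_lsreq() appears later in this series (patch 27), while the
lpfc_nvmet_handle_lsreq() name and the wrapper itself are assumptions here,
and the preceding validation and axchg allocation are omitted:

/* sketch: pick the LS handler based on the port's configured role */
static int lpfc_ls_dispatch_sketch(struct lpfc_hba *phba,
				   struct lpfc_async_xchg_ctx *axchg)
{
	if (phba->nvmet_support)		/* port is an NVME target */
		return lpfc_nvmet_handle_lsreq(phba, axchg);

	return lpfc_nvme_handle_lsreq(phba, axchg);	/* initiator side */
}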
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 24/29] lpfc: Refactor Send LS Request support
  2020-02-05 18:37 ` [PATCH 24/29] lpfc: Refactor Send LS Request support James Smart
@ 2020-03-06  9:20   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:20 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Currently, the ability to send an NVME LS request is limited to the nvme
> (host) side of the driver.  In preparation for both the nvme and nvmet
> sides supporting Send LS Request, rework the existing send ls_req and ls_req
> completion routines such that there is common code that can be used by
> both sides.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvme.c | 289 +++++++++++++++++++++++++-----------------
>  drivers/scsi/lpfc/lpfc_nvme.h |  13 ++
>  2 files changed, 184 insertions(+), 118 deletions(-)
> 
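For orientation, the host-side wrapper after the split might look like the
sketch below; __lpfc_nvme_ls_req()'s argument order is taken from how patch 29
later calls it, while lpfc_nvme_ls_req_cmp and the wrapper name are assumed
purely for illustration:

/*
 * sketch of the host-side wrapper after the refactor: role-specific
 * checks and counters stay in the wrapper, the WQE build and submit
 * live in the shared __lpfc_nvme_ls_req() routine.
 */
static int lpfc_nvme_ls_req_sketch(struct lpfc_vport *vport,
				   struct lpfc_nodelist *ndlp,
				   struct nvmefc_ls_req *pnvme_lsreq)
{
	/* host-only bookkeeping (lport counters etc.) would go here */
	return __lpfc_nvme_ls_req(vport, ndlp, pnvme_lsreq,
				  lpfc_nvme_ls_req_cmp);
}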
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 25/29] lpfc: Refactor Send LS Abort support
  2020-02-05 18:37 ` [PATCH 25/29] lpfc: Refactor Send LS Abort support James Smart
@ 2020-03-06  9:21   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:21 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Send LS Abort support is needed when Send LS Request is supported.
> 
> Currently, the ability to abort an NVME LS request is limited to the nvme
> (host) side of the driver.  In preparation for both the nvme and nvmet sides
> supporting Send LS Abort, rework the existing ls_req abort routines such
> that there is common code that can be used by both sides.
> 
> While refactoring, it was seen that the logic in the abort routine was incorrect.
> It attempted to abort all NVME LS's on the indicated port. As such, the
> routine was reworked to abort only the NVME LS request that was specified.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvme.c | 125 +++++++++++++++++++++++++-----------------
>  drivers/scsi/lpfc/lpfc_nvme.h |   2 +
>  2 files changed, 77 insertions(+), 50 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 26/29] lpfc: Refactor Send LS Response support
  2020-02-05 18:37 ` [PATCH 26/29] lpfc: Refactor Send LS Response support James Smart
@ 2020-03-06  9:21   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:21 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Currently, the ability to send an NVME LS response is limited to the nvmet
> (controller/target) side of the driver.  In preparation for both the nvme
> and nvmet sides supporting Send LS Response, rework the existing send
> ls_rsp and ls_rsp completion routines such that there is common code that
> can be used by both sides.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvme.h  |   7 ++
>  drivers/scsi/lpfc/lpfc_nvmet.c | 255 ++++++++++++++++++++++++++++-------------
>  2 files changed, 184 insertions(+), 78 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 27/29] lpfc: nvme: Add Receive LS Request and Send LS Response support to nvme
  2020-02-05 18:37 ` [PATCH 27/29] lpfc: nvme: Add Receive LS Request and Send LS Response support to nvme James Smart
@ 2020-03-06  9:23   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:23 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Now that common helpers exist, add the ability to receive NVME LS requests
> to the driver. New requests will be delivered to the transport by
> nvme_fc_rcv_ls_req().
> 
> In order to complete the LS, add support for Send LS Response and send
> LS response completion handling to the driver.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvme.c | 130 ++++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/lpfc/lpfc_nvme.h |   9 +++
>  2 files changed, 139 insertions(+)
> 
> diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
> index c6082c65d902..9f5e8964f83c 100644
> --- a/drivers/scsi/lpfc/lpfc_nvme.c
> +++ b/drivers/scsi/lpfc/lpfc_nvme.c
> @@ -400,6 +400,10 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
>   * request. Any remaining validation is done and the LS is then forwarded
>   * to the nvme-fc transport via nvme_fc_rcv_ls_req().
>   *
> + * The calling sequence should be: nvme_fc_rcv_ls_req() -> (processing)
> + * -> lpfc_nvme_xmt_ls_rsp/cmp -> req->done.
> + * lpfc_nvme_xmt_ls_rsp_cmp should free the allocated axchg.
> + *
>   * Returns 0 if LS was handled and delivered to the transport
>   * Returns 1 if LS failed to be handled and should be dropped
>   */
> @@ -407,6 +411,46 @@ int
>  lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
>  			struct lpfc_async_xchg_ctx *axchg)
>  {
> +#if (IS_ENABLED(CONFIG_NVME_FC))
> +	struct lpfc_vport *vport;
> +	struct lpfc_nvme_rport *lpfc_rport;
> +	struct nvme_fc_remote_port *remoteport;
> +	struct lpfc_nvme_lport *lport;
> +	uint32_t *payload = axchg->payload;
> +	int rc;
> +
> +	vport = axchg->ndlp->vport;
> +	lpfc_rport = axchg->ndlp->nrport;
> +	if (!lpfc_rport)
> +		return -EINVAL;
> +
> +	remoteport = lpfc_rport->remoteport;
> +	if (!vport->localport)
> +		return -EINVAL;
> +
> +	lport = vport->localport->private;
> +	if (!lport)
> +		return -EINVAL;
> +
> +	atomic_inc(&lport->rcv_ls_req_in);
> +
> +	rc = nvme_fc_rcv_ls_req(remoteport, &axchg->ls_rsp, axchg->payload,
> +				axchg->size);
> +
> +	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
> +			"6205 NVME Unsol rcv: sz %d rc %d: %08x %08x %08x "
> +			"%08x %08x %08x\n",
> +			axchg->size, rc,
> +			*payload, *(payload+1), *(payload+2),
> +			*(payload+3), *(payload+4), *(payload+5));
> +
> +	if (!rc) {
> +		atomic_inc(&lport->rcv_ls_req_out);
> +		return 0;
> +	}
> +
> +	atomic_inc(&lport->rcv_ls_req_drop);
> +#endif
>  	return 1;
>  }
>  
> @@ -859,6 +903,81 @@ __lpfc_nvme_ls_abort(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
>  }
>  
>  /**
> + * lpfc_nvme_xmt_ls_rsp_cmp - Completion handler for LS Response
> + * @phba: Pointer to HBA context object.
> + * @cmdwqe: Pointer to driver command WQE object.
> + * @wcqe: Pointer to driver response CQE object.
> + *
> + * The function is called from SLI ring event handler with no
> + * lock held. This function is the completion handler for NVME LS commands
> + * The function updates any states and statistics, then calls the
> + * generic completion handler to free resources.
> + **/
> +static void
> +lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
> +			  struct lpfc_wcqe_complete *wcqe)
> +{
> +	struct lpfc_vport *vport = cmdwqe->vport;
> +	struct lpfc_nvme_lport *lport;
> +	uint32_t status, result;
> +
> +	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
> +	result = wcqe->parameter;
> +
> +	if (!vport->localport)
> +		goto finish;
> +
> +	lport = (struct lpfc_nvme_lport *)vport->localport->private;
> +	if (lport) {
> +		if (status) {
> +			atomic_inc(&lport->xmt_ls_rsp_error);
> +			if (result == IOERR_ABORT_REQUESTED)
> +				atomic_inc(&lport->xmt_ls_rsp_aborted);
> +			if (bf_get(lpfc_wcqe_c_xb, wcqe))
> +				atomic_inc(&lport->xmt_ls_rsp_xb_set);
> +		} else {
> +			atomic_inc(&lport->xmt_ls_rsp_cmpl);
> +		}
> +	}
> +
> +finish:
> +	__lpfc_nvme_xmt_ls_rsp_cmp(phba, cmdwqe, wcqe);
> +}
> +
> +static int
> +lpfc_nvme_xmt_ls_rsp(struct nvme_fc_local_port *localport,
> +		     struct nvme_fc_remote_port *remoteport,
> +		     struct nvmefc_ls_rsp *ls_rsp)
> +{
> +	struct lpfc_async_xchg_ctx *axchg =
> +		container_of(ls_rsp, struct lpfc_async_xchg_ctx, ls_rsp);
> +	struct lpfc_nvme_lport *lport;
> +	int rc;
> +
> +	if (axchg->phba->pport->load_flag & FC_UNLOADING)
> +		return -ENODEV;
> +
> +	lport = (struct lpfc_nvme_lport *)localport->private;
> +
> +	rc = __lpfc_nvme_xmt_ls_rsp(axchg, ls_rsp, lpfc_nvme_xmt_ls_rsp_cmp);
> +
> +	if (rc) {
> +		atomic_inc(&lport->xmt_ls_drop);
> +		/*
> +		 * unless the failure is due to having already sent
> +		 * the response, an abort will be generated for the
> +		 * exchange if the rsp can't be sent.
> +		 */
> +		if (rc != -EALREADY)
> +			atomic_inc(&lport->xmt_ls_abort);
> +		return rc;
> +	}
> +
> +	atomic_inc(&lport->xmt_ls_rsp);
> +	return 0;
> +}
> +
> +/**
>   * lpfc_nvme_ls_abort - Abort a prior NVME LS request
>   * @lpfc_nvme_lport: Transport localport that LS is to be issued from.
>   * @lpfc_nvme_rport: Transport remoteport that LS is to be sent to.
> @@ -2090,6 +2209,7 @@ static struct nvme_fc_port_template lpfc_nvme_template = {
>  	.fcp_io       = lpfc_nvme_fcp_io_submit,
>  	.ls_abort     = lpfc_nvme_ls_abort,
>  	.fcp_abort    = lpfc_nvme_fcp_abort,
> +	.xmt_ls_rsp   = lpfc_nvme_xmt_ls_rsp,
>  
>  	.max_hw_queues = 1,
>  	.max_sgl_segments = LPFC_NVME_DEFAULT_SEGS,
> @@ -2285,6 +2405,16 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
>  		atomic_set(&lport->cmpl_fcp_err, 0);
>  		atomic_set(&lport->cmpl_ls_xb, 0);
>  		atomic_set(&lport->cmpl_ls_err, 0);
> +		atomic_set(&lport->xmt_ls_rsp, 0);
> +		atomic_set(&lport->xmt_ls_drop, 0);
> +		atomic_set(&lport->xmt_ls_rsp_cmpl, 0);
> +		atomic_set(&lport->xmt_ls_rsp_error, 0);
> +		atomic_set(&lport->xmt_ls_rsp_aborted, 0);
> +		atomic_set(&lport->xmt_ls_rsp_xb_set, 0);
> +		atomic_set(&lport->rcv_ls_req_in, 0);
> +		atomic_set(&lport->rcv_ls_req_out, 0);
> +		atomic_set(&lport->rcv_ls_req_drop, 0);
> +
>  		atomic_set(&lport->fc4NvmeLsRequests, 0);
>  		atomic_set(&lport->fc4NvmeLsCmpls, 0);
>  	}
> diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
> index 2ce29dfeedda..e4e696f12433 100644
> --- a/drivers/scsi/lpfc/lpfc_nvme.h
> +++ b/drivers/scsi/lpfc/lpfc_nvme.h
> @@ -67,6 +67,15 @@ struct lpfc_nvme_lport {
>  	atomic_t cmpl_fcp_err;
>  	atomic_t cmpl_ls_xb;
>  	atomic_t cmpl_ls_err;
> +	atomic_t xmt_ls_rsp;
> +	atomic_t xmt_ls_drop;
> +	atomic_t xmt_ls_rsp_cmpl;
> +	atomic_t xmt_ls_rsp_error;
> +	atomic_t xmt_ls_rsp_aborted;
> +	atomic_t xmt_ls_rsp_xb_set;
> +	atomic_t rcv_ls_req_in;
> +	atomic_t rcv_ls_req_out;
> +	atomic_t rcv_ls_req_drop;
>  };
>  
>  struct lpfc_nvme_rport {
> 
And here's me, worrying about the impact a single atomic_t might have ...
Any ideas for making this conditional, so that statistics can be enabled
separately?
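
One shape that could take, assuming a made-up Kconfig symbol
(CONFIG_LPFC_NVME_STATS does not exist) so the increments compile away when
statistics are off:

#include <linux/atomic.h>

#ifdef CONFIG_LPFC_NVME_STATS
#define lpfc_nvme_stat_inc(cnt)		atomic_inc(cnt)
#else
#define lpfc_nvme_stat_inc(cnt)		do { } while (0)
#endif

Call sites such as atomic_inc(&lport->rcv_ls_req_in) would then go through the
wrapper, and the atomic_t fields themselves would need to sit behind the same
option to actually shrink the structures.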

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 28/29] lpfc: nvmet: Add support for NVME LS request hosthandle
  2020-02-05 18:37 ` [PATCH 28/29] lpfc: nvmet: Add support for NVME LS request hosthandle James Smart
@ 2020-03-06  9:23   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:23 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> As the nvmet layer does not have the concept of a remoteport object, which
> can be used to identify the entity on the other end of the fabric that is
> to receive an LS, the hosthandle was introduced.  The driver passes the
> hosthandle, a value representative of the remote port, when it receives
> an LS request. The LS request will create the association.  The transport
> will remember the hosthandle for the association, and if there is a need
> to initiate an LS request to the remote port for the association, the
> hosthandle will be used. When the driver loses connectivity with the
> remote port, it needs to notify the transport that the hosthandle is no
> longer valid, allowing the transport to terminate associations related to
> the hosthandle.
> 
> This patch adds support to the driver for the hosthandle. The driver will
> use the ndlp pointer of the remote port for the hosthandle in calls to
> nvmet_fc_rcv_ls_req().  The discovery engine is updated to invalidate the
> hosthandle whenever connectivity with the remote port is lost.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_crtn.h      |  2 ++
>  drivers/scsi/lpfc/lpfc_hbadisc.c   |  6 +++++
>  drivers/scsi/lpfc/lpfc_nportdisc.c | 11 ++++++++
>  drivers/scsi/lpfc/lpfc_nvme.h      |  3 +++
>  drivers/scsi/lpfc/lpfc_nvmet.c     | 53 +++++++++++++++++++++++++++++++++++++-
>  5 files changed, 74 insertions(+), 1 deletion(-)
> 
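Roughly, the wiring looks like the sketch below, assuming the
nvmet_fc_rcv_ls_req()/nvmet_fc_invalidate_host() prototypes introduced earlier
in this series; the two function names here are illustrative, not the actual
lpfc routines:

/* LS reception: the node's ndlp pointer doubles as the hosthandle */
static int lpfc_nvmet_rcv_ls_sketch(struct lpfc_hba *phba,
				    struct lpfc_async_xchg_ctx *axchg)
{
	return nvmet_fc_rcv_ls_req(phba->targetport, axchg->ndlp,
				   &axchg->ls_rsp, axchg->payload,
				   axchg->size);
}

/* discovery lost the remote port: retire every association on the handle */
static void lpfc_nvmet_node_gone_sketch(struct lpfc_hba *phba,
					struct lpfc_nodelist *ndlp)
{
	if (phba->targetport)
		nvmet_fc_invalidate_host(phba->targetport, ndlp);
}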
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 29/29] lpfc: nvmet: Add Send LS Request and Abort LS Request support
  2020-02-05 18:37 ` [PATCH 29/29] lpfc: nvmet: Add Send LS Request and Abort LS Request support James Smart
@ 2020-03-06  9:24   ` Hannes Reinecke
  0 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:24 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Paul Ely, martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> Now that common helpers exist, add the ability to Send an NVME LS Request
> and to Abort an outstanding LS Request to the nvmet side of the driver.
> 
> Signed-off-by: Paul Ely <paul.ely@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvme.h  |   8 +++
>  drivers/scsi/lpfc/lpfc_nvmet.c | 128 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 136 insertions(+)
> 
> diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
> index b3c439a91482..60f9e87b3b1c 100644
> --- a/drivers/scsi/lpfc/lpfc_nvme.h
> +++ b/drivers/scsi/lpfc/lpfc_nvme.h
> @@ -166,6 +166,14 @@ struct lpfc_nvmet_tgtport {
>  	atomic_t defer_ctx;
>  	atomic_t defer_fod;
>  	atomic_t defer_wqfull;
> +
> +	/* Stats counters - ls_reqs, ls_aborts, host_invalidate */
> +	atomic_t xmt_ls_reqs;
> +	atomic_t xmt_ls_cmpls;
> +	atomic_t xmt_ls_err;
> +	atomic_t cmpl_ls_err;
> +	atomic_t cmpl_ls_xb;
> +	atomic_t cmpl_ls_reqs;
>  };
>  
>  struct lpfc_nvmet_ctx_info {
> diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
> index df0378fd4b59..1182412573c3 100644
> --- a/drivers/scsi/lpfc/lpfc_nvmet.c
> +++ b/drivers/scsi/lpfc/lpfc_nvmet.c
> @@ -1283,6 +1283,122 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
>  	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
>  }
>  
> +/**
> + * lpfc_nvmet_ls_req_cmp - completion handler for a nvme ls request
> + * @phba: Pointer to HBA context object
> + * @cmdwqe: Pointer to driver command WQE object.
> + * @wcqe: Pointer to driver response CQE object.
> + *
> + * This function is the completion handler for NVME LS requests.
> + * The function updates any states and statistics, then calls the
> + * generic completion handler to finish completion of the request.
> + **/
> +static void
> +lpfc_nvmet_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
> +		       struct lpfc_wcqe_complete *wcqe)
> +{
> +	struct lpfc_vport *vport = cmdwqe->vport;
> +	uint32_t status;
> +	struct lpfc_nvmet_tgtport *tgtp;
> +
> +	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
> +
> +	if (!phba->targetport)
> +		goto finish;
> +
> +	tgtp = phba->targetport->private;
> +	if (tgtp) {
> +		atomic_inc(&tgtp->cmpl_ls_reqs);
> +		if (status) {
> +			if (bf_get(lpfc_wcqe_c_xb, wcqe))
> +				atomic_inc(&tgtp->cmpl_ls_xb);
> +			atomic_inc(&tgtp->cmpl_ls_err);
> +		}
> +	}
> +
> +finish:
> +	__lpfc_nvme_ls_req_cmp(phba, vport, cmdwqe, wcqe);
> +}
> +
> +/**
> + * lpfc_nvmet_ls_req - Issue an Link Service request
> + * @targetport - pointer to target instance registered with nvmet transport.
> + * @hosthandle - hosthandle set by the driver in a prior ls_rqst_rcv.
> + *               Driver sets this value to the ndlp pointer.
> + * @pnvme_lsreq - the transport nvme_ls_req structure for the LS
> + *
> + * Driver registers this routine to handle any link service request
> + * from the nvme_fc transport to a remote nvme-aware port.
> + *
> + * Return value :
> + *   0 - Success
> + *   non-zero: various error codes, in form of -Exxx
> + **/
> +static int
> +lpfc_nvmet_ls_req(struct nvmet_fc_target_port *targetport,
> +		  void *hosthandle,
> +		  struct nvmefc_ls_req *pnvme_lsreq)
> +{
> +	struct lpfc_nvmet_tgtport *lpfc_nvmet = targetport->private;
> +	struct lpfc_hba *phba;
> +	struct lpfc_nodelist *ndlp;
> +	int ret;
> +	u32 hstate;
> +
> +	if (!lpfc_nvmet)
> +		return -EINVAL;
> +
> +	phba = lpfc_nvmet->phba;
> +	if (phba->pport->load_flag & FC_UNLOADING)
> +		return -EINVAL;
> +
> +	hstate = atomic_read(&lpfc_nvmet->state);
> +	if (hstate == LPFC_NVMET_INV_HOST_ACTIVE)
> +		return -EACCES;
> +
> +	ndlp = (struct lpfc_nodelist *)hosthandle;
> +
> +	atomic_inc(&lpfc_nvmet->xmt_ls_reqs);
> +
> +	ret = __lpfc_nvme_ls_req(phba->pport, ndlp, pnvme_lsreq,
> +				 lpfc_nvmet_ls_req_cmp);
> +	if (ret)
> +		atomic_inc(&lpfc_nvmet->xmt_ls_err);
> +
> +	return ret;
> +}
> +
> +/**
> + * lpfc_nvmet_ls_abort - Abort a prior NVME LS request
> + * @targetport: Transport targetport, that LS was issued from.
> + * @hosthandle - hosthandle set by the driver in a prior ls_rqst_rcv.
> + *               Driver sets this value to the ndlp pointer.
> + * @pnvme_lsreq - the transport nvme_ls_req structure for LS to be aborted
> + *
> + * Driver registers this routine to abort an NVME LS request that is
> + * in progress (from the transports perspective).
> + **/
> +static void
> +lpfc_nvmet_ls_abort(struct nvmet_fc_target_port *targetport,
> +		    void *hosthandle,
> +		    struct nvmefc_ls_req *pnvme_lsreq)
> +{
> +	struct lpfc_nvmet_tgtport *lpfc_nvmet = targetport->private;
> +	struct lpfc_hba *phba;
> +	struct lpfc_nodelist *ndlp;
> +	int ret;
> +
> +	phba = lpfc_nvmet->phba;
> +	if (phba->pport->load_flag & FC_UNLOADING)
> +		return;
> +
> +	ndlp = (struct lpfc_nodelist *)hosthandle;
> +
> +	ret = __lpfc_nvme_ls_abort(phba->pport, ndlp, pnvme_lsreq);
> +	if (!ret)
> +		atomic_inc(&lpfc_nvmet->xmt_ls_abort);
> +}
> +
>  static void
>  lpfc_nvmet_host_release(void *hosthandle)
>  {
> @@ -1325,6 +1441,8 @@ static struct nvmet_fc_target_template lpfc_tgttemplate = {
>  	.fcp_req_release = lpfc_nvmet_xmt_fcp_release,
>  	.defer_rcv	= lpfc_nvmet_defer_rcv,
>  	.discovery_event = lpfc_nvmet_discovery_event,
> +	.ls_req         = lpfc_nvmet_ls_req,
> +	.ls_abort       = lpfc_nvmet_ls_abort,
>  	.host_release   = lpfc_nvmet_host_release,
>  
>  	.max_hw_queues  = 1,
> @@ -1336,6 +1454,7 @@ static struct nvmet_fc_target_template lpfc_tgttemplate = {
>  	.target_features = 0,
>  	/* sizes of additional private data for data structures */
>  	.target_priv_sz = sizeof(struct lpfc_nvmet_tgtport),
> +	.lsrqst_priv_sz = 0,
>  };
>  
>  static void
> @@ -1638,6 +1757,9 @@ lpfc_nvmet_create_targetport(struct lpfc_hba *phba)
>  		atomic_set(&tgtp->xmt_fcp_xri_abort_cqe, 0);
>  		atomic_set(&tgtp->xmt_fcp_abort, 0);
>  		atomic_set(&tgtp->xmt_fcp_abort_cmpl, 0);
> +		atomic_set(&tgtp->xmt_ls_reqs, 0);
> +		atomic_set(&tgtp->xmt_ls_cmpls, 0);
> +		atomic_set(&tgtp->xmt_ls_err, 0);
>  		atomic_set(&tgtp->xmt_abort_unsol, 0);
>  		atomic_set(&tgtp->xmt_abort_sol, 0);
>  		atomic_set(&tgtp->xmt_abort_rsp, 0);
> @@ -1645,6 +1767,12 @@ lpfc_nvmet_create_targetport(struct lpfc_hba *phba)
>  		atomic_set(&tgtp->defer_ctx, 0);
>  		atomic_set(&tgtp->defer_fod, 0);
>  		atomic_set(&tgtp->defer_wqfull, 0);
> +		atomic_set(&tgtp->xmt_ls_reqs, 0);
> +		atomic_set(&tgtp->xmt_ls_cmpls, 0);
> +		atomic_set(&tgtp->xmt_ls_err, 0);
> +		atomic_set(&tgtp->cmpl_ls_err, 0);
> +		atomic_set(&tgtp->cmpl_ls_xb, 0);
> +		atomic_set(&tgtp->cmpl_ls_reqs, 0);
>  	}
>  	return error;
>  }
> 
Same story: any ideas of making this conditional to allow for enabling
statistics separately?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (28 preceding siblings ...)
  2020-02-05 18:37 ` [PATCH 29/29] lpfc: nvmet: Add Send LS Request and Abort LS Request support James Smart
@ 2020-03-06  9:26 ` Hannes Reinecke
  2020-03-31 14:29 ` Christoph Hellwig
  30 siblings, 0 replies; 80+ messages in thread
From: Hannes Reinecke @ 2020-03-06  9:26 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/20 7:37 PM, James Smart wrote:
> At the tail end of FC-NVME (1) standardization, the process for
> terminating an association was changed requiring interlock using FC-NVME
> Disconnect Assocation LS's between both the host port and target port.
> This was immediately relaxed with an ammendment to FC-NVME (1) and with
> wording put into FC-NVME-2. The interlock was removed, but it still
> required both the host port and target port to initiate Disconnect
> Association LS's and respond to LS's.
> 
> The linux nvme-fc and nvmet-fc implementations were interoperable with
> standards but the linux-driver api did not support all the functionality
> needed. It was missing:
> - nvme-fc: didn't support the reception of NVME LS's and the ability to
>   transmit responses to an LS.
> - nvmet-fc: didn't support the ability to send an NVME LS request. It
>   also did not support a method for the transport to specify a remote
>   port for an LS.
> 
> This patch adds the missing functionality. Specifically the patch set:
> - Updates the header with the FC-NVME-2 standard out for final approval.
> - Refactors data structure names that used to be dependent on role (ls
>   requests were specific to nvme; ls responses were specific to nvmet)
>   to generic names that can be used by both nvme-fc and nvmet-fc.
> - Modifies the nvme-fc transport template with interfaces to receive
>   NVME LS's and for the transport to then request LS responses to be
>   sent.
> - Modifies the nvmet-fc transport template with:
>   - The current NVME LS receive interface was modified to supply a
>     handle to indentify the remote port the LS as received from. If
>     the LS creates an association, the handle may be used to initiate
>     NVME LS requests to the remote port. An interface was put in place
>     to invalidate the handle on connectivity losses.
>   - Interfaces for the transport to request an NVME LS request to be
>     performed as well as to abort that LS in cases of error/teardown. 
> - The nvme-fc transport was modified to follow the standard:
>   - Disconnect association logic was revised to send Disconnect LS as
>     soon as all ABTS's were transmit rather than waiting for the ABTS
>     process to fully complete.
>   - Disconnect LS reception is supported, with reception initiating
>     controller reset and reconnect.
>   - Disconnect LS responses will not be transmit until association
>     termination has transmit the Disconnect LS.
> - The nvmet-fc transport was modified to follow the standard:
>   - Disconnect assocation logic was revised to transmit a Disconnect LS
>     request as soon as all ABTS's have been transmit. In the past, no
>     Disconnect LS had been transmit.
>   - Disconnect LS responses will not be sent until the Disconnect LS
>     request has been transmit.
> - nvme-fcloop: was updated with interfaces to allow testing of the
>   transports.
> - Along the way, cleanups and slight corrections were made to the
>   transports.
> - The lpfc driver was modified to support the new transport interfaces
>   for both the nvme and nvmet transports.  As much of the functionality
>   was already present, but specific to one side of the transport,
>   existing code was refactored to create common routines. Addition of
>   the new interfaces was able to slip in rather easily with the common
>   routines.
> 
> This code was cut against the for-5.6 branch.
> 
> I'll work with Martin to minimize any work to merge these lpfc mods 
> with lpfc changes in the scsi tree.
> 
> -- james
> 
> 
> 
> James Smart (29):
>   nvme-fc: Sync header to FC-NVME-2 rev 1.08
>   nvmet-fc: fix typo in comment
>   nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request
>   nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api
>     header
>   lpfc: adapt code to changed names in api header
>   nvme-fcloop: Fix deallocation of working context
>   nvme-fc nvmet-fc: refactor for common LS definitions
>   nvmet-fc: Better size LS buffers
>   nvme-fc: Ensure private pointers are NULL if no data
>   nvmefc: Use common definitions for LS names, formatting, and
>     validation
>   nvme-fc: convert assoc_active flag to atomic
>   nvme-fc: Add Disconnect Association Rcv support
>   nvmet-fc: add LS failure messages
>   nvmet-fc: perform small cleanups on unneeded checks
>   nvmet-fc: track hostport handle for associations
>   nvmet-fc: rename ls_list to ls_rcv_list
>   nvmet-fc: Add Disconnect Association Xmt support
>   nvme-fcloop: refactor to enable target to host LS
>   nvme-fcloop: add target to host LS request support
>   lpfc: Refactor lpfc nvme headers
>   lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx
>   lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions
>   lpfc: Refactor NVME LS receive handling
>   lpfc: Refactor Send LS Request support
>   lpfc: Refactor Send LS Abort support
>   lpfc: Refactor Send LS Response support
>   lpfc: nvme: Add Receive LS Request and Send LS Response support to
>     nvme
>   lpfc: nvmet: Add support for NVME LS request hosthandle
>   lpfc: nvmet: Add Send LS Request and Abort LS Request support
> 
>  drivers/nvme/host/fc.c             | 555 +++++++++++++++++++-----
>  drivers/nvme/host/fc.h             | 227 ++++++++++
>  drivers/nvme/target/fc.c           | 800 +++++++++++++++++++++++++----------
>  drivers/nvme/target/fcloop.c       | 228 ++++++++--
>  drivers/scsi/lpfc/lpfc.h           |   2 +-
>  drivers/scsi/lpfc/lpfc_attr.c      |   3 -
>  drivers/scsi/lpfc/lpfc_crtn.h      |   9 +-
>  drivers/scsi/lpfc/lpfc_ct.c        |   1 -
>  drivers/scsi/lpfc/lpfc_debugfs.c   |   5 +-
>  drivers/scsi/lpfc/lpfc_hbadisc.c   |   8 +-
>  drivers/scsi/lpfc/lpfc_init.c      |   7 +-
>  drivers/scsi/lpfc/lpfc_mem.c       |   4 -
>  drivers/scsi/lpfc/lpfc_nportdisc.c |  13 +-
>  drivers/scsi/lpfc/lpfc_nvme.c      | 550 ++++++++++++++++--------
>  drivers/scsi/lpfc/lpfc_nvme.h      | 198 +++++++++
>  drivers/scsi/lpfc/lpfc_nvmet.c     | 837 +++++++++++++++++++++++--------------
>  drivers/scsi/lpfc/lpfc_nvmet.h     | 158 -------
>  drivers/scsi/lpfc/lpfc_sli.c       | 126 +++++-
>  include/linux/nvme-fc-driver.h     | 368 +++++++++++-----
>  include/linux/nvme-fc.h            |   9 +-
>  20 files changed, 2970 insertions(+), 1138 deletions(-)
>  create mode 100644 drivers/nvme/host/fc.h
>  delete mode 100644 drivers/scsi/lpfc/lpfc_nvmet.h
> 
As mentioned, this patchset resolves one crash I've found when running
my nvme/034 blktest. However, we're still not quite there yet, as the
very same test now results in:

[  686.272596] (NULL device *): {1:0} Association deleted
[  686.318243] (NULL device *): {1:0} Association freed
[ 1309.673673] kmemleak: 2 new suspected memory leaks (see
/sys/kernel/debug/kmemleak)
[ 1488.024222] INFO: task kworker/4:2:656 blocked for more than 491 seconds.
[ 1488.085196]       Tainted: G            E     5.6.0-rc1-default+ #518
[ 1488.142347] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 1488.211323] kworker/4:2     D    0   656      2 0x80004000
[ 1488.211329] Workqueue: events fcloop_tport_lsrqst_work [nvme_fcloop]
[ 1488.211331] Call Trace:
[ 1488.211337]  ? __schedule+0x28b/0x720
[ 1488.369706]  schedule+0x40/0xb0
[ 1488.369708]  schedule_timeout+0x1dd/0x300
[ 1488.369713]  ? free_pcp_prepare+0x59/0x1d0
[ 1488.469700]  wait_for_completion+0xba/0x140
[ 1488.469702]  ? wake_up_q+0xa0/0xa0
[ 1488.469705]  __flush_work+0x177/0x1b0
[ 1488.469708]  ? worker_detach_from_pool+0xa0/0xa0
[ 1488.610280]  fcloop_targetport_delete+0x13/0x20 [nvme_fcloop]
[ 1488.610284]  nvmet_fc_tgtport_put+0x150/0x190 [nvmet_fc]
[ 1488.706952]  nvmet_fc_disconnect_assoc_done+0x9a/0xe0 [nvmet_fc]
[ 1488.706954]  fcloop_tport_lsrqst_work+0x7a/0xa0 [nvme_fcloop]
[ 1488.706957]  process_one_work+0x208/0x400
[ 1488.706959]  worker_thread+0x2d/0x3e0
[ 1488.706961]  ? process_one_work+0x400/0x400
[ 1488.706963]  kthread+0x117/0x130
[ 1488.706966]  ? kthread_create_worker_on_cpu+0x70/0x70
[ 1488.985004]  ret_from_fork+0x35/0x40

So we do miss a completion somewhere ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08
  2020-02-05 18:37 ` [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
  2020-02-28 20:36   ` Sagi Grimberg
  2020-03-06  8:16   ` Hannes Reinecke
@ 2020-03-26 16:10   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:10 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> A couple of minor changes occurred between 1.06 and 1.08:
> - Addition of NVME_SR_RSP opcode
> - change of SR_RSP status code 1 to Reserved
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   include/linux/nvme-fc.h | 9 +++++----
>   1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/nvme-fc.h b/include/linux/nvme-fc.h
> index e8c30b39bb27..840fa9ac733f 100644
> --- a/include/linux/nvme-fc.h
> +++ b/include/linux/nvme-fc.h
> @@ -4,8 +4,8 @@
>    */
>   
>   /*
> - * This file contains definitions relative to FC-NVME-2 r1.06
> - * (T11-2019-00210-v001).
> + * This file contains definitions relative to FC-NVME-2 r1.08
> + * (T11-2019-00210-v004).
>    */
>   
>   #ifndef _NVME_FC_H
> @@ -81,7 +81,8 @@ struct nvme_fc_ersp_iu {
>   };
>   
>   
> -#define FCNVME_NVME_SR_OPCODE	0x01
> +#define FCNVME_NVME_SR_OPCODE		0x01
> +#define FCNVME_NVME_SR_RSP_OPCODE	0x02
>   
>   struct nvme_fc_nvme_sr_iu {
>   	__u8			fc_id;
> @@ -94,7 +95,7 @@ struct nvme_fc_nvme_sr_iu {
>   
>   enum {
>   	FCNVME_SRSTAT_ACC		= 0x0,
> -	FCNVME_SRSTAT_INV_FCID		= 0x1,
> +	/* reserved			  0x1 */
>   	/* reserved			  0x2 */
>   	FCNVME_SRSTAT_LOGICAL_ERR	= 0x3,
>   	FCNVME_SRSTAT_INV_QUALIF	= 0x4,
> 
Looks Good.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>


* Re: [PATCH 02/29] nvmet-fc: fix typo in comment
  2020-02-05 18:37 ` [PATCH 02/29] nvmet-fc: fix typo in comment James Smart
  2020-02-28 20:36   ` Sagi Grimberg
  2020-03-06  8:17   ` Hannes Reinecke
@ 2020-03-26 16:10   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:10 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> Fix typo in comment: about should be abort
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/target/fc.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index a0db6371b43e..a8ceb7721640 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -684,7 +684,7 @@ nvmet_fc_delete_target_queue(struct nvmet_fc_tgt_queue *queue)
>   	disconnect = atomic_xchg(&queue->connected, 0);
>   
>   	spin_lock_irqsave(&queue->qlock, flags);
> -	/* about outstanding io's */
> +	/* abort outstanding io's */
>   	for (i = 0; i < queue->sqsize; fod++, i++) {
>   		if (fod->active) {
>   			spin_lock(&fod->flock);
> 

Looks Good.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

-- 
- Himanshu


* Re: [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request
  2020-02-05 18:37 ` [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request James Smart
  2020-02-28 20:38   ` Sagi Grimberg
  2020-03-06  8:19   ` Hannes Reinecke
@ 2020-03-26 16:16   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:16 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> The current LLDD api has:
>    nvme-fc: contains api for transport to do LS requests (and aborts of
>      them). However, there is no interface for reception of LS's and sending
>      responses for them.
>    nvmet-fc: contains api for transport to do reception of LS's and sending
>      of responses for them. However, there is no interface for doing LS
>      requests.
> 
> Revise the api's so that both nvme-fc and nvmet-fc can send LS's, as well
> as receive LS's and send their responses.
> 
> Change the name of the rcv_ls_req struct to better reflect its generic use
> as a context used to send an LS response.
> 
> Change the nvmet_fc_rcv_ls_req() calling sequence to provide a handle that
> can be used by the transport in later LS request sequences for an
> association.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   include/linux/nvme-fc-driver.h | 368 ++++++++++++++++++++++++++++++-----------
>   1 file changed, 270 insertions(+), 98 deletions(-)
> 
> diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
> index 6d0d70f3219c..8b97c899517d 100644
> --- a/include/linux/nvme-fc-driver.h
> +++ b/include/linux/nvme-fc-driver.h
> @@ -10,47 +10,26 @@
>   
>   
>   /*
> - * **********************  LLDD FC-NVME Host API ********************
> + * **********************  FC-NVME LS API ********************
>    *
> - *  For FC LLDD's that are the NVME Host role.
> + *  Data structures used by both FC-NVME hosts and FC-NVME
> + *  targets to perform FC-NVME LS requests or transmit
> + *  responses.
>    *
> - * ******************************************************************
> + * ***********************************************************
>    */
>   
> -
> -
>   /**
> - * struct nvme_fc_port_info - port-specific ids and FC connection-specific
> - *                            data element used during NVME Host role
> - *                            registrations
> - *
> - * Static fields describing the port being registered:
> - * @node_name: FC WWNN for the port
> - * @port_name: FC WWPN for the port
> - * @port_role: What NVME roles are supported (see FC_PORT_ROLE_xxx)
> - * @dev_loss_tmo: maximum delay for reconnects to an association on
> - *             this device. Used only on a remoteport.
> + * struct nvmefc_ls_req - Request structure passed from the transport
> + *            to the LLDD to perform a NVME-FC LS request and obtain
> + *            a response.
> + *            Used by nvme-fc transport (host) to send LS's such as
> + *              Create Association, Create Connection and Disconnect
> + *              Association.
> + *            Used by the nvmet-fc transport (controller) to send
> + *              LS's such as Disconnect Association.
>    *
> - * Initialization values for dynamic port fields:
> - * @port_id:      FC N_Port_ID currently assigned the port. Upper 8 bits must
> - *                be set to 0.
> - */
> -struct nvme_fc_port_info {
> -	u64			node_name;
> -	u64			port_name;
> -	u32			port_role;
> -	u32			port_id;
> -	u32			dev_loss_tmo;
> -};
> -
> -
> -/**
> - * struct nvmefc_ls_req - Request structure passed from NVME-FC transport
> - *                        to LLDD in order to perform a NVME FC-4 LS
> - *                        request and obtain a response.
> - *
> - * Values set by the NVME-FC layer prior to calling the LLDD ls_req
> - * entrypoint.
> + * Values set by the requestor prior to calling the LLDD ls_req entrypoint:
>    * @rqstaddr: pointer to request buffer
>    * @rqstdma:  PCI DMA address of request buffer
>    * @rqstlen:  Length, in bytes, of request buffer
> @@ -63,8 +42,8 @@ struct nvme_fc_port_info {
>    * @private:  pointer to memory allocated alongside the ls request structure
>    *            that is specifically for the LLDD to use while processing the
>    *            request. The length of the buffer corresponds to the
> - *            lsrqst_priv_sz value specified in the nvme_fc_port_template
> - *            supplied by the LLDD.
> + *            lsrqst_priv_sz value specified in the xxx_template supplied
> + *            by the LLDD.
>    * @done:     The callback routine the LLDD is to invoke upon completion of
>    *            the LS request. req argument is the pointer to the original LS
>    *            request structure. Status argument must be 0 upon success, a
> @@ -86,6 +65,101 @@ struct nvmefc_ls_req {
>   } __aligned(sizeof(u64));	/* alignment for other things alloc'd with */
>   
>   
> +/**
> + * struct nvmefc_ls_rsp - Structure passed from the transport to the LLDD
> + *            to request the transmit the NVME-FC LS response to a
> + *            NVME-FC LS request.   The structure originates in the LLDD
> + *            and is given to the transport via the xxx_rcv_ls_req()
> + *            transport routine. As such, the structure represents the
> + *            FC exchange context for the NVME-FC LS request that was
> + *            received and which the response is to be sent for.
> + *            Used by the LLDD to pass the nvmet-fc transport (controller)
> + *              received LS's such as Create Association, Create Connection
> + *              and Disconnect Association.
> + *            Used by the LLDD to pass the nvme-fc transport (host)
> + *              received LS's such as Disconnect Association or Disconnect
> + *              Connection.
> + *
> + * The structure is allocated by the LLDD whenever a LS Request is received
> + * from the FC link. The address of the structure is passed to the nvmet-fc
> + * or nvme-fc layer via the xxx_rcv_ls_req() transport routines.
> + *
> + * The address of the structure is to be passed back to the LLDD
> + * when the response is to be transmitted. The LLDD will use the address to
> + * map back to the LLDD exchange structure which maintains information such
> + * as the remote N_Port that sent the LS as well as any FC exchange context.
> + * Upon completion of the LS response transmit, the LLDD will pass the
> + * address of the structure back to the transport LS rsp done() routine,
> + * allowing the transport to release dma resources. Upon completion of
> + * the done() routine, no further access to the structure will be made by
> + * the transport and the LLDD can de-allocate the structure.
> + *
> + * Field initialization:
> + *   At the time of the xxx_rcv_ls_req() call, there is no content that
> + *     is valid in the structure.
> + *
> + *   When the structure is used for the LLDD->xmt_ls_rsp() call, the
> + *     transport layer will fully set the fields in order to specify the
> + *     response payload buffer and its length as well as the done routine
> + *     to be called upon completion of the transmit.  The transport layer
> + *     will also set a private pointer for its own use in the done routine.
> + *
> + * Values set by the transport layer prior to calling the LLDD xmt_ls_rsp
> + * entrypoint:
> + * @rspbuf:   pointer to the LS response buffer
> + * @rspdma:   PCI DMA address of the LS response buffer
> + * @rsplen:   Length, in bytes, of the LS response buffer
> + * @done:     The callback routine the LLDD is to invoke upon completion of
> + *            transmitting the LS response. req argument is the pointer to
> + *            the original ls request.
> + * @nvme_fc_private:  pointer to an internal transport-specific structure
> + *            used as part of the transport done() processing. The LLDD is
> + *            not to access this pointer.
> + */
> +struct nvmefc_ls_rsp {
> +	void		*rspbuf;
> +	dma_addr_t	rspdma;
> +	u16		rsplen;
> +
> +	void (*done)(struct nvmefc_ls_rsp *rsp);
> +	void		*nvme_fc_private;	/* LLDD is not to access !! */
> +};
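
[ Informational aside, not a change request: a minimal sketch of how an
  LLDD might consume this structure from its xmt_ls_rsp() entrypoint and
  its transmit-complete path. The lldd_* helpers and the lldd_ls_exchange
  structure are hypothetical placeholders; only the nvmefc_ls_rsp field
  usage follows the contract documented above. ]

    /* hypothetical common helper called from either template's xmt_ls_rsp() */
    static int
    lldd_send_ls_rsp(struct nvmefc_ls_rsp *ls_rsp)
    {
            /* map back to the exchange saved when the LS was received */
            struct lldd_ls_exchange *xchg = lldd_rsp_to_exchange(ls_rsp);

            /* transmit rsplen bytes from rspbuf/rspdma on that exchange */
            return lldd_hw_send_ls_rsp(xchg, ls_rsp->rspdma, ls_rsp->rsplen);
    }

    /* in the LLDD's transmit-complete path: */
    static void
    lldd_ls_rsp_xmt_done(struct lldd_ls_exchange *xchg)
    {
            /* hand the context back; no further access to ls_rsp after this */
            xchg->ls_rsp->done(xchg->ls_rsp);
            lldd_free_exchange(xchg);
    }
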
> +
> +
> +
> +/*
> + * **********************  LLDD FC-NVME Host API ********************
> + *
> + *  For FC LLDD's that are the NVME Host role.
> + *
> + * ******************************************************************
> + */
> +
> +
> +/**
> + * struct nvme_fc_port_info - port-specific ids and FC connection-specific
> + *                            data element used during NVME Host role
> + *                            registrations
> + *
> + * Static fields describing the port being registered:
> + * @node_name: FC WWNN for the port
> + * @port_name: FC WWPN for the port
> + * @port_role: What NVME roles are supported (see FC_PORT_ROLE_xxx)
> + * @dev_loss_tmo: maximum delay for reconnects to an association on
> + *             this device. Used only on a remoteport.
> + *
> + * Initialization values for dynamic port fields:
> + * @port_id:      FC N_Port_ID currently assigned the port. Upper 8 bits must
> + *                be set to 0.
> + */
> +struct nvme_fc_port_info {
> +	u64			node_name;
> +	u64			port_name;
> +	u32			port_role;
> +	u32			port_id;
> +	u32			dev_loss_tmo;
> +};
> +
>   enum nvmefc_fcp_datadir {
>   	NVMEFC_FCP_NODATA,	/* payload_length and sg_cnt will be zero */
>   	NVMEFC_FCP_WRITE,
> @@ -339,6 +413,21 @@ struct nvme_fc_remote_port {
>    *       indicating an FC transport Aborted status.
>    *       Entrypoint is Mandatory.
>    *
> + * @xmt_ls_rsp:  Called to transmit the response to a FC-NVME FC-4 LS service.
> + *       The nvmefc_ls_rsp structure is the same LLDD-supplied exchange
> + *       structure specified in the nvme_fc_rcv_ls_req() call made when
> + *       the LS request was received. The structure will fully describe
> + *       the buffers for the response payload and the dma address of the
> + *       payload. The LLDD is to transmit the response (or return a
> + *       non-zero errno status), and upon completion of the transmit, call
> + *       the "done" routine specified in the nvmefc_ls_rsp structure
> + *       (argument to done is the address of the nvmefc_ls_rsp structure
> + *       itself). Upon the completion of the done routine, the LLDD shall
> + *       consider the LS handling complete and the nvmefc_ls_rsp structure
> + *       may be freed/released.
> + *       Entrypoint is mandatory if the LLDD calls the nvme_fc_rcv_ls_req()
> + *       entrypoint.
> + *
>    * @max_hw_queues:  indicates the maximum number of hw queues the LLDD
>    *       supports for cpu affinitization.
>    *       Value is Mandatory. Must be at least 1.
> @@ -373,7 +462,7 @@ struct nvme_fc_remote_port {
>    * @lsrqst_priv_sz: The LLDD sets this field to the amount of additional
>    *       memory that it would like fc nvme layer to allocate on the LLDD's
>    *       behalf whenever a ls request structure is allocated. The additional
> - *       memory area solely for the of the LLDD and its location is
> + *       memory area is solely for use by the LLDD and its location is
>    *       specified by the ls_request->private pointer.
>    *       Value is Mandatory. Allowed to be zero.
>    *
> @@ -409,6 +498,9 @@ struct nvme_fc_port_template {
>   				struct nvme_fc_remote_port *,
>   				void *hw_queue_handle,
>   				struct nvmefc_fcp_req *);
> +	int	(*xmt_ls_rsp)(struct nvme_fc_local_port *localport,
> +				struct nvme_fc_remote_port *rport,
> +				struct nvmefc_ls_rsp *ls_rsp);
>   
>   	u32	max_hw_queues;
>   	u16	max_sgl_segments;
> @@ -445,6 +537,34 @@ void nvme_fc_rescan_remoteport(struct nvme_fc_remote_port *remoteport);
>   int nvme_fc_set_remoteport_devloss(struct nvme_fc_remote_port *remoteport,
>   			u32 dev_loss_tmo);
>   
> +/*
> + * Routine called to pass a NVME-FC LS request, received by the lldd,
> + * to the nvme-fc transport.
> + *
> + * If the return value is zero: the LS was successfully accepted by the
> + *   transport.
> + * If the return value is non-zero: the transport has not accepted the
> + *   LS. The lldd should ABTS-LS the LS.
> + *
> + * Note: if the LLDD receives an ABTS for the LS prior to the transport
> + * calling the ops->xmt_ls_rsp() routine to transmit a response, the LLDD
> + * shall mark the LS as aborted, and when xmt_ls_rsp() is called: the
> + * response shall not be transmitted and the struct nvmefc_ls_rsp done
> + * routine shall be called.  The LLDD may transmit the ABTS response as
> + * soon as the LS was marked or can delay until the xmt_ls_rsp() call is
> + * made.
> + * Note: if an RCV LS was successfully posted to the transport and the
> + * remoteport is then unregistered before xmt_ls_rsp() was called for
> + * the lsrsp structure, the transport will still call xmt_ls_rsp()
> + * afterward to cleanup the outstanding lsrsp structure. The LLDD should
> + * noop the transmission of the rsp and call the lsrsp->done() routine
> + * to allow the lsrsp structure to be released.
> + */
> +int nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *remoteport,
> +			struct nvmefc_ls_rsp *lsrsp,
> +			void *lsreqbuf, u32 lsreqbuf_len);
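
[ Informational only. To make the accept/reject contract concrete, a
  host-side LLDD's unsolicited LS receive path might look roughly like
  the following; the exchange allocation and ABTS primitives are
  hypothetical placeholders. ]

    static void
    lldd_handle_unsol_nvme_ls(struct lldd_rport *rp, void *buf, u32 len)
    {
            struct lldd_ls_exchange *xchg = lldd_alloc_ls_exchange(rp);

            if (nvme_fc_rcv_ls_req(rp->remoteport, &xchg->ls_rsp, buf, len)) {
                    /* transport did not accept the LS: ABTS-LS it */
                    lldd_abts_ls(xchg);
                    lldd_free_exchange(xchg);
                    return;
            }
            /*
             * Accepted: the transport will later call ops->xmt_ls_rsp()
             * with &xchg->ls_rsp when the response is to be sent.
             */
    }
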
> +
> +
>   
>   /*
>    * ***************  LLDD FC-NVME Target/Subsystem API ***************
> @@ -474,55 +594,6 @@ struct nvmet_fc_port_info {
>   };
>   
>   
> -/**
> - * struct nvmefc_tgt_ls_req - Structure used between LLDD and NVMET-FC
> - *                            layer to represent the exchange context for
> - *                            a FC-NVME Link Service (LS).
> - *
> - * The structure is allocated by the LLDD whenever a LS Request is received
> - * from the FC link. The address of the structure is passed to the nvmet-fc
> - * layer via the nvmet_fc_rcv_ls_req() call. The address of the structure
> - * will be passed back to the LLDD when the response is to be transmit.
> - * The LLDD is to use the address to map back to the LLDD exchange structure
> - * which maintains information such as the targetport the LS was received
> - * on, the remote FC NVME initiator that sent the LS, and any FC exchange
> - * context.  Upon completion of the LS response transmit, the address of the
> - * structure will be passed back to the LS rsp done() routine, allowing the
> - * nvmet-fc layer to release dma resources. Upon completion of the done()
> - * routine, no further access will be made by the nvmet-fc layer and the
> - * LLDD can de-allocate the structure.
> - *
> - * Field initialization:
> - *   At the time of the nvmet_fc_rcv_ls_req() call, there is no content that
> - *     is valid in the structure.
> - *
> - *   When the structure is used for the LLDD->xmt_ls_rsp() call, the nvmet-fc
> - *     layer will fully set the fields in order to specify the response
> - *     payload buffer and its length as well as the done routine to be called
> - *     upon compeletion of the transmit.  The nvmet-fc layer will also set a
> - *     private pointer for its own use in the done routine.
> - *
> - * Values set by the NVMET-FC layer prior to calling the LLDD xmt_ls_rsp
> - * entrypoint.
> - * @rspbuf:   pointer to the LS response buffer
> - * @rspdma:   PCI DMA address of the LS response buffer
> - * @rsplen:   Length, in bytes, of the LS response buffer
> - * @done:     The callback routine the LLDD is to invoke upon completion of
> - *            transmitting the LS response. req argument is the pointer to
> - *            the original ls request.
> - * @nvmet_fc_private:  pointer to an internal NVMET-FC layer structure used
> - *            as part of the NVMET-FC processing. The LLDD is not to access
> - *            this pointer.
> - */
> -struct nvmefc_tgt_ls_req {
> -	void		*rspbuf;
> -	dma_addr_t	rspdma;
> -	u16		rsplen;
> -
> -	void (*done)(struct nvmefc_tgt_ls_req *req);
> -	void *nvmet_fc_private;		/* LLDD is not to access !! */
> -};
> -
>   /* Operations that NVME-FC layer may request the LLDD to perform for FCP */
>   enum {
>   	NVMET_FCOP_READDATA	= 1,	/* xmt data to initiator */
> @@ -697,17 +768,19 @@ struct nvmet_fc_target_port {
>    *       Entrypoint is Mandatory.
>    *
>    * @xmt_ls_rsp:  Called to transmit the response to a FC-NVME FC-4 LS service.
> - *       The nvmefc_tgt_ls_req structure is the same LLDD-supplied exchange
> + *       The nvmefc_ls_rsp structure is the same LLDD-supplied exchange
>    *       structure specified in the nvmet_fc_rcv_ls_req() call made when
> - *       the LS request was received.  The structure will fully describe
> + *       the LS request was received. The structure will fully describe
>    *       the buffers for the response payload and the dma address of the
> - *       payload. The LLDD is to transmit the response (or return a non-zero
> - *       errno status), and upon completion of the transmit, call the
> - *       "done" routine specified in the nvmefc_tgt_ls_req structure
> - *       (argument to done is the ls reqwuest structure itself).
> - *       After calling the done routine, the LLDD shall consider the
> - *       LS handling complete and the nvmefc_tgt_ls_req structure may
> - *       be freed/released.
> + *       payload. The LLDD is to transmit the response (or return a
> + *       non-zero errno status), and upon completion of the transmit, call
> + *       the "done" routine specified in the nvmefc_ls_rsp structure
> + *       (argument to done is the address of the nvmefc_ls_rsp structure
> + *       itself). Upon the completion of the done() routine, the LLDD shall
> + *       consider the LS handling complete and the nvmefc_ls_rsp structure
> + *       may be freed/released.
> + *       The transport will always call the xmt_ls_rsp() routine for any
> + *       LS received.
>    *       Entrypoint is Mandatory.
>    *
>    * @fcp_op:  Called to perform a data transfer or transmit a response.
> @@ -802,6 +875,39 @@ struct nvmet_fc_target_port {
>    *       should cause the initiator to rescan the discovery controller
>    *       on the targetport.
>    *
> + * @ls_req:  Called to issue a FC-NVME FC-4 LS service request.
> + *       The nvme_fc_ls_req structure will fully describe the buffers for
> + *       the request payload and where to place the response payload.
> + *       The targetport that is to issue the LS request is identified by
> + *       the targetport argument.  The remote port that is to receive the
> + *       LS request is identified by the hosthandle argument. The nvmet-fc
> + *       transport is only allowed to issue FC-NVME LS's on behalf of an
> + *       association that was created prior by a Create Association LS.
> + *       The hosthandle will originate from the LLDD in the struct
> + *       nvmefc_ls_rsp structure for the Create Association LS that
> + *       was delivered to the transport. The transport will save the
> + *       hosthandle as an attribute of the association.  If the LLDD
> + *       loses connectivity with the remote port, it must call the
> + *       nvmet_fc_invalidate_host() routine to remove any references to
> + *       the remote port in the transport.
> + *       The LLDD is to allocate an exchange, issue the LS request, obtain
> + *       the LS response, and call the "done" routine specified in the
> + *       request structure (argument to done is the ls request structure
> + *       itself).
> + *       Entrypoint is Optional - but highly recommended.
> + *
> + * @ls_abort: called to request the LLDD to abort the indicated ls request.
> + *       The call may return before the abort has completed. After aborting
> + *       the request, the LLDD must still call the ls request done routine
> + *       indicating an FC transport Aborted status.
> + *       Entrypoint is Mandatory if the ls_req entry point is specified.
> + *
> + * @host_release: called to inform the LLDD that the request to invalidate
> + *       the host port indicated by the hosthandle has been fully completed.
> + *       No associations exist with the host port and there will be no
> + *       further references to hosthandle.
> + *       Entrypoint is Mandatory if the lldd calls nvmet_fc_invalidate_host().
> + *
>    * @max_hw_queues:  indicates the maximum number of hw queues the LLDD
>    *       supports for cpu affinitization.
>    *       Value is Mandatory. Must be at least 1.
> @@ -830,11 +936,19 @@ struct nvmet_fc_target_port {
>    *       area solely for the of the LLDD and its location is specified by
>    *       the targetport->private pointer.
>    *       Value is Mandatory. Allowed to be zero.
> + *
> + * @lsrqst_priv_sz: The LLDD sets this field to the amount of additional
> + *       memory that it would like nvmet-fc layer to allocate on the LLDD's
> + *       behalf whenever a ls request structure is allocated. The additional
> + *       memory area is solely for use by the LLDD and its location is
> + *       specified by the ls_request->private pointer.
> + *       Value is Mandatory. Allowed to be zero.
> + *
>    */
>   struct nvmet_fc_target_template {
>   	void (*targetport_delete)(struct nvmet_fc_target_port *tgtport);
>   	int (*xmt_ls_rsp)(struct nvmet_fc_target_port *tgtport,
> -				struct nvmefc_tgt_ls_req *tls_req);
> +				struct nvmefc_ls_rsp *ls_rsp);
>   	int (*fcp_op)(struct nvmet_fc_target_port *tgtport,
>   				struct nvmefc_tgt_fcp_req *fcpreq);
>   	void (*fcp_abort)(struct nvmet_fc_target_port *tgtport,
> @@ -844,6 +958,11 @@ struct nvmet_fc_target_template {
>   	void (*defer_rcv)(struct nvmet_fc_target_port *tgtport,
>   				struct nvmefc_tgt_fcp_req *fcpreq);
>   	void (*discovery_event)(struct nvmet_fc_target_port *tgtport);
> +	int  (*ls_req)(struct nvmet_fc_target_port *targetport,
> +				void *hosthandle, struct nvmefc_ls_req *lsreq);
> +	void (*ls_abort)(struct nvmet_fc_target_port *targetport,
> +				void *hosthandle, struct nvmefc_ls_req *lsreq);
> +	void (*host_release)(void *hosthandle);
>   
>   	u32	max_hw_queues;
>   	u16	max_sgl_segments;
> @@ -852,7 +971,9 @@ struct nvmet_fc_target_template {
>   
>   	u32	target_features;
>   
> +	/* sizes of additional private data for data structures */
>   	u32	target_priv_sz;
> +	u32	lsrqst_priv_sz;
>   };
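
[ Informational. With the three new entrypoints, a target LLDD's template
  might be filled in along these lines; handler names and the priv
  structures are placeholders, and the other mandatory entrypoints are
  elided for brevity. ]

    static struct nvmet_fc_target_template lldd_nvmet_template = {
            /* ... existing mandatory ops (xmt_ls_rsp, fcp_op, ...) ... */
            .ls_req          = lldd_ls_req,       /* optional, recommended */
            .ls_abort        = lldd_ls_abort,     /* mandatory if ls_req set */
            .host_release    = lldd_host_release, /* mandatory if the LLDD
                                                   * calls nvmet_fc_invalidate_host()
                                                   */
            .target_priv_sz  = sizeof(struct lldd_tgtport_priv),
            .lsrqst_priv_sz  = sizeof(struct lldd_ls_priv),
    };
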
>   
>   
> @@ -863,10 +984,61 @@ int nvmet_fc_register_targetport(struct nvmet_fc_port_info *portinfo,
>   
>   int nvmet_fc_unregister_targetport(struct nvmet_fc_target_port *tgtport);
>   
> +/*
> + * Routine called to pass a NVME-FC LS request, received by the lldd,
> + * to the nvmet-fc transport.
> + *
> + * If the return value is zero: the LS was successfully accepted by the
> + *   transport.
> + * If the return value is non-zero: the transport has not accepted the
> + *   LS. The lldd should ABTS-LS the LS.
> + *
> + * Note: if the LLDD receives an ABTS for the LS prior to the transport
> + * calling the ops->xmt_ls_rsp() routine to transmit a response, the LLDD
> + * shall mark the LS as aborted, and when xmt_ls_rsp() is called: the
> + * response shall not be transmitted and the struct nvmefc_ls_rsp done
> + * routine shall be called.  The LLDD may transmit the ABTS response as
> + * soon as the LS was marked or can delay until the xmt_ls_rsp() call is
> + * made.
> + * Note: if an RCV LS was successfully posted to the transport and the
> + * targetport is then unregistered before xmt_ls_rsp() was called for
> + * the lsrsp structure, the transport will still call xmt_ls_rsp()
> + * afterward to cleanup the outstanding lsrsp structure. The LLDD should
> + * noop the transmission of the rsp and call the lsrsp->done() routine
> + * to allow the lsrsp structure to be released.
> + */
>   int nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *tgtport,
> -			struct nvmefc_tgt_ls_req *lsreq,
> +			void *hosthandle,
> +			struct nvmefc_ls_rsp *rsp,
>   			void *lsreqbuf, u32 lsreqbuf_len);
>   
> +/*
> + * Routine called by the LLDD whenever it has a logout or loss of
> + * connectivity to a NVME-FC host port for which there had been active
> + * NVMe controllers.  The host port is indicated by the
> + * hosthandle. The hosthandle is given to the nvmet-fc transport
> + * when a NVME LS was received, typically to create a new association.
> + * The nvmet-fc transport will cache the hostport value with the
> + * association for use in LS requests for the association.
> + * When the LLDD calls this routine, the nvmet-fc transport will
> + * immediately terminate all associations that were created with
> + * the hosthandle host port.
> + * The LLDD, after calling this routine and having control returned,
> + * must assume the transport may subsequently utilize hosthandle as
> + * part of sending LS's to terminate the association.  The LLDD
> + * should reject the LS's if they are attempted.
> + * Once the last association has terminated for the hosthandle host
> + * port, the nvmet-fc transport will call the ops->host_release()
> + * callback. As of the callback, the nvmet-fc transport will no
> + * longer reference hosthandle.
> + */
> +void nvmet_fc_invalidate_host(struct nvmet_fc_target_port *tgtport,
> +			void *hosthandle);
> +
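
[ Informational. The intended call flow on loss of connectivity, sketched
  with hypothetical LLDD helpers (lldd_host, lldd_put_host): ]

    /* LLDD logout / loss-of-connectivity handler */
    static void
    lldd_host_connectivity_lost(struct lldd_host *host)
    {
            /*
             * Ask nvmet-fc to tear down all associations tied to this
             * hosthandle.  The transport may keep referencing the handle
             * (e.g. for Disconnect LS attempts) until ops->host_release()
             * is called; the LLDD should fail/reject those LS's.
             */
            nvmet_fc_invalidate_host(host->tgtport, host /* hosthandle */);
    }

    static void
    lldd_host_release(void *hosthandle)
    {
            /* last transport reference dropped; safe to reclaim */
            lldd_put_host(hosthandle);
    }
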
> +/*
> + * If nvmet_fc_rcv_fcp_req returns non-zero, the transport has not accepted
> + * the FCP cmd. The lldd should ABTS-LS the cmd.
> + */
>   int nvmet_fc_rcv_fcp_req(struct nvmet_fc_target_port *tgtport,
>   			struct nvmefc_tgt_fcp_req *fcpreq,
>   			void *cmdiubuf, u32 cmdiubuf_len);
> 

Looks Good.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>


_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header
  2020-02-05 18:37 ` [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header James Smart
  2020-02-28 20:40   ` Sagi Grimberg
  2020-03-06  8:21   ` Hannes Reinecke
@ 2020-03-26 16:26   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:26 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> deal with the following naming changes in the header:
>    nvmefc_tgt_ls_req -> nvmefc_ls_rsp
>    nvmefc_tgt_ls_req.nvmet_fc_private -> nvmefc_ls_rsp.nvme_fc_private
> 
> Change calling sequence to nvmet_fc_rcv_ls_req() for hosthandle.
> 
> Add stubs for new interfaces:
> host/fc.c: nvme_fc_rcv_ls_req()
> target/fc.c: nvmet_fc_invalidate_host()
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/host/fc.c       | 35 ++++++++++++++++++++
>   drivers/nvme/target/fc.c     | 77 ++++++++++++++++++++++++++++++++------------
>   drivers/nvme/target/fcloop.c | 20 ++++++------
>   3 files changed, 102 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 5a70ac395d53..f8f79cd88769 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -1465,6 +1465,41 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
>   		kfree(lsop);
>   }
>   
> +/**
> + * nvme_fc_rcv_ls_req - transport entry point called by an LLDD
> + *                       upon the reception of a NVME LS request.
> + *
> + * The nvme-fc layer will copy payload to an internal structure for
> + * processing.  As such, upon completion of the routine, the LLDD may
> + * immediately free/reuse the LS request buffer passed in the call.
> + *
> + * If this routine returns error, the LLDD should abort the exchange.
> + *
> + * @remoteport: pointer to the (registered) remote port that the LS
> + *              was received from. The remoteport is associated with
> + *              a specific localport.
> + * @lsrsp:      pointer to a nvmefc_ls_rsp response structure to be
> + *              used to reference the exchange corresponding to the LS
> + *              when issuing an ls response.
> + * @lsreqbuf:   pointer to the buffer containing the LS Request
> + * @lsreqbuf_len: length, in bytes, of the received LS request
> + */
> +int
> +nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
> +			struct nvmefc_ls_rsp *lsrsp,
> +			void *lsreqbuf, u32 lsreqbuf_len)
> +{
> +	struct nvme_fc_rport *rport = remoteport_to_rport(portptr);
> +	struct nvme_fc_lport *lport = rport->lport;
> +
> +	/* validate there's a routine to transmit a response */
> +	if (!lport->ops->xmt_ls_rsp)
> +		return(-EINVAL);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(nvme_fc_rcv_ls_req);
> +
>   
>   /* *********************** NVME Ctrl Routines **************************** */
>   
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index a8ceb7721640..aac7869a70bb 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -28,7 +28,7 @@ struct nvmet_fc_tgtport;
>   struct nvmet_fc_tgt_assoc;
>   
>   struct nvmet_fc_ls_iod {
> -	struct nvmefc_tgt_ls_req	*lsreq;
> +	struct nvmefc_ls_rsp		*lsrsp;
>   	struct nvmefc_tgt_fcp_req	*fcpreq;	/* only if RS */
>   
>   	struct list_head		ls_list;	/* tgtport->ls_list */
> @@ -1146,6 +1146,42 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
>   	spin_unlock_irqrestore(&tgtport->lock, flags);
>   }
>   
> +/**
> + * nvmet_fc_invalidate_host - transport entry point called by an LLDD
> + *                       to remove references to a hosthandle for LS's.
> + *
> + * The nvmet-fc layer ensures that any references to the hosthandle
> + * on the targetport are forgotten (set to NULL).  The LLDD will
> + * typically call this when a login with a remote host port has been
> + * lost, thus LS's for the remote host port are no longer possible.
> + *
> + * If an LS request is outstanding to the targetport/hosthandle (or
> + * issued concurrently with the call to invalidate the host), the
> + * LLDD is responsible for terminating/aborting the LS and completing
> + * the LS request. It is recommended that these terminations/aborts
> + * occur after the call to invalidate the host handle to avoid additional
> + * retries by the nvmet-fc transport. The nvmet-fc transport may
> + * continue to reference host handle while it cleans up outstanding
> + * NVME associations. The nvmet-fc transport will call the
> + * ops->host_release() callback to notify the LLDD that all references
> + * are complete and the related host handle can be recovered.
> + * Note: if there are no references, the callback may be called before
> + * the invalidate host call returns.
> + *
> + * @target_port: pointer to the (registered) target port that a prior
> + *              LS was received on and which supplied the transport the
> + *              hosthandle.
> + * @hosthandle: the handle (pointer) that represents the host port
> + *              that no longer has connectivity and that LS's should
> + *              no longer be directed to.
> + */
> +void
> +nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
> +			void *hosthandle)
> +{
> +}
> +EXPORT_SYMBOL_GPL(nvmet_fc_invalidate_host);
> +
>   /*
>    * nvmet layer has called to terminate an association
>    */
> @@ -1371,7 +1407,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Create Association LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				FCNVME_RJT_RC_LOGIC,
>   				FCNVME_RJT_EXP_NONE, 0);
> @@ -1384,7 +1420,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
>   
>   	/* format a response */
>   
> -	iod->lsreq->rsplen = sizeof(*acc);
> +	iod->lsrsp->rsplen = sizeof(*acc);
>   
>   	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(
> @@ -1462,7 +1498,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Create Connection LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				(ret == VERR_NO_ASSOC) ?
>   					FCNVME_RJT_RC_INV_ASSOC :
> @@ -1477,7 +1513,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
>   
>   	/* format a response */
>   
> -	iod->lsreq->rsplen = sizeof(*acc);
> +	iod->lsrsp->rsplen = sizeof(*acc);
>   
>   	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_cr_conn_acc)),
> @@ -1542,7 +1578,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Disconnect LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				(ret == VERR_NO_ASSOC) ?
>   					FCNVME_RJT_RC_INV_ASSOC :
> @@ -1555,7 +1591,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
>   
>   	/* format a response */
>   
> -	iod->lsreq->rsplen = sizeof(*acc);
> +	iod->lsrsp->rsplen = sizeof(*acc);
>   
>   	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(
> @@ -1577,9 +1613,9 @@ static void nvmet_fc_fcp_nvme_cmd_done(struct nvmet_req *nvme_req);
>   static const struct nvmet_fabrics_ops nvmet_fc_tgt_fcp_ops;
>   
>   static void
> -nvmet_fc_xmt_ls_rsp_done(struct nvmefc_tgt_ls_req *lsreq)
> +nvmet_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
>   {
> -	struct nvmet_fc_ls_iod *iod = lsreq->nvmet_fc_private;
> +	struct nvmet_fc_ls_iod *iod = lsrsp->nvme_fc_private;
>   	struct nvmet_fc_tgtport *tgtport = iod->tgtport;
>   
>   	fc_dma_sync_single_for_cpu(tgtport->dev, iod->rspdma,
> @@ -1597,9 +1633,9 @@ nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
>   	fc_dma_sync_single_for_device(tgtport->dev, iod->rspdma,
>   				  NVME_FC_MAX_LS_BUFFER_SIZE, DMA_TO_DEVICE);
>   
> -	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsreq);
> +	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsrsp);
>   	if (ret)
> -		nvmet_fc_xmt_ls_rsp_done(iod->lsreq);
> +		nvmet_fc_xmt_ls_rsp_done(iod->lsrsp);
>   }
>   
>   /*
> @@ -1612,12 +1648,12 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
>   	struct fcnvme_ls_rqst_w0 *w0 =
>   			(struct fcnvme_ls_rqst_w0 *)iod->rqstbuf;
>   
> -	iod->lsreq->nvmet_fc_private = iod;
> -	iod->lsreq->rspbuf = iod->rspbuf;
> -	iod->lsreq->rspdma = iod->rspdma;
> -	iod->lsreq->done = nvmet_fc_xmt_ls_rsp_done;
> +	iod->lsrsp->nvme_fc_private = iod;
> +	iod->lsrsp->rspbuf = iod->rspbuf;
> +	iod->lsrsp->rspdma = iod->rspdma;
> +	iod->lsrsp->done = nvmet_fc_xmt_ls_rsp_done;
>   	/* Be preventative. handlers will later set to valid length */
> -	iod->lsreq->rsplen = 0;
> +	iod->lsrsp->rsplen = 0;
>   
>   	iod->assoc = NULL;
>   
> @@ -1640,7 +1676,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
>   		nvmet_fc_ls_disconnect(tgtport, iod);
>   		break;
>   	default:
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, w0->ls_cmd,
>   				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
>   	}
> @@ -1674,14 +1710,15 @@ nvmet_fc_handle_ls_rqst_work(struct work_struct *work)
>    *
>    * @target_port: pointer to the (registered) target port the LS was
>    *              received on.
> - * @lsreq:      pointer to a lsreq request structure to be used to reference
> + * @lsrsp:      pointer to a lsrsp structure to be used to reference
>    *              the exchange corresponding to the LS.
>    * @lsreqbuf:   pointer to the buffer containing the LS Request
>    * @lsreqbuf_len: length, in bytes, of the received LS request
>    */
>   int
>   nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
> -			struct nvmefc_tgt_ls_req *lsreq,
> +			void *hosthandle,
> +			struct nvmefc_ls_rsp *lsrsp,
>   			void *lsreqbuf, u32 lsreqbuf_len)
>   {
>   	struct nvmet_fc_tgtport *tgtport = targetport_to_tgtport(target_port);
> @@ -1699,7 +1736,7 @@ nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
>   		return -ENOENT;
>   	}
>   
> -	iod->lsreq = lsreq;
> +	iod->lsrsp = lsrsp;
>   	iod->fcpreq = NULL;
>   	memcpy(iod->rqstbuf, lsreqbuf, lsreqbuf_len);
>   	iod->rqstdatalen = lsreqbuf_len;
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 1c50af6219f3..130932a5db0c 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -227,7 +227,7 @@ struct fcloop_lsreq {
>   	struct fcloop_tport		*tport;
>   	struct nvmefc_ls_req		*lsreq;
>   	struct work_struct		work;
> -	struct nvmefc_tgt_ls_req	tgt_ls_req;
> +	struct nvmefc_ls_rsp		ls_rsp;
>   	int				status;
>   };
>   
> @@ -265,9 +265,9 @@ struct fcloop_ini_fcpreq {
>   };
>   
>   static inline struct fcloop_lsreq *
> -tgt_ls_req_to_lsreq(struct nvmefc_tgt_ls_req *tgt_lsreq)
> +ls_rsp_to_lsreq(struct nvmefc_ls_rsp *lsrsp)
>   {
> -	return container_of(tgt_lsreq, struct fcloop_lsreq, tgt_ls_req);
> +	return container_of(lsrsp, struct fcloop_lsreq, ls_rsp);
>   }
>   
>   static inline struct fcloop_fcpreq *
> @@ -330,7 +330,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>   
>   	tls_req->status = 0;
>   	tls_req->tport = rport->targetport->private;
> -	ret = nvmet_fc_rcv_ls_req(rport->targetport, &tls_req->tgt_ls_req,
> +	ret = nvmet_fc_rcv_ls_req(rport->targetport, NULL, &tls_req->ls_rsp,
>   				 lsreq->rqstaddr, lsreq->rqstlen);
>   
>   	return ret;
> @@ -338,15 +338,15 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>   
>   static int
>   fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *tport,
> -			struct nvmefc_tgt_ls_req *tgt_lsreq)
> +			struct nvmefc_ls_rsp *lsrsp)
>   {
> -	struct fcloop_lsreq *tls_req = tgt_ls_req_to_lsreq(tgt_lsreq);
> +	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
>   	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
>   
> -	memcpy(lsreq->rspaddr, tgt_lsreq->rspbuf,
> -		((lsreq->rsplen < tgt_lsreq->rsplen) ?
> -				lsreq->rsplen : tgt_lsreq->rsplen));
> -	tgt_lsreq->done(tgt_lsreq);
> +	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
> +		((lsreq->rsplen < lsrsp->rsplen) ?
> +				lsreq->rsplen : lsrsp->rsplen));
> +	lsrsp->done(lsrsp);
>   
>   	schedule_work(&tls_req->work);
>   
> 


Looks Good.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 05/29] lpfc: adapt code to changed names in api header
  2020-02-05 18:37 ` [PATCH 05/29] lpfc: " James Smart
  2020-02-28 20:40   ` Sagi Grimberg
  2020-03-06  8:25   ` Hannes Reinecke
@ 2020-03-26 16:30   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:30 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> deal with the following naming changes in the header:
>    nvmefc_tgt_ls_req -> nvmefc_ls_rsp
>    nvmefc_tgt_ls_req.nvmet_fc_private -> nvmefc_ls_rsp.nvme_fc_private
> 
> Change calling sequence to nvmet_fc_rcv_ls_req() for hosthandle.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/scsi/lpfc/lpfc_nvmet.c | 10 +++++-----
>   drivers/scsi/lpfc/lpfc_nvmet.h |  2 +-
>   2 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
> index 9dc9afe1c255..47b983eddbb2 100644
> --- a/drivers/scsi/lpfc/lpfc_nvmet.c
> +++ b/drivers/scsi/lpfc/lpfc_nvmet.c
> @@ -302,7 +302,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
>   			  struct lpfc_wcqe_complete *wcqe)
>   {
>   	struct lpfc_nvmet_tgtport *tgtp;
> -	struct nvmefc_tgt_ls_req *rsp;
> +	struct nvmefc_ls_rsp *rsp;
>   	struct lpfc_nvmet_rcv_ctx *ctxp;
>   	uint32_t status, result;
>   
> @@ -335,7 +335,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
>   	}
>   
>   out:
> -	rsp = &ctxp->ctx.ls_req;
> +	rsp = &ctxp->ctx.ls_rsp;
>   
>   	lpfc_nvmeio_data(phba, "NVMET LS  CMPL: xri x%x stat x%x result x%x\n",
>   			 ctxp->oxid, status, result);
> @@ -830,10 +830,10 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
>   
>   static int
>   lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
> -		      struct nvmefc_tgt_ls_req *rsp)
> +		      struct nvmefc_ls_rsp *rsp)
>   {
>   	struct lpfc_nvmet_rcv_ctx *ctxp =
> -		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.ls_req);
> +		container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.ls_rsp);
>   	struct lpfc_hba *phba = ctxp->phba;
>   	struct hbq_dmabuf *nvmebuf =
>   		(struct hbq_dmabuf *)ctxp->rqb_buffer;
> @@ -2000,7 +2000,7 @@ lpfc_nvmet_unsol_ls_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
>   	 * lpfc_nvmet_xmt_ls_rsp_cmp should free the allocated ctxp.
>   	 */
>   	atomic_inc(&tgtp->rcv_ls_req_in);
> -	rc = nvmet_fc_rcv_ls_req(phba->targetport, &ctxp->ctx.ls_req,
> +	rc = nvmet_fc_rcv_ls_req(phba->targetport, NULL, &ctxp->ctx.ls_rsp,
>   				 payload, size);
>   
>   	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
> diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
> index b80b1639b9a7..f0196f3ef90d 100644
> --- a/drivers/scsi/lpfc/lpfc_nvmet.h
> +++ b/drivers/scsi/lpfc/lpfc_nvmet.h
> @@ -105,7 +105,7 @@ struct lpfc_nvmet_ctx_info {
>   
>   struct lpfc_nvmet_rcv_ctx {
>   	union {
> -		struct nvmefc_tgt_ls_req ls_req;
> +		struct nvmefc_ls_rsp ls_rsp;
>   		struct nvmefc_tgt_fcp_req fcp_req;
>   	} ctx;
>   	struct list_head list;
> 

FWIW, Looks good.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 06/29] nvme-fcloop: Fix deallocation of working context
  2020-02-05 18:37 ` [PATCH 06/29] nvme-fcloop: Fix deallocation of working context James Smart
  2020-02-28 20:43   ` Sagi Grimberg
  2020-03-06  8:34   ` Hannes Reinecke
@ 2020-03-26 16:35   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:35 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> There's been a longstanding bug of LS completions which freed ls
> op's, particularly the disconnect LS, while executing on a work
> context that is in the memory being freed. Not a good thing to do.
> 
> Rework LS handling to make callbacks in the rport context
> rather than the ls_request context.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/target/fcloop.c | 76 ++++++++++++++++++++++++++++++--------------
>   1 file changed, 52 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 130932a5db0c..6533f4196005 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -198,10 +198,13 @@ struct fcloop_lport_priv {
>   };
>   
>   struct fcloop_rport {
> -	struct nvme_fc_remote_port *remoteport;
> -	struct nvmet_fc_target_port *targetport;
> -	struct fcloop_nport *nport;
> -	struct fcloop_lport *lport;
> +	struct nvme_fc_remote_port	*remoteport;
> +	struct nvmet_fc_target_port	*targetport;
> +	struct fcloop_nport		*nport;
> +	struct fcloop_lport		*lport;
> +	spinlock_t			lock;
> +	struct list_head		ls_list;
> +	struct work_struct		ls_work;
>   };
>   
>   struct fcloop_tport {
> @@ -224,11 +227,10 @@ struct fcloop_nport {
>   };
>   
>   struct fcloop_lsreq {
> -	struct fcloop_tport		*tport;
>   	struct nvmefc_ls_req		*lsreq;
> -	struct work_struct		work;
>   	struct nvmefc_ls_rsp		ls_rsp;
>   	int				status;
> +	struct list_head		ls_list; /* fcloop_rport->ls_list */
>   };
>   
>   struct fcloop_rscn {
> @@ -292,21 +294,32 @@ fcloop_delete_queue(struct nvme_fc_local_port *localport,
>   {
>   }
>   
> -
> -/*
> - * Transmit of LS RSP done (e.g. buffers all set). call back up
> - * initiator "done" flows.
> - */
>   static void
> -fcloop_tgt_lsrqst_done_work(struct work_struct *work)
> +fcloop_rport_lsrqst_work(struct work_struct *work)
>   {
> -	struct fcloop_lsreq *tls_req =
> -		container_of(work, struct fcloop_lsreq, work);
> -	struct fcloop_tport *tport = tls_req->tport;
> -	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
> +	struct fcloop_rport *rport =
> +		container_of(work, struct fcloop_rport, ls_work);
> +	struct fcloop_lsreq *tls_req;
>   
> -	if (!tport || tport->remoteport)
> -		lsreq->done(lsreq, tls_req->status);
> +	spin_lock(&rport->lock);
> +	for (;;) {
> +		tls_req = list_first_entry_or_null(&rport->ls_list,
> +				struct fcloop_lsreq, ls_list);
> +		if (!tls_req)
> +			break;
> +
> +		list_del(&tls_req->ls_list);
> +		spin_unlock(&rport->lock);
> +
> +		tls_req->lsreq->done(tls_req->lsreq, tls_req->status);
> +		/*
> +		 * callee may free memory containing tls_req.
> +		 * do not reference lsreq after this.
> +		 */
> +
> +		spin_lock(&rport->lock);
> +	}
> +	spin_unlock(&rport->lock);
>   }
>   
>   static int
> @@ -319,17 +332,18 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>   	int ret = 0;
>   
>   	tls_req->lsreq = lsreq;
> -	INIT_WORK(&tls_req->work, fcloop_tgt_lsrqst_done_work);
> +	INIT_LIST_HEAD(&tls_req->ls_list);
>   
>   	if (!rport->targetport) {
>   		tls_req->status = -ECONNREFUSED;
> -		tls_req->tport = NULL;
> -		schedule_work(&tls_req->work);
> +		spin_lock(&rport->lock);
> +		list_add_tail(&rport->ls_list, &tls_req->ls_list);
> +		spin_unlock(&rport->lock);
> +		schedule_work(&rport->ls_work);
>   		return ret;
>   	}
>   
>   	tls_req->status = 0;
> -	tls_req->tport = rport->targetport->private;
>   	ret = nvmet_fc_rcv_ls_req(rport->targetport, NULL, &tls_req->ls_rsp,
>   				 lsreq->rqstaddr, lsreq->rqstlen);
>   
> @@ -337,18 +351,28 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>   }
>   
>   static int
> -fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *tport,
> +fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
>   			struct nvmefc_ls_rsp *lsrsp)
>   {
>   	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
>   	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
> +	struct fcloop_tport *tport = targetport->private;
> +	struct nvme_fc_remote_port *remoteport = tport->remoteport;
> +	struct fcloop_rport *rport;
>   
>   	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
>   		((lsreq->rsplen < lsrsp->rsplen) ?
>   				lsreq->rsplen : lsrsp->rsplen));
> +
>   	lsrsp->done(lsrsp);
>   
> -	schedule_work(&tls_req->work);
> +	if (remoteport) {
> +		rport = remoteport->private;
> +		spin_lock(&rport->lock);
> +		list_add_tail(&rport->ls_list, &tls_req->ls_list);
> +		spin_unlock(&rport->lock);
> +		schedule_work(&rport->ls_work);
> +	}
>   
>   	return 0;
>   }
> @@ -834,6 +858,7 @@ fcloop_remoteport_delete(struct nvme_fc_remote_port *remoteport)
>   {
>   	struct fcloop_rport *rport = remoteport->private;
>   
> +	flush_work(&rport->ls_work);
>   	fcloop_nport_put(rport->nport);
>   }
>   
> @@ -1136,6 +1161,9 @@ fcloop_create_remote_port(struct device *dev, struct device_attribute *attr,
>   	rport->nport = nport;
>   	rport->lport = nport->lport;
>   	nport->rport = rport;
> +	spin_lock_init(&rport->lock);
> +	INIT_WORK(&rport->ls_work, fcloop_rport_lsrqst_work);
> +	INIT_LIST_HEAD(&rport->ls_list);
>   
>   	return count;
>   }
> 
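As a side note, no change requested: the essence of the fix is that LS
completion callbacks are now driven from a work item embedded in the
longer-lived rport rather than in the fcloop_lsreq that may be freed by
the callback. Distilled (not the literal fcloop code, just the shape of
the new flow):

    /* queue a completed LS on the rport and let rport->ls_work drain it */
    static void queue_ls_completion(struct fcloop_rport *rport,
                                    struct fcloop_lsreq *tls_req, int status)
    {
            tls_req->status = status;
            spin_lock(&rport->lock);
            list_add_tail(&tls_req->ls_list, &rport->ls_list);
            spin_unlock(&rport->lock);
            schedule_work(&rport->ls_work);   /* work struct owned by rport */
    }
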

Looks Good.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

-- 
- Himanshu

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions
  2020-02-05 18:37 ` [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions James Smart
  2020-03-06  8:35   ` Hannes Reinecke
@ 2020-03-26 16:36   ` Himanshu Madhani
  1 sibling, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:36 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> Routines in the target will want to be used in the host as well.
> Error definitions should now be shared as both sides will process
> requests and responses to requests.
> 
> Moved common declarations to new fc.h header kept in the host
> subdirectory.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/host/fc.c   |  36 +------------
>   drivers/nvme/host/fc.h   | 133 +++++++++++++++++++++++++++++++++++++++++++++++
>   drivers/nvme/target/fc.c | 115 ++++------------------------------------
>   3 files changed, 143 insertions(+), 141 deletions(-)
>   create mode 100644 drivers/nvme/host/fc.h
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index f8f79cd88769..2e5163600f63 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -14,6 +14,7 @@
>   #include "fabrics.h"
>   #include <linux/nvme-fc-driver.h>
>   #include <linux/nvme-fc.h>
> +#include "fc.h"
>   #include <scsi/scsi_transport_fc.h>
>   
>   /* *************************** Data Structures/Defines ****************** */
> @@ -1141,41 +1142,6 @@ nvme_fc_send_ls_req_async(struct nvme_fc_rport *rport,
>   	return __nvme_fc_send_ls_req(rport, lsop, done);
>   }
>   
> -/* Validation Error indexes into the string table below */
> -enum {
> -	VERR_NO_ERROR		= 0,
> -	VERR_LSACC		= 1,
> -	VERR_LSDESC_RQST	= 2,
> -	VERR_LSDESC_RQST_LEN	= 3,
> -	VERR_ASSOC_ID		= 4,
> -	VERR_ASSOC_ID_LEN	= 5,
> -	VERR_CONN_ID		= 6,
> -	VERR_CONN_ID_LEN	= 7,
> -	VERR_CR_ASSOC		= 8,
> -	VERR_CR_ASSOC_ACC_LEN	= 9,
> -	VERR_CR_CONN		= 10,
> -	VERR_CR_CONN_ACC_LEN	= 11,
> -	VERR_DISCONN		= 12,
> -	VERR_DISCONN_ACC_LEN	= 13,
> -};
> -
> -static char *validation_errors[] = {
> -	"OK",
> -	"Not LS_ACC",
> -	"Not LSDESC_RQST",
> -	"Bad LSDESC_RQST Length",
> -	"Not Association ID",
> -	"Bad Association ID Length",
> -	"Not Connection ID",
> -	"Bad Connection ID Length",
> -	"Not CR_ASSOC Rqst",
> -	"Bad CR_ASSOC ACC Length",
> -	"Not CR_CONN Rqst",
> -	"Bad CR_CONN ACC Length",
> -	"Not Disconnect Rqst",
> -	"Bad Disconnect ACC Length",
> -};
> -
>   static int
>   nvme_fc_connect_admin_queue(struct nvme_fc_ctrl *ctrl,
>   	struct nvme_fc_queue *queue, u16 qsize, u16 ersp_ratio)
> diff --git a/drivers/nvme/host/fc.h b/drivers/nvme/host/fc.h
> new file mode 100644
> index 000000000000..d2861cdd58ee
> --- /dev/null
> +++ b/drivers/nvme/host/fc.h
> @@ -0,0 +1,133 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016, Avago Technologies
> + */
> +
> +#ifndef _NVME_FC_TRANSPORT_H
> +#define _NVME_FC_TRANSPORT_H 1
> +
> +
> +/*
> + * Common definitions between the nvme_fc (host) transport and
> + * nvmet_fc (target) transport implementation.
> + */
> +
> +/*
> + * ******************  FC-NVME LS HANDLING ******************
> + */
> +
> +static inline void
> +nvme_fc_format_rsp_hdr(void *buf, u8 ls_cmd, __be32 desc_len, u8 rqst_ls_cmd)
> +{
> +	struct fcnvme_ls_acc_hdr *acc = buf;
> +
> +	acc->w0.ls_cmd = ls_cmd;
> +	acc->desc_list_len = desc_len;
> +	acc->rqst.desc_tag = cpu_to_be32(FCNVME_LSDESC_RQST);
> +	acc->rqst.desc_len =
> +			fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rqst));
> +	acc->rqst.w0.ls_cmd = rqst_ls_cmd;
> +}
> +
> +static inline int
> +nvme_fc_format_rjt(void *buf, u16 buflen, u8 ls_cmd,
> +			u8 reason, u8 explanation, u8 vendor)
> +{
> +	struct fcnvme_ls_rjt *rjt = buf;
> +
> +	nvme_fc_format_rsp_hdr(buf, FCNVME_LSDESC_RQST,
> +			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_rjt)),
> +			ls_cmd);
> +	rjt->rjt.desc_tag = cpu_to_be32(FCNVME_LSDESC_RJT);
> +	rjt->rjt.desc_len = fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rjt));
> +	rjt->rjt.reason_code = reason;
> +	rjt->rjt.reason_explanation = explanation;
> +	rjt->rjt.vendor = vendor;
> +
> +	return sizeof(struct fcnvme_ls_rjt);
> +}
> +
> +/* Validation Error indexes into the string table below */
> +enum {
> +	VERR_NO_ERROR		= 0,
> +	VERR_CR_ASSOC_LEN	= 1,
> +	VERR_CR_ASSOC_RQST_LEN	= 2,
> +	VERR_CR_ASSOC_CMD	= 3,
> +	VERR_CR_ASSOC_CMD_LEN	= 4,
> +	VERR_ERSP_RATIO		= 5,
> +	VERR_ASSOC_ALLOC_FAIL	= 6,
> +	VERR_QUEUE_ALLOC_FAIL	= 7,
> +	VERR_CR_CONN_LEN	= 8,
> +	VERR_CR_CONN_RQST_LEN	= 9,
> +	VERR_ASSOC_ID		= 10,
> +	VERR_ASSOC_ID_LEN	= 11,
> +	VERR_NO_ASSOC		= 12,
> +	VERR_CONN_ID		= 13,
> +	VERR_CONN_ID_LEN	= 14,
> +	VERR_INVAL_CONN		= 15,
> +	VERR_CR_CONN_CMD	= 16,
> +	VERR_CR_CONN_CMD_LEN	= 17,
> +	VERR_DISCONN_LEN	= 18,
> +	VERR_DISCONN_RQST_LEN	= 19,
> +	VERR_DISCONN_CMD	= 20,
> +	VERR_DISCONN_CMD_LEN	= 21,
> +	VERR_DISCONN_SCOPE	= 22,
> +	VERR_RS_LEN		= 23,
> +	VERR_RS_RQST_LEN	= 24,
> +	VERR_RS_CMD		= 25,
> +	VERR_RS_CMD_LEN		= 26,
> +	VERR_RS_RCTL		= 27,
> +	VERR_RS_RO		= 28,
> +	VERR_LSACC		= 29,
> +	VERR_LSDESC_RQST	= 30,
> +	VERR_LSDESC_RQST_LEN	= 31,
> +	VERR_CR_ASSOC		= 32,
> +	VERR_CR_ASSOC_ACC_LEN	= 33,
> +	VERR_CR_CONN		= 34,
> +	VERR_CR_CONN_ACC_LEN	= 35,
> +	VERR_DISCONN		= 36,
> +	VERR_DISCONN_ACC_LEN	= 37,
> +};
> +
> +static char *validation_errors[] = {
> +	"OK",
> +	"Bad CR_ASSOC Length",
> +	"Bad CR_ASSOC Rqst Length",
> +	"Not CR_ASSOC Cmd",
> +	"Bad CR_ASSOC Cmd Length",
> +	"Bad Ersp Ratio",
> +	"Association Allocation Failed",
> +	"Queue Allocation Failed",
> +	"Bad CR_CONN Length",
> +	"Bad CR_CONN Rqst Length",
> +	"Not Association ID",
> +	"Bad Association ID Length",
> +	"No Association",
> +	"Not Connection ID",
> +	"Bad Connection ID Length",
> +	"Invalid Connection ID",
> +	"Not CR_CONN Cmd",
> +	"Bad CR_CONN Cmd Length",
> +	"Bad DISCONN Length",
> +	"Bad DISCONN Rqst Length",
> +	"Not DISCONN Cmd",
> +	"Bad DISCONN Cmd Length",
> +	"Bad Disconnect Scope",
> +	"Bad RS Length",
> +	"Bad RS Rqst Length",
> +	"Not RS Cmd",
> +	"Bad RS Cmd Length",
> +	"Bad RS R_CTL",
> +	"Bad RS Relative Offset",
> +	"Not LS_ACC",
> +	"Not LSDESC_RQST",
> +	"Bad LSDESC_RQST Length",
> +	"Not CR_ASSOC Rqst",
> +	"Bad CR_ASSOC ACC Length",
> +	"Not CR_CONN Rqst",
> +	"Bad CR_CONN ACC Length",
> +	"Not Disconnect Rqst",
> +	"Bad Disconnect ACC Length",
> +};
> +
> +#endif /* _NVME_FC_TRANSPORT_H */
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index aac7869a70bb..1f3118a3b0a3 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -14,6 +14,7 @@
>   #include "nvmet.h"
>   #include <linux/nvme-fc-driver.h>
>   #include <linux/nvme-fc.h>
> +#include "../host/fc.h"
>   
>   
>   /* *************************** Data Structures/Defines ****************** */
> @@ -1258,102 +1259,6 @@ EXPORT_SYMBOL_GPL(nvmet_fc_unregister_targetport);
>   
>   
>   static void
> -nvmet_fc_format_rsp_hdr(void *buf, u8 ls_cmd, __be32 desc_len, u8 rqst_ls_cmd)
> -{
> -	struct fcnvme_ls_acc_hdr *acc = buf;
> -
> -	acc->w0.ls_cmd = ls_cmd;
> -	acc->desc_list_len = desc_len;
> -	acc->rqst.desc_tag = cpu_to_be32(FCNVME_LSDESC_RQST);
> -	acc->rqst.desc_len =
> -			fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rqst));
> -	acc->rqst.w0.ls_cmd = rqst_ls_cmd;
> -}
> -
> -static int
> -nvmet_fc_format_rjt(void *buf, u16 buflen, u8 ls_cmd,
> -			u8 reason, u8 explanation, u8 vendor)
> -{
> -	struct fcnvme_ls_rjt *rjt = buf;
> -
> -	nvmet_fc_format_rsp_hdr(buf, FCNVME_LSDESC_RQST,
> -			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_rjt)),
> -			ls_cmd);
> -	rjt->rjt.desc_tag = cpu_to_be32(FCNVME_LSDESC_RJT);
> -	rjt->rjt.desc_len = fcnvme_lsdesc_len(sizeof(struct fcnvme_lsdesc_rjt));
> -	rjt->rjt.reason_code = reason;
> -	rjt->rjt.reason_explanation = explanation;
> -	rjt->rjt.vendor = vendor;
> -
> -	return sizeof(struct fcnvme_ls_rjt);
> -}
> -
> -/* Validation Error indexes into the string table below */
> -enum {
> -	VERR_NO_ERROR		= 0,
> -	VERR_CR_ASSOC_LEN	= 1,
> -	VERR_CR_ASSOC_RQST_LEN	= 2,
> -	VERR_CR_ASSOC_CMD	= 3,
> -	VERR_CR_ASSOC_CMD_LEN	= 4,
> -	VERR_ERSP_RATIO		= 5,
> -	VERR_ASSOC_ALLOC_FAIL	= 6,
> -	VERR_QUEUE_ALLOC_FAIL	= 7,
> -	VERR_CR_CONN_LEN	= 8,
> -	VERR_CR_CONN_RQST_LEN	= 9,
> -	VERR_ASSOC_ID		= 10,
> -	VERR_ASSOC_ID_LEN	= 11,
> -	VERR_NO_ASSOC		= 12,
> -	VERR_CONN_ID		= 13,
> -	VERR_CONN_ID_LEN	= 14,
> -	VERR_NO_CONN		= 15,
> -	VERR_CR_CONN_CMD	= 16,
> -	VERR_CR_CONN_CMD_LEN	= 17,
> -	VERR_DISCONN_LEN	= 18,
> -	VERR_DISCONN_RQST_LEN	= 19,
> -	VERR_DISCONN_CMD	= 20,
> -	VERR_DISCONN_CMD_LEN	= 21,
> -	VERR_DISCONN_SCOPE	= 22,
> -	VERR_RS_LEN		= 23,
> -	VERR_RS_RQST_LEN	= 24,
> -	VERR_RS_CMD		= 25,
> -	VERR_RS_CMD_LEN		= 26,
> -	VERR_RS_RCTL		= 27,
> -	VERR_RS_RO		= 28,
> -};
> -
> -static char *validation_errors[] = {
> -	"OK",
> -	"Bad CR_ASSOC Length",
> -	"Bad CR_ASSOC Rqst Length",
> -	"Not CR_ASSOC Cmd",
> -	"Bad CR_ASSOC Cmd Length",
> -	"Bad Ersp Ratio",
> -	"Association Allocation Failed",
> -	"Queue Allocation Failed",
> -	"Bad CR_CONN Length",
> -	"Bad CR_CONN Rqst Length",
> -	"Not Association ID",
> -	"Bad Association ID Length",
> -	"No Association",
> -	"Not Connection ID",
> -	"Bad Connection ID Length",
> -	"No Connection",
> -	"Not CR_CONN Cmd",
> -	"Bad CR_CONN Cmd Length",
> -	"Bad DISCONN Length",
> -	"Bad DISCONN Rqst Length",
> -	"Not DISCONN Cmd",
> -	"Bad DISCONN Cmd Length",
> -	"Bad Disconnect Scope",
> -	"Bad RS Length",
> -	"Bad RS Rqst Length",
> -	"Not RS Cmd",
> -	"Bad RS Cmd Length",
> -	"Bad RS R_CTL",
> -	"Bad RS Relative Offset",
> -};
> -
> -static void
>   nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
>   			struct nvmet_fc_ls_iod *iod)
>   {
> @@ -1407,7 +1312,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Create Association LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				FCNVME_RJT_RC_LOGIC,
>   				FCNVME_RJT_EXP_NONE, 0);
> @@ -1422,7 +1327,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
>   
>   	iod->lsrsp->rsplen = sizeof(*acc);
>   
> -	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
> +	nvme_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(
>   				sizeof(struct fcnvme_ls_cr_assoc_acc)),
>   			FCNVME_LS_CREATE_ASSOCIATION);
> @@ -1498,7 +1403,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Create Connection LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				(ret == VERR_NO_ASSOC) ?
>   					FCNVME_RJT_RC_INV_ASSOC :
> @@ -1515,7 +1420,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
>   
>   	iod->lsrsp->rsplen = sizeof(*acc);
>   
> -	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
> +	nvme_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_cr_conn_acc)),
>   			FCNVME_LS_CREATE_CONNECTION);
>   	acc->connectid.desc_tag = cpu_to_be32(FCNVME_LSDESC_CONN_ID);
> @@ -1578,13 +1483,11 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Disconnect LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvme_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				(ret == VERR_NO_ASSOC) ?
>   					FCNVME_RJT_RC_INV_ASSOC :
> -					(ret == VERR_NO_CONN) ?
> -						FCNVME_RJT_RC_INV_CONN :
> -						FCNVME_RJT_RC_LOGIC,
> +					FCNVME_RJT_RC_LOGIC,
>   				FCNVME_RJT_EXP_NONE, 0);
>   		return;
>   	}
> @@ -1593,7 +1496,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
>   
>   	iod->lsrsp->rsplen = sizeof(*acc);
>   
> -	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
> +	nvme_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(
>   				sizeof(struct fcnvme_ls_disconnect_assoc_acc)),
>   			FCNVME_LS_DISCONNECT_ASSOC);
> @@ -1676,7 +1579,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
>   		nvmet_fc_ls_disconnect(tgtport, iod);
>   		break;
>   	default:
> -		iod->lsrsp->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
> +		iod->lsrsp->rsplen = nvme_fc_format_rjt(iod->rspbuf,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, w0->ls_cmd,
>   				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
>   	}
> 
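One informational aside: with these helpers now shared, the host side can
format rejects the same way the target already does, e.g. (the rspbuf,
rspbuflen and w0 names are illustrative, not from this patch):

    lsrsp->rsplen = nvme_fc_format_rjt(rspbuf, rspbuflen, w0->ls_cmd,
                            FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
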

Makes sense.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data
  2020-02-05 18:37 ` [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data James Smart
  2020-02-28 21:05   ` Sagi Grimberg
  2020-03-06  8:44   ` Hannes Reinecke
@ 2020-03-26 16:39   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:39 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> Ensure that when allocations are done, and the lldd options indicate
> no private data is needed, that private pointers will be set to NULL
> (catches driver error that forgot to set private data size).
> 
> Slightly reorg the allocations so that private data follows allocations
> for LS request/response buffers. Ensures better alignments for the buffers
> as well as the private pointer.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/host/fc.c   | 81 ++++++++++++++++++++++++++++++------------------
>   drivers/nvme/target/fc.c |  5 ++-
>   2 files changed, 54 insertions(+), 32 deletions(-)
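As an informational aside, the allocation layout this moves to can be
summarized as one contiguous kzalloc, condensed here from the
nvme_fc_connect_admin_queue() hunk below (the same shape applies to the
connect-queue and disconnect paths):

    /*
     *   [ lsop | LS rqst buffer | LS acc buffer | lsrqst_priv (optional) ]
     */
    lsop = kzalloc(sizeof(*lsop) +
                   sizeof(*assoc_rqst) + sizeof(*assoc_acc) +
                   ctrl->lport->ops->lsrqst_priv_sz, GFP_KERNEL);

    assoc_rqst = (struct fcnvme_ls_cr_assoc_rqst *)&lsop[1];
    assoc_acc  = (struct fcnvme_ls_cr_assoc_acc *)&assoc_rqst[1];
    lsreq->private = ctrl->lport->ops->lsrqst_priv_sz ?
                            (void *)&assoc_acc[1] : NULL;
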
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 2e5163600f63..1a58e3dc0399 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -396,7 +396,10 @@ nvme_fc_register_localport(struct nvme_fc_port_info *pinfo,
>   	newrec->ops = template;
>   	newrec->dev = dev;
>   	ida_init(&newrec->endp_cnt);
> -	newrec->localport.private = &newrec[1];
> +	if (template->local_priv_sz)
> +		newrec->localport.private = &newrec[1];
> +	else
> +		newrec->localport.private = NULL;
>   	newrec->localport.node_name = pinfo->node_name;
>   	newrec->localport.port_name = pinfo->port_name;
>   	newrec->localport.port_role = pinfo->port_role;
> @@ -705,7 +708,10 @@ nvme_fc_register_remoteport(struct nvme_fc_local_port *localport,
>   	newrec->remoteport.localport = &lport->localport;
>   	newrec->dev = lport->dev;
>   	newrec->lport = lport;
> -	newrec->remoteport.private = &newrec[1];
> +	if (lport->ops->remote_priv_sz)
> +		newrec->remoteport.private = &newrec[1];
> +	else
> +		newrec->remoteport.private = NULL;
>   	newrec->remoteport.port_role = pinfo->port_role;
>   	newrec->remoteport.node_name = pinfo->node_name;
>   	newrec->remoteport.port_name = pinfo->port_name;
> @@ -1153,18 +1159,23 @@ nvme_fc_connect_admin_queue(struct nvme_fc_ctrl *ctrl,
>   	int ret, fcret = 0;
>   
>   	lsop = kzalloc((sizeof(*lsop) +
> -			 ctrl->lport->ops->lsrqst_priv_sz +
> -			 sizeof(*assoc_rqst) + sizeof(*assoc_acc)), GFP_KERNEL);
> +			 sizeof(*assoc_rqst) + sizeof(*assoc_acc) +
> +			 ctrl->lport->ops->lsrqst_priv_sz), GFP_KERNEL);
>   	if (!lsop) {
> +		dev_info(ctrl->ctrl.device,
> +			"NVME-FC{%d}: send Create Association failed: ENOMEM\n",
> +			ctrl->cnum);
>   		ret = -ENOMEM;
>   		goto out_no_memory;
>   	}
> -	lsreq = &lsop->ls_req;
>   
> -	lsreq->private = (void *)&lsop[1];
> -	assoc_rqst = (struct fcnvme_ls_cr_assoc_rqst *)
> -			(lsreq->private + ctrl->lport->ops->lsrqst_priv_sz);
> +	assoc_rqst = (struct fcnvme_ls_cr_assoc_rqst *)&lsop[1];
>   	assoc_acc = (struct fcnvme_ls_cr_assoc_acc *)&assoc_rqst[1];
> +	lsreq = &lsop->ls_req;
> +	if (ctrl->lport->ops->lsrqst_priv_sz)
> +		lsreq->private = &assoc_acc[1];
> +	else
> +		lsreq->private = NULL;
>   
>   	assoc_rqst->w0.ls_cmd = FCNVME_LS_CREATE_ASSOCIATION;
>   	assoc_rqst->desc_list_len =
> @@ -1262,18 +1273,23 @@ nvme_fc_connect_queue(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
>   	int ret, fcret = 0;
>   
>   	lsop = kzalloc((sizeof(*lsop) +
> -			 ctrl->lport->ops->lsrqst_priv_sz +
> -			 sizeof(*conn_rqst) + sizeof(*conn_acc)), GFP_KERNEL);
> +			 sizeof(*conn_rqst) + sizeof(*conn_acc) +
> +			 ctrl->lport->ops->lsrqst_priv_sz), GFP_KERNEL);
>   	if (!lsop) {
> +		dev_info(ctrl->ctrl.device,
> +			"NVME-FC{%d}: send Create Connection failed: ENOMEM\n",
> +			ctrl->cnum);
>   		ret = -ENOMEM;
>   		goto out_no_memory;
>   	}
> -	lsreq = &lsop->ls_req;
>   
> -	lsreq->private = (void *)&lsop[1];
> -	conn_rqst = (struct fcnvme_ls_cr_conn_rqst *)
> -			(lsreq->private + ctrl->lport->ops->lsrqst_priv_sz);
> +	conn_rqst = (struct fcnvme_ls_cr_conn_rqst *)&lsop[1];
>   	conn_acc = (struct fcnvme_ls_cr_conn_acc *)&conn_rqst[1];
> +	lsreq = &lsop->ls_req;
> +	if (ctrl->lport->ops->lsrqst_priv_sz)
> +		lsreq->private = (void *)&conn_acc[1];
> +	else
> +		lsreq->private = NULL;
>   
>   	conn_rqst->w0.ls_cmd = FCNVME_LS_CREATE_CONNECTION;
>   	conn_rqst->desc_list_len = cpu_to_be32(
> @@ -1387,19 +1403,23 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
>   	int ret;
>   
>   	lsop = kzalloc((sizeof(*lsop) +
> -			 ctrl->lport->ops->lsrqst_priv_sz +
> -			 sizeof(*discon_rqst) + sizeof(*discon_acc)),
> -			GFP_KERNEL);
> -	if (!lsop)
> -		/* couldn't sent it... too bad */
> +			sizeof(*discon_rqst) + sizeof(*discon_acc) +
> +			ctrl->lport->ops->lsrqst_priv_sz), GFP_KERNEL);
> +	if (!lsop) {
> +		dev_info(ctrl->ctrl.device,
> +			"NVME-FC{%d}: send Disconnect Association "
> +			"failed: ENOMEM\n",
> +			ctrl->cnum);
>   		return;
> +	}
>   
> -	lsreq = &lsop->ls_req;
> -
> -	lsreq->private = (void *)&lsop[1];
> -	discon_rqst = (struct fcnvme_ls_disconnect_assoc_rqst *)
> -			(lsreq->private + ctrl->lport->ops->lsrqst_priv_sz);
> +	discon_rqst = (struct fcnvme_ls_disconnect_assoc_rqst *)&lsop[1];
>   	discon_acc = (struct fcnvme_ls_disconnect_assoc_acc *)&discon_rqst[1];
> +	lsreq = &lsop->ls_req;
> +	if (ctrl->lport->ops->lsrqst_priv_sz)
> +		lsreq->private = (void *)&discon_acc[1];
> +	else
> +		lsreq->private = NULL;
>   
>   	discon_rqst->w0.ls_cmd = FCNVME_LS_DISCONNECT_ASSOC;
>   	discon_rqst->desc_list_len = cpu_to_be32(
> @@ -1785,15 +1805,17 @@ nvme_fc_init_aen_ops(struct nvme_fc_ctrl *ctrl)
>   	struct nvme_fc_fcp_op *aen_op;
>   	struct nvme_fc_cmd_iu *cmdiu;
>   	struct nvme_command *sqe;
> -	void *private;
> +	void *private = NULL;
>   	int i, ret;
>   
>   	aen_op = ctrl->aen_ops;
>   	for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) {
> -		private = kzalloc(ctrl->lport->ops->fcprqst_priv_sz,
> +		if (ctrl->lport->ops->fcprqst_priv_sz) {
> +			private = kzalloc(ctrl->lport->ops->fcprqst_priv_sz,
>   						GFP_KERNEL);
> -		if (!private)
> -			return -ENOMEM;
> +			if (!private)
> +				return -ENOMEM;
> +		}
>   
>   		cmdiu = &aen_op->cmd_iu;
>   		sqe = &cmdiu->sqe;
> @@ -1824,9 +1846,6 @@ nvme_fc_term_aen_ops(struct nvme_fc_ctrl *ctrl)
>   
>   	aen_op = ctrl->aen_ops;
>   	for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) {
> -		if (!aen_op->fcp_req.private)
> -			continue;
> -
>   		__nvme_fc_exit_request(ctrl, aen_op);
>   
>   		kfree(aen_op->fcp_req.private);
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index 66de6bd8f4fd..66a60a218994 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -1047,7 +1047,10 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
>   
>   	newrec->fc_target_port.node_name = pinfo->node_name;
>   	newrec->fc_target_port.port_name = pinfo->port_name;
> -	newrec->fc_target_port.private = &newrec[1];
> +	if (template->target_priv_sz)
> +		newrec->fc_target_port.private = &newrec[1];
> +	else
> +		newrec->fc_target_port.private = NULL;
>   	newrec->fc_target_port.port_id = pinfo->port_id;
>   	newrec->fc_target_port.port_num = idx;
>   	INIT_LIST_HEAD(&newrec->tgt_list);
> 

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation
  2020-02-05 18:37 ` [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation James Smart
  2020-03-06  8:44   ` Hannes Reinecke
@ 2020-03-26 16:41   ` Himanshu Madhani
  1 sibling, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 16:41 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> Given that both host and target now generate and receive LS's, create
> a single table definition for LS names. Each transport half will have
> a local version of the table.
> 
> As the Disconnect Association LS is issued by both sides, and received
> by both sides, create common routines to format the LS and to validate
> the LS.
> 
> Convert the host side transport to use the new common Disconnect
> Association LS formatting routine.
> 
> Convert the target side transport to use the new common Disconnect
> Association LS validation routine.
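
As a standalone illustration of the pattern (shared definitions kept as static
data and inline helpers in a header, so each transport half that includes it
carries its own local copy), here is a minimal userspace sketch. The command
names mirror the table in the patch; the helper and the bounds macro are
simplified stand-ins, not the transport's real API.

#include <stdio.h>

/* Simplified stand-in for the shared header both fc.c files include. */
#define LAST_LS_CMD_VALUE	6

static const char * const ls_names[] = {
	"Reserved (0)",
	"RJT (1)",
	"ACC (2)",
	"Create Association",
	"Create Connection",
	"Disconnect Association",
	"Disconnect Connection",
};

/* Each translation unit including the header gets its own copy. */
static inline const char *ls_name(unsigned int ls_cmd)
{
	return ls_cmd <= LAST_LS_CMD_VALUE ? ls_names[ls_cmd] : "Unknown";
}

int main(void)
{
	/* Host and target code would resolve the same strings. */
	printf("%s\n", ls_name(5));	/* Disconnect Association */
	printf("%s\n", ls_name(9));	/* Unknown */
	return 0;
}
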
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/host/fc.c   | 25 ++-------------
>   drivers/nvme/host/fc.h   | 79 ++++++++++++++++++++++++++++++++++++++++++++++++
>   drivers/nvme/target/fc.c | 28 ++---------------
>   3 files changed, 83 insertions(+), 49 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 1a58e3dc0399..8fed69504c38 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -1421,29 +1421,8 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
>   	else
>   		lsreq->private = NULL;
>   
> -	discon_rqst->w0.ls_cmd = FCNVME_LS_DISCONNECT_ASSOC;
> -	discon_rqst->desc_list_len = cpu_to_be32(
> -				sizeof(struct fcnvme_lsdesc_assoc_id) +
> -				sizeof(struct fcnvme_lsdesc_disconn_cmd));
> -
> -	discon_rqst->associd.desc_tag = cpu_to_be32(FCNVME_LSDESC_ASSOC_ID);
> -	discon_rqst->associd.desc_len =
> -			fcnvme_lsdesc_len(
> -				sizeof(struct fcnvme_lsdesc_assoc_id));
> -
> -	discon_rqst->associd.association_id = cpu_to_be64(ctrl->association_id);
> -
> -	discon_rqst->discon_cmd.desc_tag = cpu_to_be32(
> -						FCNVME_LSDESC_DISCONN_CMD);
> -	discon_rqst->discon_cmd.desc_len =
> -			fcnvme_lsdesc_len(
> -				sizeof(struct fcnvme_lsdesc_disconn_cmd));
> -
> -	lsreq->rqstaddr = discon_rqst;
> -	lsreq->rqstlen = sizeof(*discon_rqst);
> -	lsreq->rspaddr = discon_acc;
> -	lsreq->rsplen = sizeof(*discon_acc);
> -	lsreq->timeout = NVME_FC_LS_TIMEOUT_SEC;
> +	nvmefc_fmt_lsreq_discon_assoc(lsreq, discon_rqst, discon_acc,
> +				ctrl->association_id);
>   
>   	ret = nvme_fc_send_ls_req_async(ctrl->rport, lsop,
>   				nvme_fc_disconnect_assoc_done);
> diff --git a/drivers/nvme/host/fc.h b/drivers/nvme/host/fc.h
> index 08fa88381d45..05ce566f2caf 100644
> --- a/drivers/nvme/host/fc.h
> +++ b/drivers/nvme/host/fc.h
> @@ -17,6 +17,7 @@
>    */
>   
>   union nvmefc_ls_requests {
> +	struct fcnvme_ls_rqst_w0		w0;
>   	struct fcnvme_ls_cr_assoc_rqst		rq_cr_assoc;
>   	struct fcnvme_ls_cr_conn_rqst		rq_cr_conn;
>   	struct fcnvme_ls_disconnect_assoc_rqst	rq_dis_assoc;
> @@ -145,4 +146,82 @@ static char *validation_errors[] = {
>   	"Bad Disconnect ACC Length",
>   };
>   
> +#define NVME_FC_LAST_LS_CMD_VALUE	FCNVME_LS_DISCONNECT_CONN
> +
> +static char *nvmefc_ls_names[] = {
> +	"Reserved (0)",
> +	"RJT (1)",
> +	"ACC (2)",
> +	"Create Association",
> +	"Create Connection",
> +	"Disconnect Association",
> +	"Disconnect Connection",
> +};
> +
> +static inline void
> +nvmefc_fmt_lsreq_discon_assoc(struct nvmefc_ls_req *lsreq,
> +	struct fcnvme_ls_disconnect_assoc_rqst *discon_rqst,
> +	struct fcnvme_ls_disconnect_assoc_acc *discon_acc,
> +	u64 association_id)
> +{
> +	lsreq->rqstaddr = discon_rqst;
> +	lsreq->rqstlen = sizeof(*discon_rqst);
> +	lsreq->rspaddr = discon_acc;
> +	lsreq->rsplen = sizeof(*discon_acc);
> +	lsreq->timeout = NVME_FC_LS_TIMEOUT_SEC;
> +
> +	discon_rqst->w0.ls_cmd = FCNVME_LS_DISCONNECT_ASSOC;
> +	discon_rqst->desc_list_len = cpu_to_be32(
> +				sizeof(struct fcnvme_lsdesc_assoc_id) +
> +				sizeof(struct fcnvme_lsdesc_disconn_cmd));
> +
> +	discon_rqst->associd.desc_tag = cpu_to_be32(FCNVME_LSDESC_ASSOC_ID);
> +	discon_rqst->associd.desc_len =
> +			fcnvme_lsdesc_len(
> +				sizeof(struct fcnvme_lsdesc_assoc_id));
> +
> +	discon_rqst->associd.association_id = cpu_to_be64(association_id);
> +
> +	discon_rqst->discon_cmd.desc_tag = cpu_to_be32(
> +						FCNVME_LSDESC_DISCONN_CMD);
> +	discon_rqst->discon_cmd.desc_len =
> +			fcnvme_lsdesc_len(
> +				sizeof(struct fcnvme_lsdesc_disconn_cmd));
> +}
> +
> +static inline int
> +nvmefc_vldt_lsreq_discon_assoc(u32 rqstlen,
> +	struct fcnvme_ls_disconnect_assoc_rqst *rqst)
> +{
> +	int ret = 0;
> +
> +	if (rqstlen < sizeof(struct fcnvme_ls_disconnect_assoc_rqst))
> +		ret = VERR_DISCONN_LEN;
> +	else if (rqst->desc_list_len !=
> +			fcnvme_lsdesc_len(
> +				sizeof(struct fcnvme_ls_disconnect_assoc_rqst)))
> +		ret = VERR_DISCONN_RQST_LEN;
> +	else if (rqst->associd.desc_tag != cpu_to_be32(FCNVME_LSDESC_ASSOC_ID))
> +		ret = VERR_ASSOC_ID;
> +	else if (rqst->associd.desc_len !=
> +			fcnvme_lsdesc_len(
> +				sizeof(struct fcnvme_lsdesc_assoc_id)))
> +		ret = VERR_ASSOC_ID_LEN;
> +	else if (rqst->discon_cmd.desc_tag !=
> +			cpu_to_be32(FCNVME_LSDESC_DISCONN_CMD))
> +		ret = VERR_DISCONN_CMD;
> +	else if (rqst->discon_cmd.desc_len !=
> +			fcnvme_lsdesc_len(
> +				sizeof(struct fcnvme_lsdesc_disconn_cmd)))
> +		ret = VERR_DISCONN_CMD_LEN;
> +	/*
> +	 * As the standard changed on the LS, check if old format and scope
> +	 * something other than Association (e.g. 0).
> +	 */
> +	else if (rqst->discon_cmd.rsvd8[0])
> +		ret = VERR_DISCONN_SCOPE;
> +
> +	return ret;
> +}
> +
>   #endif /* _NVME_FC_TRANSPORT_H */
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index 66a60a218994..5739df7edc59 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -1442,32 +1442,8 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
>   
>   	memset(acc, 0, sizeof(*acc));
>   
> -	if (iod->rqstdatalen < sizeof(struct fcnvme_ls_disconnect_assoc_rqst))
> -		ret = VERR_DISCONN_LEN;
> -	else if (rqst->desc_list_len !=
> -			fcnvme_lsdesc_len(
> -				sizeof(struct fcnvme_ls_disconnect_assoc_rqst)))
> -		ret = VERR_DISCONN_RQST_LEN;
> -	else if (rqst->associd.desc_tag != cpu_to_be32(FCNVME_LSDESC_ASSOC_ID))
> -		ret = VERR_ASSOC_ID;
> -	else if (rqst->associd.desc_len !=
> -			fcnvme_lsdesc_len(
> -				sizeof(struct fcnvme_lsdesc_assoc_id)))
> -		ret = VERR_ASSOC_ID_LEN;
> -	else if (rqst->discon_cmd.desc_tag !=
> -			cpu_to_be32(FCNVME_LSDESC_DISCONN_CMD))
> -		ret = VERR_DISCONN_CMD;
> -	else if (rqst->discon_cmd.desc_len !=
> -			fcnvme_lsdesc_len(
> -				sizeof(struct fcnvme_lsdesc_disconn_cmd)))
> -		ret = VERR_DISCONN_CMD_LEN;
> -	/*
> -	 * As the standard changed on the LS, check if old format and scope
> -	 * something other than Association (e.g. 0).
> -	 */
> -	else if (rqst->discon_cmd.rsvd8[0])
> -		ret = VERR_DISCONN_SCOPE;
> -	else {
> +	ret = nvmefc_vldt_lsreq_discon_assoc(iod->rqstdatalen, rqst);
> +	if (!ret) {
>   		/* match an active association */
>   		assoc = nvmet_fc_find_target_assoc(tgtport,
>   				be64_to_cpu(rqst->associd.association_id));
> 

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic
  2020-02-05 18:37 ` [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic James Smart
  2020-02-28 21:08   ` Sagi Grimberg
  2020-03-06  8:47   ` Hannes Reinecke
@ 2020-03-26 19:16   ` Himanshu Madhani
  2 siblings, 0 replies; 80+ messages in thread
From: Himanshu Madhani @ 2020-03-26 19:16 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: martin.petersen

On 2/5/2020 12:37 PM, James Smart wrote:
> Convert the assoc_active flag to an atomic to remove any small
> race conditions on transitioning to active and back.
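
To see why the compare-and-exchange form closes the window, here is a small
userspace sketch using C11 atomics rather than the kernel's atomic_t API; the
state names mirror the patch, everything else is illustrative.

#include <stdatomic.h>
#include <stdio.h>

enum { ASSOC_INACTIVE = 0, ASSOC_ACTIVE = 1 };

static atomic_int assoc_active = ASSOC_INACTIVE;

/* Returns 0 if this caller won the inactive -> active transition. */
static int activate(void)
{
	int expected = ASSOC_INACTIVE;

	/*
	 * The test of the old state and the store of the new state happen
	 * as one atomic step, so two callers cannot both observe
	 * "inactive" and proceed, which was possible with a plain bool.
	 */
	if (!atomic_compare_exchange_strong(&assoc_active, &expected,
					    ASSOC_ACTIVE))
		return 1;	/* already active */
	return 0;
}

static void deactivate(void)
{
	int expected = ASSOC_ACTIVE;

	/* Tear down only if we really were the active owner. */
	if (atomic_compare_exchange_strong(&assoc_active, &expected,
					   ASSOC_INACTIVE))
		printf("association torn down\n");
}

int main(void)
{
	printf("first activate:  %d\n", activate());	/* 0 */
	printf("second activate: %d\n", activate());	/* 1 */
	deactivate();
	return 0;
}
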
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/host/fc.c | 23 ++++++++++++++++-------
>   1 file changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 8fed69504c38..40e1141c76db 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -131,6 +131,11 @@ enum nvme_fcctrl_flags {
>   	FCCTRL_TERMIO		= (1 << 0),
>   };
>   
> +enum {
> +	ASSOC_INACTIVE		= 0,
> +	ASSOC_ACTIVE		= 1,
> +};
> +
>   struct nvme_fc_ctrl {
>   	spinlock_t		lock;
>   	struct nvme_fc_queue	*queues;
> @@ -140,7 +145,7 @@ struct nvme_fc_ctrl {
>   	u32			cnum;
>   
>   	bool			ioq_live;
> -	bool			assoc_active;
> +	atomic_t		assoc_active;
>   	atomic_t		err_work_active;
>   	u64			association_id;
>   
> @@ -2584,12 +2589,14 @@ static int
>   nvme_fc_ctlr_active_on_rport(struct nvme_fc_ctrl *ctrl)
>   {
>   	struct nvme_fc_rport *rport = ctrl->rport;
> +	int priorstate;
>   	u32 cnt;
>   
> -	if (ctrl->assoc_active)
> +	priorstate = atomic_cmpxchg(&ctrl->assoc_active,
> +					ASSOC_INACTIVE, ASSOC_ACTIVE);
> +	if (priorstate != ASSOC_INACTIVE)
>   		return 1;
>   
> -	ctrl->assoc_active = true;
>   	cnt = atomic_inc_return(&rport->act_ctrl_cnt);
>   	if (cnt == 1)
>   		nvme_fc_rport_active_on_lport(rport);
> @@ -2746,7 +2753,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
>   	__nvme_fc_delete_hw_queue(ctrl, &ctrl->queues[0], 0);
>   out_free_queue:
>   	nvme_fc_free_queue(&ctrl->queues[0]);
> -	ctrl->assoc_active = false;
> +	atomic_set(&ctrl->assoc_active, ASSOC_INACTIVE);
>   	nvme_fc_ctlr_inactive_on_rport(ctrl);
>   
>   	return ret;
> @@ -2762,10 +2769,12 @@ static void
>   nvme_fc_delete_association(struct nvme_fc_ctrl *ctrl)
>   {
>   	unsigned long flags;
> +	int priorstate;
>   
> -	if (!ctrl->assoc_active)
> +	priorstate = atomic_cmpxchg(&ctrl->assoc_active,
> +					ASSOC_ACTIVE, ASSOC_INACTIVE);
> +	if (priorstate != ASSOC_ACTIVE)
>   		return;
> -	ctrl->assoc_active = false;
>   
>   	spin_lock_irqsave(&ctrl->lock, flags);
>   	ctrl->flags |= FCCTRL_TERMIO;
> @@ -3096,7 +3105,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
>   	ctrl->dev = lport->dev;
>   	ctrl->cnum = idx;
>   	ctrl->ioq_live = false;
> -	ctrl->assoc_active = false;
> +	atomic_set(&ctrl->assoc_active, ASSOC_INACTIVE);
>   	atomic_set(&ctrl->err_work_active, 0);
>   	init_waitqueue_head(&ctrl->ioabort_wait);
>   
> 

Looks Good

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support
  2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
                   ` (29 preceding siblings ...)
  2020-03-06  9:26 ` [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support Hannes Reinecke
@ 2020-03-31 14:29 ` Christoph Hellwig
  30 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2020-03-31 14:29 UTC (permalink / raw)
  To: James Smart; +Cc: martin.petersen, linux-nvme

James,

can you resend this with the nitpicks addressed and the reviewed-bys
collected?

^ permalink raw reply	[flat|nested] 80+ messages in thread

end of thread, other threads:[~2020-03-31 15:55 UTC | newest]

Thread overview: 80+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-05 18:37 [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
2020-02-05 18:37 ` [PATCH 01/29] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
2020-02-28 20:36   ` Sagi Grimberg
2020-03-06  8:16   ` Hannes Reinecke
2020-03-26 16:10   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 02/29] nvmet-fc: fix typo in comment James Smart
2020-02-28 20:36   ` Sagi Grimberg
2020-03-06  8:17   ` Hannes Reinecke
2020-03-26 16:10   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 03/29] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request James Smart
2020-02-28 20:38   ` Sagi Grimberg
2020-03-06  8:19   ` Hannes Reinecke
2020-03-26 16:16   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header James Smart
2020-02-28 20:40   ` Sagi Grimberg
2020-03-06  8:21   ` Hannes Reinecke
2020-03-26 16:26   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 05/29] lpfc: " James Smart
2020-02-28 20:40   ` Sagi Grimberg
2020-03-06  8:25   ` Hannes Reinecke
2020-03-26 16:30   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 06/29] nvme-fcloop: Fix deallocation of working context James Smart
2020-02-28 20:43   ` Sagi Grimberg
2020-03-06  8:34   ` Hannes Reinecke
2020-03-26 16:35   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 07/29] nvme-fc nvmet-fc: refactor for common LS definitions James Smart
2020-03-06  8:35   ` Hannes Reinecke
2020-03-26 16:36   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 08/29] nvmet-fc: Better size LS buffers James Smart
2020-02-28 21:04   ` Sagi Grimberg
2020-03-06  8:36   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 09/29] nvme-fc: Ensure private pointers are NULL if no data James Smart
2020-02-28 21:05   ` Sagi Grimberg
2020-03-06  8:44   ` Hannes Reinecke
2020-03-26 16:39   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 10/29] nvmefc: Use common definitions for LS names, formatting, and validation James Smart
2020-03-06  8:44   ` Hannes Reinecke
2020-03-26 16:41   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 11/29] nvme-fc: convert assoc_active flag to atomic James Smart
2020-02-28 21:08   ` Sagi Grimberg
2020-03-06  8:47   ` Hannes Reinecke
2020-03-26 19:16   ` Himanshu Madhani
2020-02-05 18:37 ` [PATCH 12/29] nvme-fc: Add Disconnect Association Rcv support James Smart
2020-03-06  9:00   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 13/29] nvmet-fc: add LS failure messages James Smart
2020-03-06  9:01   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 14/29] nvmet-fc: perform small cleanups on unneeded checks James Smart
2020-03-06  9:01   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 15/29] nvmet-fc: track hostport handle for associations James Smart
2020-03-06  9:02   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 16/29] nvmet-fc: rename ls_list to ls_rcv_list James Smart
2020-03-06  9:03   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 17/29] nvmet-fc: Add Disconnect Association Xmt support James Smart
2020-03-06  9:04   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 18/29] nvme-fcloop: refactor to enable target to host LS James Smart
2020-03-06  9:06   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 19/29] nvme-fcloop: add target to host LS request support James Smart
2020-03-06  9:07   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 20/29] lpfc: Refactor lpfc nvme headers James Smart
2020-03-06  9:18   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 21/29] lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx James Smart
2020-03-06  9:19   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 22/29] lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions James Smart
2020-03-06  9:19   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 23/29] lpfc: Refactor NVME LS receive handling James Smart
2020-03-06  9:20   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 24/29] lpfc: Refactor Send LS Request support James Smart
2020-03-06  9:20   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 25/29] lpfc: Refactor Send LS Abort support James Smart
2020-03-06  9:21   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 26/29] lpfc: Refactor Send LS Response support James Smart
2020-03-06  9:21   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 27/29] lpfc: nvme: Add Receive LS Request and Send LS Response support to nvme James Smart
2020-03-06  9:23   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 28/29] lpfc: nvmet: Add support for NVME LS request hosthandle James Smart
2020-03-06  9:23   ` Hannes Reinecke
2020-02-05 18:37 ` [PATCH 29/29] lpfc: nvmet: Add Send LS Request and Abort LS Request support James Smart
2020-03-06  9:24   ` Hannes Reinecke
2020-03-06  9:26 ` [PATCH 00/29] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support Hannes Reinecke
2020-03-31 14:29 ` Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).