* [PATCH 0/9] lpfc updates for 11.4.0.6
@ 2017-12-09  1:18 James Smart
  2017-12-09  1:18 ` [PATCH 1/9] lpfc: Fix random heartbeat timeouts during heavy IO James Smart
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart

This patch set provides a number of bug fixes and one addition to
the driver.

The patches were cut against Martin's 4.16/scsi-queue tree. There
are no outside dependencies, and the patches are expected to be
pulled via Martin's tree.

James Smart (9):
  lpfc: Fix random heartbeat timeouts during heavy IO
  lpfc: Fix -EOVERFLOW behavior for NVMET and defer_rcv
  lpfc: Fix receive PRLI handling
  lpfc: Increase SCSI CQ and WQ sizes.
  lpfc: Fix SCSI LUN discovery when SCSI and NVME enabled
  lpfc: Fix issues connecting with nvme initiator
  lpfc: Fix infinite wait when driver unregisters a remote NVME port.
  lpfc: Beef up stat counters for debug
  lpfc: update driver version to 11.4.0.6

 drivers/scsi/lpfc/lpfc.h           |   2 +
 drivers/scsi/lpfc/lpfc_attr.c      |  46 ++++++-
 drivers/scsi/lpfc/lpfc_ct.c        |   1 +
 drivers/scsi/lpfc/lpfc_debugfs.c   |  49 +++++++-
 drivers/scsi/lpfc/lpfc_els.c       |  37 +++---
 drivers/scsi/lpfc/lpfc_init.c      |  81 ++++++++----
 drivers/scsi/lpfc/lpfc_nportdisc.c |  88 +++++++++----
 drivers/scsi/lpfc/lpfc_nvme.c      | 244 ++++++++++++++++++++-----------------
 drivers/scsi/lpfc/lpfc_nvme.h      |  16 ++-
 drivers/scsi/lpfc/lpfc_nvmet.c     |  58 +++++++--
 drivers/scsi/lpfc/lpfc_nvmet.h     |   6 +
 drivers/scsi/lpfc/lpfc_sli.c       |  24 ++--
 drivers/scsi/lpfc/lpfc_sli4.h      |   6 +-
 drivers/scsi/lpfc/lpfc_version.h   |   2 +-
 14 files changed, 451 insertions(+), 209 deletions(-)

-- 
2.13.1

* [PATCH 1/9] lpfc: Fix random heartbeat timeouts during heavy IO
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 11:33   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 2/9] lpfc: Fix -EOVERFLOW behavior for NVMET and defer_rcv James Smart
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

NVME targets appear to randomly disconnect from the initiator
when running heavy IO.

The error occurred because the host's aggregate io load (across
all controllers) was beyond the maximum exchange count for nvme on
the adapter. The driver was properly returning a resource-busy
status, but the io load was so great that heartbeat commands would
be bounced and would not get a successful retry within the fuzz
amount for the nvme heartbeat (yes, a very high io load!). Thus the
target was terminating the controller due to a keep-alive failure.

Resolve by reserving a few exchanges (tracked by counters) which
can be used when the adapter is out of normal exchanges and the
command is an NVME heartbeat command. Because counters are used,
while the reserved command is outstanding, as soon as any other
exchange completes the counters are adjusted and the reserved count
is replenished. The heartbeat then completes execution in a normal
fashion.
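
In essence the reservation works like the following minimal sketch
(hypothetical names, not the driver code; it assumes the counters
are already protected by a lock, as the real get/put lists are):

#define EXPEDITE_RESERVE_CNT 8          /* like LPFC_NVME_EXPEDITE_XRICNT */

struct buf_pool {
	int free_cnt;                   /* buffers currently available */
};

/* Claim a buffer.  Normal IO must leave EXPEDITE_RESERVE_CNT buffers
 * untouched; an expedited command (the NVME keep-alive) may dip into
 * that reserve.  Returns 1 on success, 0 if the caller should see
 * -EBUSY.
 */
static int pool_get(struct buf_pool *p, int expedite)
{
	if (p->free_cnt > EXPEDITE_RESERVE_CNT ||
	    (expedite && p->free_cnt > 0)) {
		p->free_cnt--;
		return 1;
	}
	return 0;
}

/* Any IO completion returns its buffer, replenishing the reserve. */
static void pool_put(struct buf_pool *p)
{
	p->free_cnt++;
}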

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc.h      |  2 ++
 drivers/scsi/lpfc/lpfc_init.c | 16 ++++++++++-
 drivers/scsi/lpfc/lpfc_nvme.c | 66 +++++++++++++++++++++++++++++--------------
 drivers/scsi/lpfc/lpfc_nvme.h |  1 +
 4 files changed, 63 insertions(+), 22 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index dd2191c83052..61fb46da05d4 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -945,6 +945,8 @@ struct lpfc_hba {
 	struct list_head lpfc_nvme_buf_list_get;
 	struct list_head lpfc_nvme_buf_list_put;
 	uint32_t total_nvme_bufs;
+	uint32_t get_nvme_bufs;
+	uint32_t put_nvme_bufs;
 	struct list_head lpfc_iocb_list;
 	uint32_t total_iocbq_bufs;
 	struct list_head active_rrq_list;
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index fa211550a32a..44a98bc913f5 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -1034,6 +1034,7 @@ lpfc_hba_down_post_s4(struct lpfc_hba *phba)
 	LIST_HEAD(nvmet_aborts);
 	unsigned long iflag = 0;
 	struct lpfc_sglq *sglq_entry = NULL;
+	int cnt;
 
 
 	lpfc_sli_hbqbuf_free_all(phba);
@@ -1090,11 +1091,14 @@ lpfc_hba_down_post_s4(struct lpfc_hba *phba)
 	spin_unlock_irqrestore(&phba->scsi_buf_list_put_lock, iflag);
 
 	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) {
+		cnt = 0;
 		list_for_each_entry_safe(psb, psb_next, &nvme_aborts, list) {
 			psb->pCmd = NULL;
 			psb->status = IOSTAT_SUCCESS;
+			cnt++;
 		}
 		spin_lock_irqsave(&phba->nvme_buf_list_put_lock, iflag);
+		phba->put_nvme_bufs += cnt;
 		list_splice(&nvme_aborts, &phba->lpfc_nvme_buf_list_put);
 		spin_unlock_irqrestore(&phba->nvme_buf_list_put_lock, iflag);
 
@@ -3339,6 +3343,7 @@ lpfc_nvme_free(struct lpfc_hba *phba)
 	list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
 				 &phba->lpfc_nvme_buf_list_put, list) {
 		list_del(&lpfc_ncmd->list);
+		phba->put_nvme_bufs--;
 		dma_pool_free(phba->lpfc_sg_dma_buf_pool, lpfc_ncmd->data,
 			      lpfc_ncmd->dma_handle);
 		kfree(lpfc_ncmd);
@@ -3350,6 +3355,7 @@ lpfc_nvme_free(struct lpfc_hba *phba)
 	list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
 				 &phba->lpfc_nvme_buf_list_get, list) {
 		list_del(&lpfc_ncmd->list);
+		phba->get_nvme_bufs--;
 		dma_pool_free(phba->lpfc_sg_dma_buf_pool, lpfc_ncmd->data,
 			      lpfc_ncmd->dma_handle);
 		kfree(lpfc_ncmd);
@@ -3754,9 +3760,11 @@ lpfc_sli4_nvme_sgl_update(struct lpfc_hba *phba)
 	uint16_t i, lxri, els_xri_cnt;
 	uint16_t nvme_xri_cnt, nvme_xri_max;
 	LIST_HEAD(nvme_sgl_list);
-	int rc;
+	int rc, cnt;
 
 	phba->total_nvme_bufs = 0;
+	phba->get_nvme_bufs = 0;
+	phba->put_nvme_bufs = 0;
 
 	if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME))
 		return 0;
@@ -3780,6 +3788,9 @@ lpfc_sli4_nvme_sgl_update(struct lpfc_hba *phba)
 	spin_lock(&phba->nvme_buf_list_put_lock);
 	list_splice_init(&phba->lpfc_nvme_buf_list_get, &nvme_sgl_list);
 	list_splice(&phba->lpfc_nvme_buf_list_put, &nvme_sgl_list);
+	cnt = phba->get_nvme_bufs + phba->put_nvme_bufs;
+	phba->get_nvme_bufs = 0;
+	phba->put_nvme_bufs = 0;
 	spin_unlock(&phba->nvme_buf_list_put_lock);
 	spin_unlock_irq(&phba->nvme_buf_list_get_lock);
 
@@ -3824,6 +3835,7 @@ lpfc_sli4_nvme_sgl_update(struct lpfc_hba *phba)
 	spin_lock_irq(&phba->nvme_buf_list_get_lock);
 	spin_lock(&phba->nvme_buf_list_put_lock);
 	list_splice_init(&nvme_sgl_list, &phba->lpfc_nvme_buf_list_get);
+	phba->get_nvme_bufs = cnt;
 	INIT_LIST_HEAD(&phba->lpfc_nvme_buf_list_put);
 	spin_unlock(&phba->nvme_buf_list_put_lock);
 	spin_unlock_irq(&phba->nvme_buf_list_get_lock);
@@ -5609,8 +5621,10 @@ lpfc_setup_driver_resource_phase1(struct lpfc_hba *phba)
 		/* Initialize the NVME buffer list used by driver for NVME IO */
 		spin_lock_init(&phba->nvme_buf_list_get_lock);
 		INIT_LIST_HEAD(&phba->lpfc_nvme_buf_list_get);
+		phba->get_nvme_bufs = 0;
 		spin_lock_init(&phba->nvme_buf_list_put_lock);
 		INIT_LIST_HEAD(&phba->lpfc_nvme_buf_list_put);
+		phba->put_nvme_bufs = 0;
 	}
 
 	/* Initialize the fabric iocb list */
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index c9945ed4b791..1097ca5a7a8e 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -57,7 +57,8 @@
 /* NVME initiator-based functions */
 
 static struct lpfc_nvme_buf *
-lpfc_get_nvme_buf(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp);
+lpfc_get_nvme_buf(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
+		  int expedite);
 
 static void
 lpfc_release_nvme_buf(struct lpfc_hba *, struct lpfc_nvme_buf *);
@@ -1265,6 +1266,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 			struct nvmefc_fcp_req *pnvme_fcreq)
 {
 	int ret = 0;
+	int expedite = 0;
 	struct lpfc_nvme_lport *lport;
 	struct lpfc_vport *vport;
 	struct lpfc_hba *phba;
@@ -1273,6 +1275,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 	struct lpfc_nvme_rport *rport;
 	struct lpfc_nvme_qhandle *lpfc_queue_info;
 	struct lpfc_nvme_fcpreq_priv *freqpriv;
+	struct nvme_common_command *sqe;
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	uint64_t start = 0;
 #endif
@@ -1354,15 +1357,27 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 
 	}
 
+	/* Currently only NVME Keep alive commands should be expedited
+	 * if the driver runs out of a resource. These should only be
+	 * issued on the admin queue, qidx 0
+	 */
+	if (!lpfc_queue_info->qidx && !pnvme_fcreq->sg_cnt) {
+		sqe = &((struct nvme_fc_cmd_iu *)
+			pnvme_fcreq->cmdaddr)->sqe.common;
+		if (sqe->opcode == nvme_admin_keep_alive)
+			expedite = 1;
+	}
+
 	/* The node is shared with FCP IO, make sure the IO pending count does
 	 * not exceed the programmed depth.
 	 */
-	if (atomic_read(&ndlp->cmd_pending) >= ndlp->cmd_qdepth) {
+	if ((atomic_read(&ndlp->cmd_pending) >= ndlp->cmd_qdepth) &&
+	    !expedite) {
 		ret = -EBUSY;
 		goto out_fail;
 	}
 
-	lpfc_ncmd = lpfc_get_nvme_buf(phba, ndlp);
+	lpfc_ncmd = lpfc_get_nvme_buf(phba, ndlp, expedite);
 	if (lpfc_ncmd == NULL) {
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
 				 "6065 driver's buffer pool is empty, "
@@ -1991,6 +2006,8 @@ lpfc_repost_nvme_sgl_list(struct lpfc_hba *phba)
 	spin_lock(&phba->nvme_buf_list_put_lock);
 	list_splice_init(&phba->lpfc_nvme_buf_list_get, &post_nblist);
 	list_splice(&phba->lpfc_nvme_buf_list_put, &post_nblist);
+	phba->get_nvme_bufs = 0;
+	phba->put_nvme_bufs = 0;
 	spin_unlock(&phba->nvme_buf_list_put_lock);
 	spin_unlock_irq(&phba->nvme_buf_list_get_lock);
 
@@ -2127,6 +2144,20 @@ lpfc_new_nvme_buf(struct lpfc_vport *vport, int num_to_alloc)
 	return num_posted;
 }
 
+static inline struct lpfc_nvme_buf *
+lpfc_nvme_buf(struct lpfc_hba *phba)
+{
+	struct lpfc_nvme_buf *lpfc_ncmd, *lpfc_ncmd_next;
+
+	list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
+				 &phba->lpfc_nvme_buf_list_get, list) {
+		list_del_init(&lpfc_ncmd->list);
+		phba->get_nvme_bufs--;
+		return lpfc_ncmd;
+	}
+	return NULL;
+}
+
 /**
  * lpfc_get_nvme_buf - Get a nvme buffer from lpfc_nvme_buf_list of the HBA
  * @phba: The HBA for which this call is being executed.
@@ -2139,35 +2170,27 @@ lpfc_new_nvme_buf(struct lpfc_vport *vport, int num_to_alloc)
  *   Pointer to lpfc_nvme_buf - Success
  **/
 static struct lpfc_nvme_buf *
-lpfc_get_nvme_buf(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
+lpfc_get_nvme_buf(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
+		  int expedite)
 {
-	struct lpfc_nvme_buf *lpfc_ncmd, *lpfc_ncmd_next;
+	struct lpfc_nvme_buf *lpfc_ncmd = NULL;
 	unsigned long iflag = 0;
-	int found = 0;
 
 	spin_lock_irqsave(&phba->nvme_buf_list_get_lock, iflag);
-	list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
-				 &phba->lpfc_nvme_buf_list_get, list) {
-		list_del_init(&lpfc_ncmd->list);
-		found = 1;
-		break;
-	}
-	if (!found) {
+	if (phba->get_nvme_bufs > LPFC_NVME_EXPEDITE_XRICNT || expedite)
+		lpfc_ncmd = lpfc_nvme_buf(phba);
+	if (!lpfc_ncmd) {
 		spin_lock(&phba->nvme_buf_list_put_lock);
 		list_splice(&phba->lpfc_nvme_buf_list_put,
 			    &phba->lpfc_nvme_buf_list_get);
+		phba->get_nvme_bufs += phba->put_nvme_bufs;
 		INIT_LIST_HEAD(&phba->lpfc_nvme_buf_list_put);
+		phba->put_nvme_bufs = 0;
 		spin_unlock(&phba->nvme_buf_list_put_lock);
-		list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
-					 &phba->lpfc_nvme_buf_list_get, list) {
-			list_del_init(&lpfc_ncmd->list);
-			found = 1;
-			break;
-		}
+		if (phba->get_nvme_bufs > LPFC_NVME_EXPEDITE_XRICNT || expedite)
+			lpfc_ncmd = lpfc_nvme_buf(phba);
 	}
 	spin_unlock_irqrestore(&phba->nvme_buf_list_get_lock, iflag);
-	if (!found)
-		return NULL;
 	return  lpfc_ncmd;
 }
 
@@ -2205,6 +2228,7 @@ lpfc_release_nvme_buf(struct lpfc_hba *phba, struct lpfc_nvme_buf *lpfc_ncmd)
 		lpfc_ncmd->cur_iocbq.iocb_flag = LPFC_IO_NVME;
 		spin_lock_irqsave(&phba->nvme_buf_list_put_lock, iflag);
 		list_add_tail(&lpfc_ncmd->list, &phba->lpfc_nvme_buf_list_put);
+		phba->put_nvme_bufs++;
 		spin_unlock_irqrestore(&phba->nvme_buf_list_put_lock, iflag);
 	}
 }
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 903ec37f465f..c0833e469b7c 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -28,6 +28,7 @@
 #define LPFC_NVME_ERSP_LEN		0x20
 
 #define LPFC_NVME_WAIT_TMO              10
+#define LPFC_NVME_EXPEDITE_XRICNT	8
 
 struct lpfc_nvme_qhandle {
 	uint32_t index;		/* WQ index to use */
-- 
2.13.1

* [PATCH 2/9] lpfc: Fix -EOVERFLOW behavior for NVMET and defer_rcv
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
  2017-12-09  1:18 ` [PATCH 1/9] lpfc: Fix random heartbeat timeouts during heavy IO James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:41   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 3/9] lpfc: Fix receive PRLI handling James Smart
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

The driver is all set to handle the defer_rcv api for the nvmet_fc
transport, yet it didn't properly recognize the return status when
the defer_rcv occurred. The driver treated it simply as an error
and aborted the io. Several residual issues occurred at that point.

Finish the defer_rcv support: recognize the return status when the
io request is being handled in a deferred style. This stops the
rogue aborts. Also replenish the async cmd receive buffer in the
deferred receive path if needed.
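
The key change is how the receive path dispatches on the transport's
return code; a minimal self-contained sketch (helper names are
hypothetical, -EOVERFLOW is the deferred-receive status used by this
patch):

#include <errno.h>

#define DEFER_RCV_REPOST 0x20           /* like LPFC_NVMET_DEFER_RCV_REPOST */

struct rcv_ctx { unsigned int flags; };

/* Stubs standing in for the real driver helpers. */
static void repost_rcv_buffer(struct rcv_ctx *c) { (void)c; }
static void drop_and_abort(struct rcv_ctx *c)    { (void)c; }

/* Dispatch on the return from handing the FCP cmd to the transport. */
static void handle_rcv_status(struct rcv_ctx *ctx, int rc)
{
	if (rc == 0) {                  /* accepted: repost right away */
		repost_rcv_buffer(ctx);
		return;
	}
	if (rc == -EOVERFLOW) {         /* processing was deferred */
		/* Keep the buffer; the .defer_rcv callback will later
		 * either repost it to the RQ or free it, based on this
		 * flag.
		 */
		ctx->flags &= ~DEFER_RCV_REPOST;
		return;
	}
	drop_and_abort(ctx);            /* any other status is an error */
}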

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_nvmet.c | 24 ++++++++++++++++++++++--
 drivers/scsi/lpfc/lpfc_nvmet.h |  1 +
 drivers/scsi/lpfc/lpfc_sli.c   | 24 +++++++++++++-----------
 3 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index d80cd1def3b9..02a1cfa10f72 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -38,6 +38,7 @@
 
 #include <../drivers/nvme/host/nvme.h>
 #include <linux/nvme-fc-driver.h>
+#include <linux/nvme-fc.h>
 
 #include "lpfc_version.h"
 #include "lpfc_hw4.h"
@@ -218,6 +219,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 		ctxp->entry_cnt = 1;
 		ctxp->flag = 0;
 		ctxp->ctxbuf = ctx_buf;
+		ctxp->rqb_buffer = (void *)nvmebuf;
 		spin_lock_init(&ctxp->ctxlock);
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
@@ -253,6 +255,17 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 			return;
 		}
 
+		/* Processing of FCP command is deferred */
+		if (rc == -EOVERFLOW) {
+			lpfc_nvmeio_data(phba,
+					 "NVMET RCV BUSY: xri x%x sz %d "
+					 "from %06x\n",
+					 oxid, size, sid);
+			/* defer repost rcv buffer till .defer_rcv callback */
+			ctxp->flag &= ~LPFC_NVMET_DEFER_RCV_REPOST;
+			atomic_inc(&tgtp->rcv_fcp_cmd_out);
+			return;
+		}
 		atomic_inc(&tgtp->rcv_fcp_cmd_drop);
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"2582 FCP Drop IO x%x: err x%x: x%x x%x x%x\n",
@@ -921,7 +934,11 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 
 	tgtp = phba->targetport->private;
 	atomic_inc(&tgtp->rcv_fcp_cmd_defer);
-	lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
+	if (ctxp->flag & LPFC_NVMET_DEFER_RCV_REPOST)
+		lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
+	else
+		nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+	ctxp->flag &= ~LPFC_NVMET_DEFER_RCV_REPOST;
 }
 
 static struct nvmet_fc_target_template lpfc_tgttemplate = {
@@ -1693,6 +1710,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	ctxp->entry_cnt = 1;
 	ctxp->flag = 0;
 	ctxp->ctxbuf = ctx_buf;
+	ctxp->rqb_buffer = (void *)nvmebuf;
 	spin_lock_init(&ctxp->ctxlock);
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
@@ -1726,6 +1744,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 
 	/* Process FCP command */
 	if (rc == 0) {
+		ctxp->rqb_buffer = NULL;
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
 		lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
 		return;
@@ -1737,10 +1756,11 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 				 "NVMET RCV BUSY: xri x%x sz %d from %06x\n",
 				 oxid, size, sid);
 		/* defer reposting rcv buffer till .defer_rcv callback */
-		ctxp->rqb_buffer = nvmebuf;
+		ctxp->flag |= LPFC_NVMET_DEFER_RCV_REPOST;
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
 		return;
 	}
+	ctxp->rqb_buffer = nvmebuf;
 
 	atomic_inc(&tgtp->rcv_fcp_cmd_drop);
 	lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
index 6723e7b81946..03096024e073 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ b/drivers/scsi/lpfc/lpfc_nvmet.h
@@ -126,6 +126,7 @@ struct lpfc_nvmet_rcv_ctx {
 #define LPFC_NVMET_XBUSY		0x4  /* XB bit set on IO cmpl */
 #define LPFC_NVMET_CTX_RLS		0x8  /* ctx free requested */
 #define LPFC_NVMET_ABTS_RCV		0x10  /* ABTS received on exchange */
+#define LPFC_NVMET_DEFER_RCV_REPOST	0x20  /* repost to RQ on defer rcv */
 	struct rqb_dmabuf *rqb_buffer;
 	struct lpfc_nvmet_ctxbuf *ctxbuf;
 
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index 1d489b89954e..5f5528a12308 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -475,28 +475,30 @@ lpfc_sli4_rq_put(struct lpfc_queue *hq, struct lpfc_queue *dq,
 	struct lpfc_rqe *temp_hrqe;
 	struct lpfc_rqe *temp_drqe;
 	struct lpfc_register doorbell;
-	int put_index;
+	int hq_put_index;
+	int dq_put_index;
 
 	/* sanity check on queue memory */
 	if (unlikely(!hq) || unlikely(!dq))
 		return -ENOMEM;
-	put_index = hq->host_index;
-	temp_hrqe = hq->qe[put_index].rqe;
-	temp_drqe = dq->qe[dq->host_index].rqe;
+	hq_put_index = hq->host_index;
+	dq_put_index = dq->host_index;
+	temp_hrqe = hq->qe[hq_put_index].rqe;
+	temp_drqe = dq->qe[dq_put_index].rqe;
 
 	if (hq->type != LPFC_HRQ || dq->type != LPFC_DRQ)
 		return -EINVAL;
-	if (put_index != dq->host_index)
+	if (hq_put_index != dq_put_index)
 		return -EINVAL;
 	/* If the host has not yet processed the next entry then we are done */
-	if (((put_index + 1) % hq->entry_count) == hq->hba_index)
+	if (((hq_put_index + 1) % hq->entry_count) == hq->hba_index)
 		return -EBUSY;
 	lpfc_sli_pcimem_bcopy(hrqe, temp_hrqe, hq->entry_size);
 	lpfc_sli_pcimem_bcopy(drqe, temp_drqe, dq->entry_size);
 
 	/* Update the host index to point to the next slot */
-	hq->host_index = ((put_index + 1) % hq->entry_count);
-	dq->host_index = ((dq->host_index + 1) % dq->entry_count);
+	hq->host_index = ((hq_put_index + 1) % hq->entry_count);
+	dq->host_index = ((dq_put_index + 1) % dq->entry_count);
 	hq->RQ_buf_posted++;
 
 	/* Ring The Header Receive Queue Doorbell */
@@ -517,7 +519,7 @@ lpfc_sli4_rq_put(struct lpfc_queue *hq, struct lpfc_queue *dq,
 		}
 		writel(doorbell.word0, hq->db_regaddr);
 	}
-	return put_index;
+	return hq_put_index;
 }
 
 /**
@@ -12887,8 +12889,8 @@ lpfc_sli4_sp_handle_rcqe(struct lpfc_hba *phba, struct lpfc_rcqe *rcqe)
 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
 				"2537 Receive Frame Truncated!!\n");
 	case FC_STATUS_RQ_SUCCESS:
-		lpfc_sli4_rq_release(hrq, drq);
 		spin_lock_irqsave(&phba->hbalock, iflags);
+		lpfc_sli4_rq_release(hrq, drq);
 		dma_buf = lpfc_sli_hbqbuf_get(&phba->hbqs[0].hbq_buffer_list);
 		if (!dma_buf) {
 			hrq->RQ_no_buf_found++;
@@ -13290,8 +13292,8 @@ lpfc_sli4_nvmet_handle_rcqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
 				"6126 Receive Frame Truncated!!\n");
 		/* Drop thru */
 	case FC_STATUS_RQ_SUCCESS:
-		lpfc_sli4_rq_release(hrq, drq);
 		spin_lock_irqsave(&phba->hbalock, iflags);
+		lpfc_sli4_rq_release(hrq, drq);
 		dma_buf = lpfc_sli_rqbuf_get(phba, hrq);
 		if (!dma_buf) {
 			hrq->RQ_no_buf_found++;
-- 
2.13.1

* [PATCH 3/9] lpfc: Fix receive PRLI handling
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
  2017-12-09  1:18 ` [PATCH 1/9] lpfc: Fix random heartbeat timeouts during heavy IO James Smart
  2017-12-09  1:18 ` [PATCH 2/9] lpfc: Fix -EOVERFLOW behavior for NVMET and defer_rcv James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:42   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 4/9] lpfc: Increase SCSI CQ and WQ sizes James Smart
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

Handling a received PRLI incorrectly can cause the ndlp to end up
in the wrong state, or cause the driver to ACC the PRLI when it
should send LS_RJT.

The cause was the driver not properly looking at the PRLI type and
not taking the multiple protocol support into consideration.

Resolve by adding checks in the various PRLI receive points to
validate the PRLI type and reject it if not valid for the enabled
protocols and mode (host vs target).
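
The decision the new check implements boils down to this sketch
(standalone, with illustrative command codes; the real routine is
lpfc_rcv_prli_support_check() in the diff below):

#define CMD_PRLI	0x20            /* illustrative: FCP PRLI */
#define CMD_NVMEPRLI	0x28            /* illustrative: NVME PRLI */

/* Return 1 if the received PRLI type is acceptable, 0 to LS_RJT it. */
static int prli_type_ok(int nvmet_mode, int nvme_initiator,
			unsigned int els_cmd)
{
	if (nvmet_mode)                 /* target mode: NVME PRLI only */
		return els_cmd != CMD_PRLI;
	if (!nvme_initiator && els_cmd == CMD_NVMEPRLI)
		return 0;               /* nvme not enabled on this host */
	return 1;
}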

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_els.c       |  7 -----
 drivers/scsi/lpfc/lpfc_nportdisc.c | 54 +++++++++++++++++++++++++++++++++-----
 2 files changed, 47 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index 4a14f3c82a07..6ffd65a935c4 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -8063,13 +8063,6 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 			rjt_exp = LSEXP_NOTHING_MORE;
 			break;
 		}
-
-		/* NVMET accepts NVME PRLI only.  Reject FCP PRLI */
-		if (cmd == ELS_CMD_PRLI && phba->nvmet_support) {
-			rjt_err = LSRJT_CMD_UNSUPPORTED;
-			rjt_exp = LSEXP_REQ_UNSUPPORTED;
-			break;
-		}
 		lpfc_disc_state_machine(vport, ndlp, elsiocb, NLP_EVT_RCV_PRLI);
 		break;
 	case ELS_CMD_LIRR:
diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
index b6957d944b9a..df050b211e0b 100644
--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -727,6 +727,41 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	return 0;
 }
 
+static uint32_t
+lpfc_rcv_prli_support_check(struct lpfc_vport *vport,
+			    struct lpfc_nodelist *ndlp,
+			    struct lpfc_iocbq *cmdiocb)
+{
+	struct ls_rjt stat;
+	uint32_t *payload;
+	uint32_t cmd;
+
+	payload = ((struct lpfc_dmabuf *)cmdiocb->context2)->virt;
+	cmd = *payload;
+	if (vport->phba->nvmet_support) {
+		/* Must be a NVME PRLI */
+		if (cmd ==  ELS_CMD_PRLI)
+			goto out;
+	} else {
+		/* Initiator mode. */
+		if (!vport->nvmei_support && (cmd == ELS_CMD_NVMEPRLI))
+			goto out;
+	}
+	return 1;
+out:
+	lpfc_printf_vlog(vport, KERN_WARNING, LOG_NVME_DISC,
+			 "6115 Rcv PRLI (%x) check failed: ndlp rpi %d "
+			 "state x%x flags x%x\n",
+			 cmd, ndlp->nlp_rpi, ndlp->nlp_state,
+			 ndlp->nlp_flag);
+	memset(&stat, 0, sizeof(struct ls_rjt));
+	stat.un.b.lsRjtRsnCode = LSRJT_CMD_UNSUPPORTED;
+	stat.un.b.lsRjtRsnCodeExp = LSEXP_REQ_UNSUPPORTED;
+	lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
+			    ndlp, NULL);
+	return 0;
+}
+
 static void
 lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	      struct lpfc_iocbq *cmdiocb)
@@ -1373,7 +1408,8 @@ lpfc_rcv_prli_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 {
 	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
+	if (lpfc_rcv_prli_support_check(vport, ndlp, cmdiocb))
+		lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
 
@@ -1544,6 +1580,9 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
 	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 	struct ls_rjt     stat;
 
+	if (!lpfc_rcv_prli_support_check(vport, ndlp, cmdiocb)) {
+		return ndlp->nlp_state;
+	}
 	if (vport->phba->nvmet_support) {
 		/* NVME Target mode.  Handle and respond to the PRLI and
 		 * transition to UNMAPPED provided the RPI has completed
@@ -1558,11 +1597,6 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
 			 * to prevent an illegal state transition when the
 			 * rpi registration does complete.
 			 */
-			lpfc_printf_vlog(vport, KERN_WARNING, LOG_NVME_DISC,
-					 "6115 NVMET ndlp rpi %d state "
-					 "unknown, state x%x flags x%08x\n",
-					 ndlp->nlp_rpi, ndlp->nlp_state,
-					 ndlp->nlp_flag);
 			memset(&stat, 0, sizeof(struct ls_rjt));
 			stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 			stat.un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS;
@@ -1573,7 +1607,6 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
 		/* Initiator mode. */
 		lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	}
-
 	return ndlp->nlp_state;
 }
 
@@ -1819,6 +1852,8 @@ lpfc_rcv_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 {
 	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
+	if (!lpfc_rcv_prli_support_check(vport, ndlp, cmdiocb))
+		return ndlp->nlp_state;
 	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
@@ -2241,6 +2276,9 @@ lpfc_rcv_prli_unmap_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 {
 	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
+	if (!lpfc_rcv_prli_support_check(vport, ndlp, cmdiocb))
+		return ndlp->nlp_state;
+
 	lpfc_rcv_prli(vport, ndlp, cmdiocb);
 	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
@@ -2310,6 +2348,8 @@ lpfc_rcv_prli_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 {
 	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
+	if (!lpfc_rcv_prli_support_check(vport, ndlp, cmdiocb))
+		return ndlp->nlp_state;
 	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
-- 
2.13.1

* [PATCH 4/9] lpfc: Increase SCSI CQ and WQ sizes.
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
                   ` (2 preceding siblings ...)
  2017-12-09  1:18 ` [PATCH 3/9] lpfc: Fix receive PRLI handling James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:44   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 5/9] lpfc: Fix SCSI LUN discovery when SCSI and NVME enabled James Smart
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

Increase the sizes of the SCSI WQs and CQs so that SCSI operation
is similar to that used by NVME. However, the size increase is
restricted to the newer adapters that can support the larger WQE
size, and thus the bigger queue sizes.
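
All of the allocation sites converge on the same pattern (a sketch
with hypothetical names; the real calls are the
lpfc_sli4_queue_alloc() sites in the diff):

#define EXPANDED_PAGE_SIZE 16384        /* new LPFC_EXPANDED_PAGE_SIZE */
#define DEFAULT_PAGE_SIZE   4096
#define WQE_EXP_COUNT       1024        /* new LPFC_WQE_EXP_COUNT */
#define WQE128_SIZE          128

struct queue;

static struct queue *queue_alloc(int page_sz, int esize, int count)
{
	(void)page_sz; (void)esize; (void)count;
	return 0;                       /* stub for the sketch */
}

/* Use the expanded geometry only when the adapter supports 128-byte
 * WQEs (fcp_embed_io); older adapters keep the default sizes.
 */
static struct queue *alloc_fcp_wq(int fcp_embed_io, int def_esize,
				  int def_ecount)
{
	if (fcp_embed_io)
		return queue_alloc(EXPANDED_PAGE_SIZE, WQE128_SIZE,
				   WQE_EXP_COUNT);
	return queue_alloc(DEFAULT_PAGE_SIZE, def_esize, def_ecount);
}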

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_init.c | 65 ++++++++++++++++++++++++++++---------------
 drivers/scsi/lpfc/lpfc_nvme.h |  2 --
 drivers/scsi/lpfc/lpfc_sli4.h |  6 ++--
 3 files changed, 46 insertions(+), 27 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 44a98bc913f5..f539c554588c 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -7983,9 +7983,9 @@ lpfc_alloc_nvme_wq_cq(struct lpfc_hba *phba, int wqidx)
 {
 	struct lpfc_queue *qdesc;
 
-	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_NVME_PAGE_SIZE,
+	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
 				      phba->sli4_hba.cq_esize,
-				      LPFC_NVME_CQSIZE);
+				      LPFC_CQE_EXP_COUNT);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0508 Failed allocate fast-path NVME CQ (%d)\n",
@@ -7994,8 +7994,8 @@ lpfc_alloc_nvme_wq_cq(struct lpfc_hba *phba, int wqidx)
 	}
 	phba->sli4_hba.nvme_cq[wqidx] = qdesc;
 
-	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_NVME_PAGE_SIZE,
-				      LPFC_WQE128_SIZE, LPFC_NVME_WQSIZE);
+	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
+				      LPFC_WQE128_SIZE, LPFC_WQE_EXP_COUNT);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0509 Failed allocate fast-path NVME WQ (%d)\n",
@@ -8011,12 +8011,18 @@ static int
 lpfc_alloc_fcp_wq_cq(struct lpfc_hba *phba, int wqidx)
 {
 	struct lpfc_queue *qdesc;
-	uint32_t wqesize;
 
 	/* Create Fast Path FCP CQs */
-	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
-				      phba->sli4_hba.cq_esize,
-				      phba->sli4_hba.cq_ecount);
+	if (phba->fcp_embed_io)
+		/* Increase the CQ size when WQEs contain an embedded cdb */
+		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
+					      phba->sli4_hba.cq_esize,
+					      LPFC_CQE_EXP_COUNT);
+
+	else
+		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
+					      phba->sli4_hba.cq_esize,
+					      phba->sli4_hba.cq_ecount);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 			"0499 Failed allocate fast-path FCP CQ (%d)\n", wqidx);
@@ -8025,10 +8031,15 @@ lpfc_alloc_fcp_wq_cq(struct lpfc_hba *phba, int wqidx)
 	phba->sli4_hba.fcp_cq[wqidx] = qdesc;
 
 	/* Create Fast Path FCP WQs */
-	wqesize = (phba->fcp_embed_io) ?
-		LPFC_WQE128_SIZE : phba->sli4_hba.wq_esize;
-	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
-				      wqesize, phba->sli4_hba.wq_ecount);
+	if (phba->fcp_embed_io)
+		/* Increase the WQ size when WQEs contain an embedded cdb */
+		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
+					      LPFC_WQE128_SIZE,
+					      LPFC_WQE_EXP_COUNT);
+	else
+		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
+					      phba->sli4_hba.wq_esize,
+					      phba->sli4_hba.wq_ecount);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0503 Failed allocate fast-path FCP WQ (%d)\n",
@@ -12216,7 +12227,6 @@ int
 lpfc_fof_queue_create(struct lpfc_hba *phba)
 {
 	struct lpfc_queue *qdesc;
-	uint32_t wqesize;
 
 	/* Create FOF EQ */
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
@@ -12230,21 +12240,32 @@ lpfc_fof_queue_create(struct lpfc_hba *phba)
 	if (phba->cfg_fof) {
 
 		/* Create OAS CQ */
-		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
-					      phba->sli4_hba.cq_esize,
-					      phba->sli4_hba.cq_ecount);
+		if (phba->fcp_embed_io)
+			qdesc = lpfc_sli4_queue_alloc(phba,
+						      LPFC_EXPANDED_PAGE_SIZE,
+						      phba->sli4_hba.cq_esize,
+						      LPFC_CQE_EXP_COUNT);
+		else
+			qdesc = lpfc_sli4_queue_alloc(phba,
+						      LPFC_DEFAULT_PAGE_SIZE,
+						      phba->sli4_hba.cq_esize,
+						      phba->sli4_hba.cq_ecount);
 		if (!qdesc)
 			goto out_error;
 
 		phba->sli4_hba.oas_cq = qdesc;
 
 		/* Create OAS WQ */
-		wqesize = (phba->fcp_embed_io) ?
-				LPFC_WQE128_SIZE : phba->sli4_hba.wq_esize;
-		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
-					      wqesize,
-					      phba->sli4_hba.wq_ecount);
-
+		if (phba->fcp_embed_io)
+			qdesc = lpfc_sli4_queue_alloc(phba,
+						      LPFC_EXPANDED_PAGE_SIZE,
+						      LPFC_WQE128_SIZE,
+						      LPFC_WQE_EXP_COUNT);
+		else
+			qdesc = lpfc_sli4_queue_alloc(phba,
+						      LPFC_DEFAULT_PAGE_SIZE,
+						      phba->sli4_hba.wq_esize,
+						      phba->sli4_hba.wq_ecount);
 		if (!qdesc)
 			goto out_error;
 
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index c0833e469b7c..03b0e8471ef4 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -22,8 +22,6 @@
  ********************************************************************/
 
 #define LPFC_NVME_DEFAULT_SEGS		(64 + 1)	/* 256K IOs */
-#define LPFC_NVME_WQSIZE		1024
-#define LPFC_NVME_CQSIZE		4096
 
 #define LPFC_NVME_ERSP_LEN		0x20
 
diff --git a/drivers/scsi/lpfc/lpfc_sli4.h b/drivers/scsi/lpfc/lpfc_sli4.h
index da302bfb0223..81fb58e59e60 100644
--- a/drivers/scsi/lpfc/lpfc_sli4.h
+++ b/drivers/scsi/lpfc/lpfc_sli4.h
@@ -170,7 +170,7 @@ struct lpfc_queue {
 	uint32_t q_mode;
 	uint16_t page_count;	/* Number of pages allocated for this queue */
 	uint16_t page_size;	/* size of page allocated for this queue */
-#define LPFC_NVME_PAGE_SIZE	16384
+#define LPFC_EXPANDED_PAGE_SIZE	16384
 #define LPFC_DEFAULT_PAGE_SIZE	4096
 	uint16_t chann;		/* IO channel this queue is associated with */
 	uint16_t db_format;
@@ -370,9 +370,9 @@ struct lpfc_bmbx {
 
 #define LPFC_EQE_DEF_COUNT	1024
 #define LPFC_CQE_DEF_COUNT      1024
+#define LPFC_CQE_EXP_COUNT      4096
 #define LPFC_WQE_DEF_COUNT      256
-#define LPFC_WQE128_DEF_COUNT   128
-#define LPFC_WQE128_MAX_COUNT   256
+#define LPFC_WQE_EXP_COUNT      1024
 #define LPFC_MQE_DEF_COUNT      16
 #define LPFC_RQE_DEF_COUNT	512
 
-- 
2.13.1

* [PATCH 5/9] lpfc: Fix SCSI LUN discovery when SCSI and NVME enabled
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
                   ` (3 preceding siblings ...)
  2017-12-09  1:18 ` [PATCH 4/9] lpfc: Increase SCSI CQ and WQ sizes James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:47   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 6/9] lpfc: Fix issues connecting with nvme initiator James Smart
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

When enabled for both SCSI and NVME support, and connected pt2pt to
a SCSI-only target, the driver nodelist entry for the remote port is
left in PRLI_ISSUE state and no SCSI LUNs are discovered. It works
fine if the driver is configured for SCSI support only.

The error was due to some of the PRLI points still reflecting the
need to send only one PRLI. On a lot of fabric configs, targets
were NVME only, which meant the fabric-reported protocol attributes
told the driver about only one protocol or the other, so things
worked fine. With pt2pt, the driver must send a PRLI for both
protocols as there are no hints on what the target supports. Thus
pt2pt targets were hitting the multiple-PRLI issues.

Complete the dual PRLI support. Track explicitly whether SCSI (FCP)
or NVME PRLIs have been sent. Accurately track the protocol support
detected on each node as reported by the fabric or probed by PRLI
traffic.
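
Conceptually the discovery engine now issues and counts one PRLI per
enabled FC4 type rather than assuming a single PRLI (a sketch with
hypothetical names; the real logic is in lpfc_issue_els_prli() and
its completion handler):

#define FC4_FCP  0x1
#define FC4_NVME 0x2

struct node {
	unsigned int fc4_types;         /* protocols to probe */
	int fc4_prli_sent;              /* PRLIs outstanding on the node */
};

/* Issue one PRLI per enabled FC4 type, counting each so the
 * completion handler knows when the last response has arrived.
 */
static void issue_prlis(struct node *n,
			void (*send_prli)(struct node *, unsigned int))
{
	unsigned int t;

	for (t = FC4_FCP; t <= FC4_NVME; t <<= 1) {
		if (!(n->fc4_types & t))
			continue;
		n->fc4_prli_sent++;     /* decremented on completion */
		send_prli(n, t);
	}
}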

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_ct.c        |  1 +
 drivers/scsi/lpfc/lpfc_els.c       | 30 ++++++++++++++++++++----------
 drivers/scsi/lpfc/lpfc_nportdisc.c | 30 +++++++++++++-----------------
 3 files changed, 34 insertions(+), 27 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
index 2c1fe5ab3128..9d20d2c208c7 100644
--- a/drivers/scsi/lpfc/lpfc_ct.c
+++ b/drivers/scsi/lpfc/lpfc_ct.c
@@ -471,6 +471,7 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
 				"Parse GID_FTrsp: did:x%x flg:x%x x%x",
 				Did, ndlp->nlp_flag, vport->fc_flag);
 
+			ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME);
 			/* By default, the driver expects to support FCP FC4 */
 			if (fc4_type == FC_TYPE_FCP)
 				ndlp->nlp_fc4_type |= NLP_FC4_FCP;
diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index 6ffd65a935c4..dfb21d9efb0d 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -2094,6 +2094,10 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 	ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
 	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~NLP_PRLI_SND;
+
+	/* Driver supports multiple FC4 types.  Counters matter. */
+	vport->fc_prli_sent--;
+	ndlp->fc4_prli_sent--;
 	spin_unlock_irq(shost->host_lock);
 
 	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
@@ -2101,9 +2105,6 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 		irsp->ulpStatus, irsp->un.ulpWord[4],
 		ndlp->nlp_DID);
 
-	/* Ddriver supports multiple FC4 types.  Counters matter. */
-	vport->fc_prli_sent--;
-
 	/* PRLI completes to NPort <nlp_DID> */
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
 			 "0103 PRLI completes to NPort x%06x "
@@ -2117,7 +2118,6 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 
 	if (irsp->ulpStatus) {
 		/* Check for retry */
-		ndlp->fc4_prli_sent--;
 		if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
 			/* ELS command is being retried */
 			goto out;
@@ -2196,6 +2196,15 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		ndlp->nlp_fc4_type |= NLP_FC4_NVME;
 	local_nlp_type = ndlp->nlp_fc4_type;
 
+	/* This routine will issue 1 or 2 PRLIs, so zero all the ndlp
+	 * fields here before any of them can complete.
+	 */
+	ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
+	ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR);
+	ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
+	ndlp->nlp_flag &= ~NLP_FIRSTBURST;
+	ndlp->nvme_fb_size = 0;
+
  send_next_prli:
 	if (local_nlp_type & NLP_FC4_FCP) {
 		/* Payload is 4 + 16 = 20 x14 bytes. */
@@ -2304,6 +2313,13 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_prli;
 	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_PRLI_SND;
+
+	/* The vport counters are used for lpfc_scan_finished, but
+	 * the ndlp is used to track outstanding PRLIs for different
+	 * FC4 types.
+	 */
+	vport->fc_prli_sent++;
+	ndlp->fc4_prli_sent++;
 	spin_unlock_irq(shost->host_lock);
 	if (lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0) ==
 	    IOCB_ERROR) {
@@ -2314,12 +2330,6 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		return 1;
 	}
 
-	/* The vport counters are used for lpfc_scan_finished, but
-	 * the ndlp is used to track outstanding PRLIs for different
-	 * FC4 types.
-	 */
-	vport->fc_prli_sent++;
-	ndlp->fc4_prli_sent++;
 
 	/* The driver supports 2 FC4 types.  Make sure
 	 * a PRLI is issued for all types before exiting.
diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
index df050b211e0b..283382ac0456 100644
--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -390,6 +390,11 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		break;
 	}
 
+	ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
+	ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR);
+	ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
+	ndlp->nlp_flag &= ~NLP_FIRSTBURST;
+
 	/* Check for Nport to NPort pt2pt protocol */
 	if ((vport->fc_flag & FC_PT2PT) &&
 	    !(vport->fc_flag & FC_PT2PT_PLOGI)) {
@@ -777,9 +782,6 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	lp = (uint32_t *) pcmd->virt;
 	npr = (PRLI *) ((uint8_t *) lp + sizeof (uint32_t));
 
-	ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
-	ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
-	ndlp->nlp_flag &= ~NLP_FIRSTBURST;
 	if ((npr->prliType == PRLI_FCP_TYPE) ||
 	    (npr->prliType == PRLI_NVME_TYPE)) {
 		if (npr->initiatorFunc) {
@@ -804,8 +806,12 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		 * type.  Target mode does not issue gft_id so doesn't get
 		 * the fc4 type set until now.
 		 */
-		if ((phba->nvmet_support) && (npr->prliType == PRLI_NVME_TYPE))
+		if (phba->nvmet_support && (npr->prliType == PRLI_NVME_TYPE)) {
 			ndlp->nlp_fc4_type |= NLP_FC4_NVME;
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
+		}
+		if (npr->prliType == PRLI_FCP_TYPE)
+			ndlp->nlp_fc4_type |= NLP_FC4_FCP;
 	}
 	if (rport) {
 		/* We need to update the rport role values */
@@ -1591,7 +1597,6 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
 		if (ndlp->nlp_flag & NLP_RPI_REGISTERED) {
 			lpfc_rcv_prli(vport, ndlp, cmdiocb);
 			lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
-			lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 		} else {
 			/* RPI registration has not completed. Reject the PRLI
 			 * to prevent an illegal state transition when the
@@ -1602,6 +1607,7 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
 			stat.un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS;
 			lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
 					    ndlp, NULL);
+			return ndlp->nlp_state;
 		}
 	} else {
 		/* Initiator mode. */
@@ -1957,13 +1963,6 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		return ndlp->nlp_state;
 	}
 
-	/* Check out PRLI rsp */
-	ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
-	ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
-
-	/* NVME or FCP first burst must be negotiated for each PRLI. */
-	ndlp->nlp_flag &= ~NLP_FIRSTBURST;
-	ndlp->nvme_fb_size = 0;
 	if (npr && (npr->acceptRspCode == PRLI_REQ_EXECUTED) &&
 	    (npr->prliType == PRLI_FCP_TYPE)) {
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
@@ -1980,8 +1979,6 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		if (npr->Retry)
 			ndlp->nlp_fcp_info |= NLP_FCP_2_DEVICE;
 
-		/* PRLI completed.  Decrement count. */
-		ndlp->fc4_prli_sent--;
 	} else if (nvpr &&
 		   (bf_get_be32(prli_acc_rsp_code, nvpr) ==
 		    PRLI_REQ_EXECUTED) &&
@@ -2026,8 +2023,6 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 				 be32_to_cpu(nvpr->word5),
 				 ndlp->nlp_flag, ndlp->nlp_fcp_info,
 				 ndlp->nlp_type);
-		/* PRLI completed.  Decrement count. */
-		ndlp->fc4_prli_sent--;
 	}
 	if (!(ndlp->nlp_type & NLP_FCP_TARGET) &&
 	    (vport->port_type == LPFC_NPIV_PORT) &&
@@ -2051,7 +2046,8 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
 		if (ndlp->nlp_type & (NLP_FCP_TARGET | NLP_NVME_TARGET))
 			lpfc_nlp_set_state(vport, ndlp, NLP_STE_MAPPED_NODE);
-		else
+		else if (ndlp->nlp_type &
+			 (NLP_FCP_INITIATOR | NLP_NVME_INITIATOR))
 			lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 	} else
 		lpfc_printf_vlog(vport,
-- 
2.13.1

* [PATCH 6/9] lpfc: Fix issues connecting with nvme initiator
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
                   ` (4 preceding siblings ...)
  2017-12-09  1:18 ` [PATCH 5/9] lpfc: Fix SCSI LUN discovery when SCSI and NVME enabled James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:50   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 7/9] lpfc: Fix infinite wait when driver unregisters a remote NVME port James Smart
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

In the lpfc discovery engine, when acting as an nvme target, the
driver could be performing mailbox io with the adapter for port
login when an NVME PRLI was received from the host. Rather than
queue it and eventually get back to sending a response after the
mailbox traffic, the driver rejected the io with an error response.

Turns out this particular initiator didn't like the rejection
values (unable to process command/command in progress), so it never
attempted a retry of the PRLI. Thus the host never established nvme
connectivity with the lpfc target.

By changing the rejection values (to Logical Busy/nothing more),
the initiator accepted the response and would retry the PRLI,
resulting in nvme connectivity.

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_nportdisc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
index 283382ac0456..d841aa42f607 100644
--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -1603,8 +1603,8 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
 			 * rpi registration does complete.
 			 */
 			memset(&stat, 0, sizeof(struct ls_rjt));
-			stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
-			stat.un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS;
+			stat.un.b.lsRjtRsnCode = LSRJT_LOGICAL_BSY;
+			stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
 			lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
 					    ndlp, NULL);
 			return ndlp->nlp_state;
-- 
2.13.1

* [PATCH 7/9] lpfc: Fix infinite wait when driver unregisters a remote NVME port.
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
                   ` (5 preceding siblings ...)
  2017-12-09  1:18 ` [PATCH 6/9] lpfc: Fix issues connecting with nvme initiator James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:53   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 8/9] lpfc: Beef up stat counters for debug James Smart
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

When unregistering a remote port the lpfc driver would eventually
wait for the remoteport_unreg done callback. But the driver never
completed the io aborts that would allow the connections to
terminate, so the unreg done callback was never issued. Turns out
the coding style of the driver allowed the wait to occur on the
same cpu that the deferred isr is called on. Blocking for the wait
blocked the isr, and as the isr didn't run, the io aborts couldn't
finish.

Turns out there was never a good reason to block waiting for the
unreg done in the first place. The driver can continue execution
and the ref counting within the driver will do the right thing.

Resolve by removing the wait and patching up a few cases where the
ref counting didn't look right - mainly cases where the remote port
comes back before the aborts have completed and the unreg done has
been called. Additionally, a few places which used pointer values
to guide driver actions weren't protected by a lock, so correct
those.
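
The teardown thus moves from a blocking wait to plain reference
counting (a sketch with hypothetical names; the refcount stands in
for the driver's lpfc_nlp_get()/lpfc_nlp_put() accounting):

#include <stdlib.h>

struct node {
	int refcnt;
};

static void node_get(struct node *n)
{
	n->refcnt++;                    /* e.g. taken at register time */
}

static void node_put(struct node *n)
{
	if (--n->refcnt == 0)
		free(n);                /* last reference frees the node */
}

/* Unregister no longer blocks: drop the registration reference and
 * return.  Whoever drops the last reference (e.g. the delete callback
 * that runs once the io aborts finish) performs the final teardown.
 */
static void unregister_remoteport(struct node *n)
{
	node_put(n);
}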

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_nvme.c | 135 ++++++++++++++++--------------------------
 1 file changed, 51 insertions(+), 84 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 1097ca5a7a8e..4b2a73ebd116 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -201,16 +201,19 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
 	 * calling state machine to remove the node.
 	 */
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
-			"6146 remoteport delete complete %p\n",
+			"6146 remoteport delete of remoteport %p\n",
 			remoteport);
+	spin_lock_irq(&vport->phba->hbalock);
 	ndlp->nrport = NULL;
+	spin_unlock_irq(&vport->phba->hbalock);
+
+	/* Remove original register reference. The host transport
+	 * won't reference this rport/remoteport any further.
+	 */
 	lpfc_nlp_put(ndlp);
 
  rport_err:
-	/* This call has to execute as long as the rport is valid.
-	 * Release any threads waiting for the unreg to complete.
-	 */
-	complete(&rport->rport_unreg_done);
+	return;
 }
 
 static void
@@ -966,16 +969,10 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
 	/* NVME targets need completion held off until the abort exchange
 	 * completes unless the NVME Rport is getting unregistered.
 	 */
-	if (!(lpfc_ncmd->flags & LPFC_SBUF_XBUSY) ||
-	    ndlp->upcall_flags & NLP_WAIT_FOR_UNREG) {
-		/* Clear the XBUSY flag to prevent double completions.
-		 * The nvme rport is getting unregistered and there is
-		 * no need to defer the IO.
-		 */
-		if (lpfc_ncmd->flags & LPFC_SBUF_XBUSY)
-			lpfc_ncmd->flags &= ~LPFC_SBUF_XBUSY;
 
+	if (!(lpfc_ncmd->flags & LPFC_SBUF_XBUSY)) {
 		nCmd->done(nCmd);
+		lpfc_ncmd->nvmeCmd = NULL;
 	}
 
 	spin_lock_irqsave(&phba->hbalock, flags);
@@ -2494,6 +2491,9 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 
 	rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
 	rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
+	if (!ndlp->nrport)
+		lpfc_nlp_get(ndlp);
+
 	ret = nvme_fc_register_remoteport(localport, &rpinfo, &remote_port);
 	if (!ret) {
 		/* If the ndlp already has an nrport, this is just
@@ -2502,23 +2502,33 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 		 */
 		rport = remote_port->private;
 		if (ndlp->nrport) {
-			lpfc_printf_vlog(ndlp->vport, KERN_INFO,
-					 LOG_NVME_DISC,
-					 "6014 Rebinding lport to "
-					 "rport wwpn 0x%llx, "
-					 "Data: x%x x%x x%x x%06x\n",
-					 remote_port->port_name,
-					 remote_port->port_id,
-					 remote_port->port_role,
-					 ndlp->nlp_type,
-					 ndlp->nlp_DID);
+			if (ndlp->nrport == remote_port->private) {
+				/* Same remoteport.  Just reuse. */
+				lpfc_printf_vlog(ndlp->vport, KERN_INFO,
+						 LOG_NVME_DISC,
+						 "6014 Rebinding lport to "
+						 "remoteport %p wwpn 0x%llx, "
+						 "Data: x%x x%x %p x%x x%06x\n",
+						 remote_port,
+						 remote_port->port_name,
+						 remote_port->port_id,
+						 remote_port->port_role,
+						 ndlp,
+						 ndlp->nlp_type,
+						 ndlp->nlp_DID);
+				return 0;
+			}
 			prev_ndlp = rport->ndlp;
 
-			/* Sever the ndlp<->rport connection before dropping
-			 * the ndlp ref from register.
+			/* Sever the ndlp<->rport association
+			 * before dropping the ndlp ref from
+			 * register.
 			 */
+			spin_lock_irq(&vport->phba->hbalock);
 			ndlp->nrport = NULL;
+			spin_unlock_irq(&vport->phba->hbalock);
 			rport->ndlp = NULL;
+			rport->remoteport = NULL;
 			if (prev_ndlp)
 				lpfc_nlp_put(ndlp);
 		}
@@ -2526,19 +2536,20 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 		/* Clean bind the rport to the ndlp. */
 		rport->remoteport = remote_port;
 		rport->lport = lport;
-		rport->ndlp = lpfc_nlp_get(ndlp);
-		if (!rport->ndlp)
-			return -1;
+		rport->ndlp = ndlp;
+		spin_lock_irq(&vport->phba->hbalock);
 		ndlp->nrport = rport;
+		spin_unlock_irq(&vport->phba->hbalock);
 		lpfc_printf_vlog(vport, KERN_INFO,
 				 LOG_NVME_DISC | LOG_NODE,
 				 "6022 Binding new rport to "
-				 "lport %p Rport WWNN 0x%llx, "
+				 "lport %p Remoteport %p  WWNN 0x%llx, "
 				 "Rport WWPN 0x%llx DID "
-				 "x%06x Role x%x\n",
-				 lport,
+				 "x%06x Role x%x, ndlp %p\n",
+				 lport, remote_port,
 				 rpinfo.node_name, rpinfo.port_name,
-				 rpinfo.port_id, rpinfo.port_role);
+				 rpinfo.port_id, rpinfo.port_role,
+				 ndlp);
 	} else {
 		lpfc_printf_vlog(vport, KERN_ERR,
 				 LOG_NVME_DISC | LOG_NODE,
@@ -2553,47 +2564,6 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 #endif
 }
 
-/* lpfc_nvme_rport_unreg_wait - Wait for the host to complete an rport unreg.
- *
- * The driver has to wait for the host nvme transport to callback
- * indicating the remoteport has successfully unregistered all
- * resources.  Since this is an uninterruptible wait, loop every ten
- * seconds and print a message indicating no progress.
- *
- * An uninterruptible wait is used because of the risk of transport-to-
- * driver state mismatch.
- */
-void
-lpfc_nvme_rport_unreg_wait(struct lpfc_vport *vport,
-			   struct lpfc_nvme_rport *rport)
-{
-#if (IS_ENABLED(CONFIG_NVME_FC))
-	u32 wait_tmo;
-	int ret;
-
-	/* Host transport has to clean up and confirm requiring an indefinite
-	 * wait. Print a message if a 10 second wait expires and renew the
-	 * wait. This is unexpected.
-	 */
-	wait_tmo = msecs_to_jiffies(LPFC_NVME_WAIT_TMO * 1000);
-	while (true) {
-		ret = wait_for_completion_timeout(&rport->rport_unreg_done,
-						  wait_tmo);
-		if (unlikely(!ret)) {
-			lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR,
-					 "6174 Rport %p Remoteport %p wait "
-					 "timed out. Renewing.\n",
-					 rport, rport->remoteport);
-			continue;
-		}
-		break;
-	}
-	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
-			 "6175 Rport %p Remoteport %p Complete Success\n",
-			 rport, rport->remoteport);
-#endif
-}
-
 /* lpfc_nvme_unregister_port - unbind the DID and port_role from this rport.
  *
  * There is no notion of Devloss or rport recovery from the current
@@ -2645,24 +2615,18 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 	 */
 
 	if (ndlp->nlp_type & NLP_NVME_TARGET) {
-		init_completion(&rport->rport_unreg_done);
-
 		/* No concern about the role change on the nvme remoteport.
 		 * The transport will update it.
 		 */
 		ndlp->upcall_flags |= NLP_WAIT_FOR_UNREG;
 		ret = nvme_fc_unregister_remoteport(remoteport);
-		if (ret != 0)
+		if (ret != 0) {
+			lpfc_nlp_put(ndlp);
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
 					 "6167 NVME unregister failed %d "
 					 "port_state x%x\n",
 					 ret, remoteport->port_state);
-		else
-			/* Wait for completion.  This either blocks
-			 * indefinitely or succeeds
-			 */
-			lpfc_nvme_rport_unreg_wait(vport, rport);
-		ndlp->upcall_flags &= ~NLP_WAIT_FOR_UNREG;
+		}
 	}
 	return;
 
@@ -2721,8 +2685,11 @@ lpfc_sli4_nvme_xri_aborted(struct lpfc_hba *phba,
 			 * before the abort exchange command fully completes.
 			 * Once completed, it is available via the put list.
 			 */
-			nvme_cmd = lpfc_ncmd->nvmeCmd;
-			nvme_cmd->done(nvme_cmd);
+			if (lpfc_ncmd->nvmeCmd) {
+				nvme_cmd = lpfc_ncmd->nvmeCmd;
+				nvme_cmd->done(nvme_cmd);
+				lpfc_ncmd->nvmeCmd = NULL;
+			}
 			lpfc_release_nvme_buf(phba, lpfc_ncmd);
 			return;
 		}
-- 
2.13.1

* [PATCH 8/9] lpfc: Beef up stat counters for debug
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
                   ` (6 preceding siblings ...)
  2017-12-09  1:18 ` [PATCH 7/9] lpfc: Fix infinite wait when driver unregisters a remote NVME port James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:54   ` Hannes Reinecke
  2017-12-09  1:18 ` [PATCH 9/9] lpfc: update driver version to 11.4.0.6 James Smart
  2017-12-15  2:35 ` [PATCH 0/9] lpfc updates for 11.4.0.6 Martin K. Petersen
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

If log verbose is not turned on, it's hard to tell when certain
error paths get hit. Add stats counters and corresponding logic to
debugfs/sysfs to aid in understanding what paths were traversed.
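
The pattern throughout is the usual lock-free event counter plus a
read-side snapshot (a self-contained sketch using C11 atomics; the
driver itself uses atomic_t fields such as lport->xmt_fcp_noxri,
visible in the diff):

#include <stdatomic.h>
#include <stdio.h>

/* One counter per interesting error path, bumped without locking. */
struct lport_stats {
	atomic_uint xmt_fcp_noxri;      /* no exchange available */
	atomic_uint xmt_fcp_qdepth;     /* queue depth exceeded */
};

static void note_noxri(struct lport_stats *s)
{
	atomic_fetch_add(&s->xmt_fcp_noxri, 1);
}

/* The debugfs/sysfs show routine just snapshots and formats them. */
static int stats_show(struct lport_stats *s, char *buf, size_t len)
{
	return snprintf(buf, len, "noxri %08x qdepth %08x\n",
			atomic_load(&s->xmt_fcp_noxri),
			atomic_load(&s->xmt_fcp_qdepth));
}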

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_attr.c    | 46 ++++++++++++++++++++++++++++++++-----
 drivers/scsi/lpfc/lpfc_debugfs.c | 49 +++++++++++++++++++++++++++++++++++++---
 drivers/scsi/lpfc/lpfc_nvme.c    | 43 +++++++++++++++++++++++++++++++----
 drivers/scsi/lpfc/lpfc_nvme.h    | 13 ++++++++++-
 drivers/scsi/lpfc/lpfc_nvmet.c   | 34 ++++++++++++++++++++++++----
 drivers/scsi/lpfc/lpfc_nvmet.h   |  5 ++++
 6 files changed, 171 insertions(+), 19 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 0eef5aa52fc0..797bb42a6306 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -148,6 +148,7 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct nvme_fc_local_port *localport;
+	struct lpfc_nvme_lport *lport;
 	struct lpfc_nodelist *ndlp;
 	struct nvme_fc_remote_port *nrport;
 	uint64_t data1, data2, data3, tot;
@@ -198,10 +199,15 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 		}
 
 		len += snprintf(buf+len, PAGE_SIZE-len,
-				"LS: Xmt %08x Drop %08x Cmpl %08x Err %08x\n",
+				"LS: Xmt %08x Drop %08x Cmpl %08x\n",
 				atomic_read(&tgtp->xmt_ls_rsp),
 				atomic_read(&tgtp->xmt_ls_drop),
-				atomic_read(&tgtp->xmt_ls_rsp_cmpl),
+				atomic_read(&tgtp->xmt_ls_rsp_cmpl));
+
+		len += snprintf(buf + len, PAGE_SIZE - len,
+				"LS: RSP Abort %08x xb %08x Err %08x\n",
+				atomic_read(&tgtp->xmt_ls_rsp_aborted),
+				atomic_read(&tgtp->xmt_ls_rsp_xb_set),
 				atomic_read(&tgtp->xmt_ls_rsp_error));
 
 		len += snprintf(buf+len, PAGE_SIZE-len,
@@ -236,6 +242,12 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 				atomic_read(&tgtp->xmt_fcp_rsp_drop));
 
 		len += snprintf(buf+len, PAGE_SIZE-len,
+				"FCP Rsp Abort: %08x xb %08x xricqe  %08x\n",
+				atomic_read(&tgtp->xmt_fcp_rsp_aborted),
+				atomic_read(&tgtp->xmt_fcp_rsp_xb_set),
+				atomic_read(&tgtp->xmt_fcp_xri_abort_cqe));
+
+		len += snprintf(buf + len, PAGE_SIZE - len,
 				"ABORT: Xmt %08x Cmpl %08x\n",
 				atomic_read(&tgtp->xmt_fcp_abort),
 				atomic_read(&tgtp->xmt_fcp_abort_cmpl));
@@ -265,6 +277,7 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 	}
 
 	localport = vport->localport;
+	lport = localport ? localport->private : NULL;
 	if (!localport) {
 		len = snprintf(buf, PAGE_SIZE,
 				"NVME Initiator x%llx is not allocated\n",
@@ -347,9 +360,16 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 
 	len += snprintf(buf + len, PAGE_SIZE - len, "\nNVME Statistics\n");
 	len += snprintf(buf+len, PAGE_SIZE-len,
-			"LS: Xmt %016x Cmpl %016x\n",
+			"LS: Xmt %010x Cmpl %010x Abort %08x\n",
 			atomic_read(&phba->fc4NvmeLsRequests),
-			atomic_read(&phba->fc4NvmeLsCmpls));
+			atomic_read(&phba->fc4NvmeLsCmpls),
+			atomic_read(&lport->xmt_ls_abort));
+
+	len += snprintf(buf + len, PAGE_SIZE - len,
+			"LS XMIT: Err %08x  CMPL: xb %08x Err %08x\n",
+			atomic_read(&lport->xmt_ls_err),
+			atomic_read(&lport->cmpl_ls_xb),
+			atomic_read(&lport->cmpl_ls_err));
 
 	tot = atomic_read(&phba->fc4NvmeIoCmpls);
 	data1 = atomic_read(&phba->fc4NvmeInputRequests);
@@ -360,8 +380,22 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 			data1, data2, data3);
 
 	len += snprintf(buf+len, PAGE_SIZE-len,
-			"    Cmpl %016llx Outstanding %016llx\n",
-			tot, (data1 + data2 + data3) - tot);
+			"    noxri %08x nondlp %08x qdepth %08x "
+			"wqerr %08x\n",
+			atomic_read(&lport->xmt_fcp_noxri),
+			atomic_read(&lport->xmt_fcp_bad_ndlp),
+			atomic_read(&lport->xmt_fcp_qdepth),
+			atomic_read(&lport->xmt_fcp_wqerr));
+
+	len += snprintf(buf + len, PAGE_SIZE - len,
+			"    Cmpl %016llx Outstanding %016llx Abort %08x\n",
+			tot, ((data1 + data2 + data3) - tot),
+			atomic_read(&lport->xmt_fcp_abort));
+
+	len += snprintf(buf + len, PAGE_SIZE - len,
+			"FCP CMPL: xb %08x Err %08x\n",
+			atomic_read(&lport->cmpl_fcp_xb),
+			atomic_read(&lport->cmpl_fcp_err));
 	return len;
 }
 
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index b7f57492aefc..17ea3bb04266 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -750,6 +750,8 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
 	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
+	struct nvme_fc_local_port *localport;
+	struct lpfc_nvme_lport *lport;
 	uint64_t tot, data1, data2, data3;
 	int len = 0;
 	int cnt;
@@ -775,10 +777,15 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
 		}
 
 		len += snprintf(buf + len, size - len,
-				"LS: Xmt %08x Drop %08x Cmpl %08x Err %08x\n",
+				"LS: Xmt %08x Drop %08x Cmpl %08x\n",
 				atomic_read(&tgtp->xmt_ls_rsp),
 				atomic_read(&tgtp->xmt_ls_drop),
-				atomic_read(&tgtp->xmt_ls_rsp_cmpl),
+				atomic_read(&tgtp->xmt_ls_rsp_cmpl));
+
+		len += snprintf(buf + len, size - len,
+				"LS: RSP Abort %08x xb %08x Err %08x\n",
+				atomic_read(&tgtp->xmt_ls_rsp_aborted),
+				atomic_read(&tgtp->xmt_ls_rsp_xb_set),
 				atomic_read(&tgtp->xmt_ls_rsp_error));
 
 		len += snprintf(buf + len, size - len,
@@ -812,6 +819,12 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
 				atomic_read(&tgtp->xmt_fcp_rsp_drop));
 
 		len += snprintf(buf + len, size - len,
+				"FCP Rsp Abort: %08x xb %08x xricqe  %08x\n",
+				atomic_read(&tgtp->xmt_fcp_rsp_aborted),
+				atomic_read(&tgtp->xmt_fcp_rsp_xb_set),
+				atomic_read(&tgtp->xmt_fcp_xri_abort_cqe));
+
+		len += snprintf(buf + len, size - len,
 				"ABORT: Xmt %08x Cmpl %08x\n",
 				atomic_read(&tgtp->xmt_fcp_abort),
 				atomic_read(&tgtp->xmt_fcp_abort_cmpl));
@@ -885,8 +898,38 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
 				data1, data2, data3);
 
 		len += snprintf(buf + len, size - len,
-				"    Cmpl %016llx Outstanding %016llx\n",
+				"   Cmpl %016llx Outstanding %016llx\n",
 				tot, (data1 + data2 + data3) - tot);
+
+		localport = vport->localport;
+		if (!localport)
+			return len;
+		lport = (struct lpfc_nvme_lport *)localport->private;
+		if (!lport)
+			return len;
+
+		len += snprintf(buf + len, size - len,
+				"LS Xmt Err: Abrt %08x Err %08x  "
+				"Cmpl Err: xb %08x Err %08x\n",
+				atomic_read(&lport->xmt_ls_abort),
+				atomic_read(&lport->xmt_ls_err),
+				atomic_read(&lport->cmpl_ls_xb),
+				atomic_read(&lport->cmpl_ls_err));
+
+		len += snprintf(buf + len, size - len,
+				"FCP Xmt Err: noxri %06x nondlp %06x "
+				"qdepth %06x wqerr %06x Abrt %06x\n",
+				atomic_read(&lport->xmt_fcp_noxri),
+				atomic_read(&lport->xmt_fcp_bad_ndlp),
+				atomic_read(&lport->xmt_fcp_qdepth),
+				atomic_read(&lport->xmt_fcp_wqerr),
+				atomic_read(&lport->xmt_fcp_abort));
+
+		len += snprintf(buf + len, size - len,
+				"FCP Cmpl Err: xb %08x Err %08x\n",
+				atomic_read(&lport->cmpl_fcp_xb),
+				atomic_read(&lport->cmpl_fcp_err));
+
 	}
 
 	return len;
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 4b2a73ebd116..81e3a4f10c3c 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -221,6 +221,7 @@ lpfc_nvme_cmpl_gen_req(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 		       struct lpfc_wcqe_complete *wcqe)
 {
 	struct lpfc_vport *vport = cmdwqe->vport;
+	struct lpfc_nvme_lport *lport;
 	uint32_t status;
 	struct nvmefc_ls_req *pnvme_lsreq;
 	struct lpfc_dmabuf *buf_ptr;
@@ -230,6 +231,13 @@ lpfc_nvme_cmpl_gen_req(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 
 	pnvme_lsreq = (struct nvmefc_ls_req *)cmdwqe->context2;
 	status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;
+	if (status) {
+		lport = (struct lpfc_nvme_lport *)vport->localport->private;
+		if (bf_get(lpfc_wcqe_c_xb, wcqe))
+			atomic_inc(&lport->cmpl_ls_xb);
+		atomic_inc(&lport->cmpl_ls_err);
+	}
+
 	ndlp = (struct lpfc_nodelist *)cmdwqe->context1;
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
 			 "6047 nvme cmpl Enter "
@@ -508,6 +516,7 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
 				pnvme_lsreq, lpfc_nvme_cmpl_gen_req,
 				ndlp, 2, 30, 0);
 	if (ret != WQE_SUCCESS) {
+		atomic_inc(&lport->xmt_ls_err);
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
 				 "6052 EXIT. issue ls wqe failed lport %p, "
 				 "rport %p lsreq%p Status %x DID %x\n",
@@ -592,6 +601,7 @@ lpfc_nvme_ls_abort(struct nvme_fc_local_port *pnvme_lport,
 
 	/* Abort the targeted IOs and remove them from the abort list. */
 	list_for_each_entry_safe(wqe, next_wqe, &abort_list, dlist) {
+		atomic_inc(&lport->xmt_ls_abort);
 		spin_lock_irq(&phba->hbalock);
 		list_del_init(&wqe->dlist);
 		lpfc_sli_issue_abort_iotag(phba, pring, wqe);
@@ -795,8 +805,9 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
 	struct lpfc_nvme_rport *rport;
 	struct lpfc_nodelist *ndlp;
 	struct lpfc_nvme_fcpreq_priv *freqpriv;
+	struct lpfc_nvme_lport *lport;
 	unsigned long flags;
-	uint32_t code;
+	uint32_t code, status;
 	uint16_t cid, sqhd, data;
 	uint32_t *ptr;
 
@@ -811,10 +822,17 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
 
 	nCmd = lpfc_ncmd->nvmeCmd;
 	rport = lpfc_ncmd->nrport;
+	status = bf_get(lpfc_wcqe_c_status, wcqe);
+	if (status) {
+		lport = (struct lpfc_nvme_lport *)vport->localport->private;
+		if (bf_get(lpfc_wcqe_c_xb, wcqe))
+			atomic_inc(&lport->cmpl_fcp_xb);
+		atomic_inc(&lport->cmpl_fcp_err);
+	}
 
 	lpfc_nvmeio_data(phba, "NVME FCP CMPL: xri x%x stat x%x parm x%x\n",
 			 lpfc_ncmd->cur_iocbq.sli4_xritag,
-			 bf_get(lpfc_wcqe_c_status, wcqe), wcqe->parameter);
+			 status, wcqe->parameter);
 	/*
 	 * Catch race where our node has transitioned, but the
 	 * transport is still transitioning.
@@ -872,8 +890,7 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
 		nCmd->rcv_rsplen = LPFC_NVME_ERSP_LEN;
 		nCmd->transferred_length = nCmd->payload_length;
 	} else {
-		lpfc_ncmd->status = (bf_get(lpfc_wcqe_c_status, wcqe) &
-			    LPFC_IOCB_STATUS_MASK);
+		lpfc_ncmd->status = (status & LPFC_IOCB_STATUS_MASK);
 		lpfc_ncmd->result = (wcqe->parameter & IOERR_PARAM_MASK);
 
 		/* For NVME, the only failure path that results in an
@@ -1336,6 +1353,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR,
 					 "6066 Missing node for DID %x\n",
 					 pnvme_rport->port_id);
+			atomic_inc(&lport->xmt_fcp_bad_ndlp);
 			ret = -ENODEV;
 			goto out_fail;
 		}
@@ -1349,6 +1367,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 				 "IO. State x%x, Type x%x\n",
 				 rport, pnvme_rport->port_id,
 				 ndlp->nlp_state, ndlp->nlp_type);
+		atomic_inc(&lport->xmt_fcp_bad_ndlp);
 		ret = -ENODEV;
 		goto out_fail;
 
@@ -1370,12 +1389,14 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 	 */
 	if ((atomic_read(&ndlp->cmd_pending) >= ndlp->cmd_qdepth) &&
 	    !expedite) {
+		atomic_inc(&lport->xmt_fcp_qdepth);
 		ret = -EBUSY;
 		goto out_fail;
 	}
 
 	lpfc_ncmd = lpfc_get_nvme_buf(phba, ndlp, expedite);
 	if (lpfc_ncmd == NULL) {
+		atomic_inc(&lport->xmt_fcp_noxri);
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
 				 "6065 driver's buffer pool is empty, "
 				 "IO failed\n");
@@ -1428,6 +1449,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 
 	ret = lpfc_sli4_issue_wqe(phba, LPFC_FCP_RING, &lpfc_ncmd->cur_iocbq);
 	if (ret) {
+		atomic_inc(&lport->xmt_fcp_wqerr);
 		atomic_dec(&ndlp->cmd_pending);
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
 				 "6113 FCP could not issue WQE err %x "
@@ -1624,6 +1646,7 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
 		return;
 	}
 
+	atomic_inc(&lport->xmt_fcp_abort);
 	lpfc_nvmeio_data(phba, "NVME FCP ABORT: xri x%x idx %d to %06x\n",
 			 nvmereq_wqe->sli4_xritag,
 			 nvmereq_wqe->hba_wqidx, pnvme_rport->port_id);
@@ -2302,6 +2325,18 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
 		lport->vport = vport;
 		vport->nvmei_support = 1;
 
+		atomic_set(&lport->xmt_fcp_noxri, 0);
+		atomic_set(&lport->xmt_fcp_bad_ndlp, 0);
+		atomic_set(&lport->xmt_fcp_qdepth, 0);
+		atomic_set(&lport->xmt_fcp_wqerr, 0);
+		atomic_set(&lport->xmt_fcp_abort, 0);
+		atomic_set(&lport->xmt_ls_abort, 0);
+		atomic_set(&lport->xmt_ls_err, 0);
+		atomic_set(&lport->cmpl_fcp_xb, 0);
+		atomic_set(&lport->cmpl_fcp_err, 0);
+		atomic_set(&lport->cmpl_ls_xb, 0);
+		atomic_set(&lport->cmpl_ls_err, 0);
+
 		/* Don't post more new bufs if repost already recovered
 		 * the nvme sgls.
 		 */
diff --git a/drivers/scsi/lpfc/lpfc_nvme.h b/drivers/scsi/lpfc/lpfc_nvme.h
index 03b0e8471ef4..e79f8f75758c 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.h
+++ b/drivers/scsi/lpfc/lpfc_nvme.h
@@ -38,7 +38,18 @@ struct lpfc_nvme_qhandle {
 struct lpfc_nvme_lport {
 	struct lpfc_vport *vport;
 	struct completion lport_unreg_done;
-	/* Add sttats counters here */
+	/* Add stats counters here */
+	atomic_t xmt_fcp_noxri;
+	atomic_t xmt_fcp_bad_ndlp;
+	atomic_t xmt_fcp_qdepth;
+	atomic_t xmt_fcp_wqerr;
+	atomic_t xmt_fcp_abort;
+	atomic_t xmt_ls_abort;
+	atomic_t xmt_ls_err;
+	atomic_t cmpl_fcp_xb;
+	atomic_t cmpl_fcp_err;
+	atomic_t cmpl_ls_xb;
+	atomic_t cmpl_ls_err;
 };
 
 struct lpfc_nvme_rport {
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 02a1cfa10f72..8dbf5c9d51aa 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -127,10 +127,17 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
 
-	if (status)
-		atomic_inc(&tgtp->xmt_ls_rsp_error);
-	else
-		atomic_inc(&tgtp->xmt_ls_rsp_cmpl);
+	if (tgtp) {
+		if (status) {
+			atomic_inc(&tgtp->xmt_ls_rsp_error);
+			if (status == IOERR_ABORT_REQUESTED)
+				atomic_inc(&tgtp->xmt_ls_rsp_aborted);
+			if (bf_get(lpfc_wcqe_c_xb, wcqe))
+				atomic_inc(&tgtp->xmt_ls_rsp_xb_set);
+		} else {
+			atomic_inc(&tgtp->xmt_ls_rsp_cmpl);
+		}
+	}
 
 out:
 	rsp = &ctxp->ctx.ls_req;
@@ -532,8 +539,11 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	if (status) {
 		rsp->fcp_error = NVME_SC_DATA_XFER_ERROR;
 		rsp->transferred_length = 0;
-		if (tgtp)
+		if (tgtp) {
 			atomic_inc(&tgtp->xmt_fcp_rsp_error);
+			if (status == IOERR_ABORT_REQUESTED)
+				atomic_inc(&tgtp->xmt_fcp_rsp_aborted);
+		}
 
 		logerr = LOG_NVME_IOERR;
 
@@ -541,6 +551,8 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 		if (bf_get(lpfc_wcqe_c_xb, wcqe)) {
 			ctxp->flag |= LPFC_NVMET_XBUSY;
 			logerr |= LOG_NVME_ABTS;
+			if (tgtp)
+				atomic_inc(&tgtp->xmt_fcp_rsp_xb_set);
 
 		} else {
 			ctxp->flag &= ~LPFC_NVMET_XBUSY;
@@ -1244,6 +1256,8 @@ lpfc_nvmet_create_targetport(struct lpfc_hba *phba)
 		atomic_set(&tgtp->xmt_ls_rsp, 0);
 		atomic_set(&tgtp->xmt_ls_drop, 0);
 		atomic_set(&tgtp->xmt_ls_rsp_error, 0);
+		atomic_set(&tgtp->xmt_ls_rsp_xb_set, 0);
+		atomic_set(&tgtp->xmt_ls_rsp_aborted, 0);
 		atomic_set(&tgtp->xmt_ls_rsp_cmpl, 0);
 		atomic_set(&tgtp->rcv_fcp_cmd_in, 0);
 		atomic_set(&tgtp->rcv_fcp_cmd_out, 0);
@@ -1256,7 +1270,10 @@ lpfc_nvmet_create_targetport(struct lpfc_hba *phba)
 		atomic_set(&tgtp->xmt_fcp_release, 0);
 		atomic_set(&tgtp->xmt_fcp_rsp_cmpl, 0);
 		atomic_set(&tgtp->xmt_fcp_rsp_error, 0);
+		atomic_set(&tgtp->xmt_fcp_rsp_xb_set, 0);
+		atomic_set(&tgtp->xmt_fcp_rsp_aborted, 0);
 		atomic_set(&tgtp->xmt_fcp_rsp_drop, 0);
+		atomic_set(&tgtp->xmt_fcp_xri_abort_cqe, 0);
 		atomic_set(&tgtp->xmt_fcp_abort, 0);
 		atomic_set(&tgtp->xmt_fcp_abort_cmpl, 0);
 		atomic_set(&tgtp->xmt_abort_unsol, 0);
@@ -1298,6 +1315,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 	uint16_t xri = bf_get(lpfc_wcqe_xa_xri, axri);
 	uint16_t rxid = bf_get(lpfc_wcqe_xa_remote_xid, axri);
 	struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
+	struct lpfc_nvmet_tgtport *tgtp;
 	struct lpfc_nodelist *ndlp;
 	unsigned long iflag = 0;
 	int rrq_empty = 0;
@@ -1308,6 +1326,12 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 
 	if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME))
 		return;
+
+	if (phba->targetport) {
+		tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+		atomic_inc(&tgtp->xmt_fcp_xri_abort_cqe);
+	}
+
 	spin_lock_irqsave(&phba->hbalock, iflag);
 	spin_lock(&phba->sli4_hba.abts_nvme_buf_list_lock);
 	list_for_each_entry_safe(ctxp, next_ctxp,
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
index 03096024e073..5b32c9e4d4ef 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ b/drivers/scsi/lpfc/lpfc_nvmet.h
@@ -47,6 +47,8 @@ struct lpfc_nvmet_tgtport {
 
 	/* Stats counters - lpfc_nvmet_xmt_ls_rsp_cmp */
 	atomic_t xmt_ls_rsp_error;
+	atomic_t xmt_ls_rsp_aborted;
+	atomic_t xmt_ls_rsp_xb_set;
 	atomic_t xmt_ls_rsp_cmpl;
 
 	/* Stats counters - lpfc_nvmet_unsol_fcp_buffer */
@@ -64,12 +66,15 @@ struct lpfc_nvmet_tgtport {
 	atomic_t xmt_fcp_rsp;
 
 	/* Stats counters - lpfc_nvmet_xmt_fcp_op_cmp */
+	atomic_t xmt_fcp_rsp_xb_set;
 	atomic_t xmt_fcp_rsp_cmpl;
 	atomic_t xmt_fcp_rsp_error;
+	atomic_t xmt_fcp_rsp_aborted;
 	atomic_t xmt_fcp_rsp_drop;
 
 
 	/* Stats counters - lpfc_nvmet_xmt_fcp_abort */
+	atomic_t xmt_fcp_xri_abort_cqe;
 	atomic_t xmt_fcp_abort;
 	atomic_t xmt_fcp_abort_cmpl;
 	atomic_t xmt_abort_sol;
-- 
2.13.1

^ permalink raw reply related	[flat|nested] 20+ messages in thread
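
The counters added in this patch follow one pattern throughout: atomic_t
fields incremented at each interesting error or abort point, then dumped
by accumulating snprintf() calls in the sysfs/debugfs show routines. A
standalone userspace sketch of that pattern, with C11 atomics standing
in for the kernel's atomic_t and hypothetical field names:

#include <stdatomic.h>
#include <stdio.h>

struct lport_stats {
	atomic_uint xmt_fcp_noxri;	/* submissions failed: no XRI */
	atomic_uint cmpl_fcp_err;	/* completions with error status */
};

static void submit_path(struct lport_stats *s, int have_xri)
{
	if (!have_xri) {
		atomic_fetch_add(&s->xmt_fcp_noxri, 1);
		return;		/* count the path, then fail as before */
	}
	/* ... issue the WQE ... */
}

static void cmpl_path(struct lport_stats *s, int status)
{
	if (status)
		atomic_fetch_add(&s->cmpl_fcp_err, 1);
}

/* Mirrors the len += snprintf(buf + len, size - len, ...) accumulation
 * used by the show routines in this patch. */
static int stats_show(struct lport_stats *s, char *buf, int size)
{
	int len = 0;

	len += snprintf(buf + len, size - len, "FCP Xmt Err: noxri %06x\n",
			atomic_load(&s->xmt_fcp_noxri));
	len += snprintf(buf + len, size - len, "FCP Cmpl Err: Err %08x\n",
			atomic_load(&s->cmpl_fcp_err));
	return len;
}

int main(void)
{
	struct lport_stats s = {0};
	char buf[256];

	submit_path(&s, 0);	/* no XRI available: counted */
	cmpl_path(&s, 1);	/* errored completion: counted */
	stats_show(&s, buf, sizeof(buf));
	fputs(buf, stdout);
	return 0;
}

Since the counters are only ever incremented and read, plain atomics
suffice and no locking is added on the io hot paths.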

* [PATCH 9/9] lpfc: update driver version to 11.4.0.6
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
                   ` (7 preceding siblings ...)
  2017-12-09  1:18 ` [PATCH 8/9] lpfc: Beef up stat counters for debug James Smart
@ 2017-12-09  1:18 ` James Smart
  2017-12-13 14:55   ` Hannes Reinecke
  2017-12-15  2:35 ` [PATCH 0/9] lpfc updates for 11.4.0.6 Martin K. Petersen
  9 siblings, 1 reply; 20+ messages in thread
From: James Smart @ 2017-12-09  1:18 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Dick Kennedy, James Smart

Update the driver version to 11.4.0.6

Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_version.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
index cc2f5cec98c5..c232bf0e8998 100644
--- a/drivers/scsi/lpfc/lpfc_version.h
+++ b/drivers/scsi/lpfc/lpfc_version.h
@@ -20,7 +20,7 @@
  * included with this package.                                     *
  *******************************************************************/
 
-#define LPFC_DRIVER_VERSION "11.4.0.5"
+#define LPFC_DRIVER_VERSION "11.4.0.6"
 #define LPFC_DRIVER_NAME		"lpfc"
 
 /* Used for SLI 2/3 */
-- 
2.13.1

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/9] lpfc: Fix random heartbeat timeouts during heavy IO
  2017-12-09  1:18 ` [PATCH 1/9] lpfc: Fix random heartbeat timeouts during heavy IO James Smart
@ 2017-12-13 11:33   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 11:33 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> NVME targets appear to randomly disconnect from the initiator
> when running heavy IO.
> 
> The error is due to the host aggregate (across all controllers)
> io load was beyond the maximum exchange count for nvme on the
> adapter. The driver was properly returning a resource busy status,
> but the io load was so great heartbeat commands would be bounced
> and not have a successful retry within the fuzz amount for the
> nvme heartbeat (yes, a very high io load!). Thus the target was
> terminating the controller due to a keep alive failure.
> 
> Resolve by reserving a few exchanges (by counters) which can be
> used when the adapter is out of normal exchanges and the command
> is a NVME heartbeat command. As counters are used, while the
> reserved command is outstanding, as soon as any other exchange
> completes, the counters are adjusted and the reserved count is
> replenished. The heartbeat completes execution in a normal fashion.
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc.h      |  2 ++
>  drivers/scsi/lpfc/lpfc_init.c | 16 ++++++++++-
>  drivers/scsi/lpfc/lpfc_nvme.c | 66 +++++++++++++++++++++++++++++--------------
>  drivers/scsi/lpfc/lpfc_nvme.h |  1 +
>  4 files changed, 63 insertions(+), 22 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread
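
The reservation scheme the commit message describes reduces to a small
counter model: normal io may consume exchanges only up to the total
minus a reserve, an expedite command (the NVME heartbeat) may dip into
the reserve, and any completion replenishes the pool. A sketch with
illustrative constants and names (not the driver's):

#include <stdbool.h>
#include <stdio.h>

#define MAX_XCHG	8	/* total exchanges on the "adapter" */
#define RSVD_XCHG	2	/* held back for expedite (keep-alive) use */

static int inflight;		/* exchanges currently outstanding */

/* Normal io may only consume up to MAX_XCHG - RSVD_XCHG; an expedite
 * command may dip into the reserved slots. */
static bool get_exchange(bool expedite)
{
	int limit = expedite ? MAX_XCHG : MAX_XCHG - RSVD_XCHG;

	if (inflight >= limit)
		return false;	/* driver returns resource-busy here */
	inflight++;
	return true;
}

static void put_exchange(void)
{
	/* Any completion replenishes the pool, refilling the reserve. */
	inflight--;
}

int main(void)
{
	/* Saturate the normal pool. */
	while (get_exchange(false))
		;
	printf("normal io busy at %d in flight\n", inflight);

	/* The heartbeat still gets through via the reserved slots. */
	printf("heartbeat %s\n", get_exchange(true) ? "issued" : "bounced");

	put_exchange();		/* first completion refills the reserve */
	return 0;
}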

* Re: [PATCH 2/9] lpfc: Fix -EOVERFLOW behavior for NVMET and defer_rcv
  2017-12-09  1:18 ` [PATCH 2/9] lpfc: Fix -EOVERFLOW behavior for NVMET and defer_rcv James Smart
@ 2017-12-13 14:41   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:41 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> The driver is all set to handle the defer_rcv api for the
> nvmet_fc transport, yet didn't properly recognize the return
> status when the defer_rcv occurred. The driver treated it simply
> as an error and aborted the io. Several residual issues occurred
> at that point.
> 
> Finish the defer_rcv support: recognize the return status when
> the io request is being handled in a deferred style. This stops
> the rogue aborts; Replenish the async cmd rcv buffer in the
> deferred receive if needed.
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvmet.c | 24 ++++++++++++++++++++++--
>  drivers/scsi/lpfc/lpfc_nvmet.h |  1 +
>  drivers/scsi/lpfc/lpfc_sli.c   | 24 +++++++++++++-----------
>  3 files changed, 36 insertions(+), 13 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread
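
The behavioral point of this patch fits in a few lines: the transport's
-EOVERFLOW return means "request deferred, buffer retained", not a
failure that should trigger an abort. A sketch of the dispatch (the
function is hypothetical; only the meaning of -EOVERFLOW comes from the
patch):

#include <errno.h>
#include <stdio.h>

/* Returns 0 when the io is accepted (queued or deferred); a negative
 * errno only for real failures that require an abort. */
static int rcv_fcp_req(int rc_from_transport)
{
	int rc = rc_from_transport;

	if (rc == 0) {
		puts("request queued normally");
		return 0;
	}
	if (rc == -EOVERFLOW) {
		/* Deferred receive: the transport holds the command
		 * buffer and will call defer_rcv() later. Replenish the
		 * async rcv buffer instead of aborting the exchange. */
		puts("request deferred; buffer handed to transport");
		return 0;
	}
	puts("real error: abort the io and free the buffer");
	return rc;
}

int main(void)
{
	rcv_fcp_req(0);
	rcv_fcp_req(-EOVERFLOW);
	rcv_fcp_req(-ENXIO);
	return 0;
}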

* Re: [PATCH 3/9] lpfc: Fix receive PRLI handling
  2017-12-09  1:18 ` [PATCH 3/9] lpfc: Fix receive PRLI handling James Smart
@ 2017-12-13 14:42   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:42 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> Handling a rcv'ed PRLI incorrectly can cause the ndlp to end up
> in the wrong state or the driver to ACC a PRLI when it should
> send LS_RJT.
> 
> The cause was the driver not properly looking at the PRLI type
> and not taking multiple protocol support into consideration.
> 
> Resolved by adding checks in the various PRLI receive points to
> validate PRLI type and reject if not valid for the enabled protocols
> and mode (host vs target).
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_els.c       |  7 -----
>  drivers/scsi/lpfc/lpfc_nportdisc.c | 54 +++++++++++++++++++++++++++++++++-----
>  2 files changed, 47 insertions(+), 14 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread
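
Validating a received PRLI against the enabled protocols amounts to a
type check before deciding between ACC and LS_RJT. A sketch (the FC-4
type codes, 0x08 for FCP and 0x28 for NVME, follow the standard; the
check itself is illustrative, not the lpfc code):

#include <stdbool.h>
#include <stdio.h>

#define PRLI_FCP_TYPE	0x08
#define PRLI_NVME_TYPE	0x28

struct port_cfg {
	bool fcp_enabled;
	bool nvme_enabled;
};

/* Return true to ACC the PRLI, false to send LS_RJT: a received PRLI
 * is only valid if its FC-4 type matches a protocol this port has
 * enabled in its current mode (host vs target). */
static bool prli_type_ok(const struct port_cfg *cfg, unsigned int type)
{
	switch (type) {
	case PRLI_FCP_TYPE:
		return cfg->fcp_enabled;
	case PRLI_NVME_TYPE:
		return cfg->nvme_enabled;
	default:
		return false;
	}
}

int main(void)
{
	struct port_cfg cfg = { .fcp_enabled = true, .nvme_enabled = false };

	printf("FCP PRLI:  %s\n",
	       prli_type_ok(&cfg, PRLI_FCP_TYPE) ? "ACC" : "LS_RJT");
	printf("NVME PRLI: %s\n",
	       prli_type_ok(&cfg, PRLI_NVME_TYPE) ? "ACC" : "LS_RJT");
	return 0;
}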

* Re: [PATCH 4/9] lpfc: Increase SCSI CQ and WQ sizes.
  2017-12-09  1:18 ` [PATCH 4/9] lpfc: Increase SCSI CQ and WQ sizes James Smart
@ 2017-12-13 14:44   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:44 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> Increased the sizes of the SCSI WQs and CQs so that SCSI operation is
> similar to that used by NVME. However, the size increase is restricted
> to those newer adapters that can support the larger WQE size, and thus
> the bigger queue sizes.
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_init.c | 65 ++++++++++++++++++++++++++++---------------
>  drivers/scsi/lpfc/lpfc_nvme.h |  2 --
>  drivers/scsi/lpfc/lpfc_sli4.h |  6 ++--
>  3 files changed, 46 insertions(+), 27 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread
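
The gating logic is essentially a one-liner: pick the larger queue
depth only when the adapter reports support for the larger WQE. A
sketch assuming the 64- vs 128-byte WQE split on SLI-4 adapters; the
depth constants are illustrative, not the driver's actual values:

#include <stdio.h>

#define WQE_SZ_64	64
#define WQE_SZ_128	128
#define WQ_DEPTH_SMALL	256
#define WQ_DEPTH_LARGE	1024

/* Bigger queue sizes are restricted to newer adapters that can take
 * the larger WQE; older adapters keep the small depth. */
static int pick_wq_depth(int wqe_size_supported)
{
	return wqe_size_supported >= WQE_SZ_128 ? WQ_DEPTH_LARGE
						: WQ_DEPTH_SMALL;
}

int main(void)
{
	printf("old adapter: WQ depth %d\n", pick_wq_depth(WQE_SZ_64));
	printf("new adapter: WQ depth %d\n", pick_wq_depth(WQE_SZ_128));
	return 0;
}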

* Re: [PATCH 5/9] lpfc: Fix SCSI LUN discovery when SCSI and NVME enabled
  2017-12-09  1:18 ` [PATCH 5/9] lpfc: Fix SCSI LUN discovery when SCSI and NVME enabled James Smart
@ 2017-12-13 14:47   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:47 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> When enabled for both SCSI and NVME support, and connected pt2pt to a
> SCSI only target, the driver nodelist entry for the remote port is
> left in PRLI_ISSUE state and no SCSI LUNs are discovered. Works fine
> if only configured for SCSI support.
> 
> Error was due to some of the prli points still reflecting the need
> to send only 1 PRLI. On a lot of fabric configs, targets were NVME
> only, which meant the fabric-reported protocol attributes were only
> telling the driver one protocol or the other. Thus things worked
> fine. With pt2pt, the driver must send a PRLI for both protocols as
> there are no hints on what the target supports. Thus pt2pt targets
> were hitting the multiple PRLI issues.
> 
> Complete the dual PRLI support. Track explicitly whether scsi (fcp)
> or nvme prli's have been sent. Accurately track protocol support
> detected on each node as reported by the fabric or probed by PRLI
> traffic.
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_ct.c        |  1 +
>  drivers/scsi/lpfc/lpfc_els.c       | 30 ++++++++++++++++++++----------
>  drivers/scsi/lpfc/lpfc_nportdisc.c | 30 +++++++++++++-----------------
>  3 files changed, 34 insertions(+), 27 deletions(-)
> Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread
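
Tracking both PRLIs explicitly is a small state machine on per-node
flags: record each PRLI as it is sent and only leave PRLI_ISSUE when
every outstanding one has been answered. A sketch with hypothetical
flag and field names:

#include <stdio.h>

#define FC4_FCP		0x1
#define FC4_NVME	0x2

struct node {
	unsigned int fc4_types;		/* protocols to probe on this node */
	unsigned int prli_pending;	/* PRLIs sent and not yet answered */
};

static void issue_prlis(struct node *n)
{
	/* pt2pt gives no fabric hints, so probe every enabled protocol
	 * and remember each outstanding PRLI individually. */
	n->prli_pending = n->fc4_types;
}

static void prli_cmpl(struct node *n, unsigned int which)
{
	n->prli_pending &= ~which;
	if (!n->prli_pending)
		puts("all PRLIs answered: leave PRLI_ISSUE, start discovery");
	else
		puts("still waiting on another PRLI");
}

int main(void)
{
	struct node n = { .fc4_types = FC4_FCP | FC4_NVME };

	issue_prlis(&n);
	prli_cmpl(&n, FC4_FCP);		/* nvme PRLI still outstanding */
	prli_cmpl(&n, FC4_NVME);	/* now discovery can advance */
	return 0;
}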

* Re: [PATCH 6/9] lpfc: Fix issues connecting with nvme initiator
  2017-12-09  1:18 ` [PATCH 6/9] lpfc: Fix issues connecting with nvme initiator James Smart
@ 2017-12-13 14:50   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:50 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> In the lpfc discovery engine, when acting as a nvme target, the
> driver could be performing mailbox io with the adapter for port
> login when a NVME PRLI was received from the host. Rather than
> queue the PRLI and eventually send a response after the mailbox
> traffic completed, the driver rejected the io with an error
> response.
> 
> Turns out this particular initiator didn't like the rejection values
> (unable to process command/command in progress) so it never attempted
> a retry of the PRLI. Thus the host never established nvme connectivity
> with the lpfc target.
> 
> By changing the rejection values (to Logical Busy/nothing more), the
> initiator accepted the response and would retry the PRLI, resulting in
> nvme connectivity.
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_nportdisc.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread
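
The whole fix is the choice of reject codes: FC-LS reason 0x05 (logical
busy) with no additional explanation tells the initiator the condition
is transient and worth retrying, where "unable to process" reads as
final. A sketch of building such a reject (values per the FC-LS spec;
the structure and function names are hypothetical):

#include <stdio.h>

#define LSRJT_LOGICAL_BSY	0x05
#define LSEXP_NOTHING_MORE	0x00

struct ls_rjt {
	unsigned char reason;
	unsigned char explanation;
};

/* Reject a PRLI received while mailbox io is still in flight with a
 * retryable status: "logical busy" invites the initiator to retry. */
static struct ls_rjt busy_rjt(void)
{
	struct ls_rjt rjt = {
		.reason = LSRJT_LOGICAL_BSY,
		.explanation = LSEXP_NOTHING_MORE,
	};
	return rjt;
}

int main(void)
{
	struct ls_rjt rjt = busy_rjt();

	printf("LS_RJT reason %#04x expl %#04x\n",
	       rjt.reason, rjt.explanation);
	return 0;
}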

* Re: [PATCH 7/9] lpfc: Fix infinite wait when driver unregisters a remote NVME port.
  2017-12-09  1:18 ` [PATCH 7/9] lpfc: Fix infinite wait when driver unregisters a remote NVME port James Smart
@ 2017-12-13 14:53   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:53 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> When unregistering a remote port the lpfc driver would eventually
> wait for the remoteport_unreg done callback. But the driver never
> completed the io aborts that would allow the connections to terminate,
> thus the unreg done callback was never issued.  Turns out the coding
> style of the driver allowed for the wait to occur on the same cpu
> that the deferred isr is called on. Blocking for the wait blocked the
> isr, and as the isr didn't run, the io aborts wouldn't finish.
> 
> Turns out there was never a good reason to block waiting for the
> unreg done in the first place. The driver can continue execution and
> the ref counting within the driver will do the right thing.
> 
> Resolve by removing the wait and patching up a few cases where the
> ref counting didn't look right - mainly cases where the remote port
> comes back before the aborts had completed and the unreg done had
> been called. Additionally, a few places which used pointer values
> to guide driver actions weren't protected by lock, so correct those.
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_nvme.c | 135 ++++++++++++++++--------------------------
>  1 file changed, 51 insertions(+), 84 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread
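
Dropping the wait works because object lifetime is already governed by
reference counts: every in-flight abort holds a reference, unregister
just drops its own, and teardown happens when the last reference goes.
A userspace model of the pattern, with C11 atomics standing in for the
kernel's kref:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct rport_ref {
	atomic_int refs;
};

static struct rport_ref *rport_get(struct rport_ref *r)
{
	atomic_fetch_add(&r->refs, 1);
	return r;
}

static void rport_put(struct rport_ref *r)
{
	/* fetch_sub returns the previous value: 1 means we dropped the
	 * last reference and may tear down without any blocking wait. */
	if (atomic_fetch_sub(&r->refs, 1) == 1) {
		puts("rport freed");
		free(r);
	}
}

int main(void)
{
	struct rport_ref *r = malloc(sizeof(*r));

	if (!r)
		return 1;
	atomic_init(&r->refs, 1);	/* registration holds the base ref */
	rport_get(r);			/* an outstanding abort holds one too */
	rport_put(r);			/* unregister drops its ref and returns */
	rport_put(r);			/* abort completion drops the last ref */
	return 0;
}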

* Re: [PATCH 8/9] lpfc: Beef up stat counters for debug
  2017-12-09  1:18 ` [PATCH 8/9] lpfc: Beef up stat counters for debug James Smart
@ 2017-12-13 14:54   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:54 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> If log verbose is not turned on, it's hard to tell when
> certain error paths get hit. Add stats counters and
> corresponding logic to debugfs/sysfs to aid in understanding
> which paths were traversed.
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_attr.c    | 46 ++++++++++++++++++++++++++++++++-----
>  drivers/scsi/lpfc/lpfc_debugfs.c | 49 +++++++++++++++++++++++++++++++++++++---
>  drivers/scsi/lpfc/lpfc_nvme.c    | 43 +++++++++++++++++++++++++++++++----
>  drivers/scsi/lpfc/lpfc_nvme.h    | 13 ++++++++++-
>  drivers/scsi/lpfc/lpfc_nvmet.c   | 34 ++++++++++++++++++++++++----
>  drivers/scsi/lpfc/lpfc_nvmet.h   |  5 ++++
>  6 files changed, 171 insertions(+), 19 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 9/9] lpfc: update driver version to 11.4.0.6
  2017-12-09  1:18 ` [PATCH 9/9] lpfc: update driver version to 11.4.0.6 James Smart
@ 2017-12-13 14:55   ` Hannes Reinecke
  0 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2017-12-13 14:55 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Dick Kennedy, James Smart

On 12/09/2017 02:18 AM, James Smart wrote:
> Update the driver version to 11.4.0.6
> 
> Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
> Signed-off-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/lpfc/lpfc_version.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
> index cc2f5cec98c5..c232bf0e8998 100644
> --- a/drivers/scsi/lpfc/lpfc_version.h
> +++ b/drivers/scsi/lpfc/lpfc_version.h
> @@ -20,7 +20,7 @@
>   * included with this package.                                     *
>   *******************************************************************/
>  
> -#define LPFC_DRIVER_VERSION "11.4.0.5"
> +#define LPFC_DRIVER_VERSION "11.4.0.6"
>  #define LPFC_DRIVER_NAME		"lpfc"
>  
>  /* Used for SLI 2/3 */
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/9] lpfc updates for 11.4.0.6
  2017-12-09  1:18 [PATCH 0/9] lpfc updates for 11.4.0.6 James Smart
                   ` (8 preceding siblings ...)
  2017-12-09  1:18 ` [PATCH 9/9] lpfc: update driver version to 11.4.0.6 James Smart
@ 2017-12-15  2:35 ` Martin K. Petersen
  9 siblings, 0 replies; 20+ messages in thread
From: Martin K. Petersen @ 2017-12-15  2:35 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi


James,

> This patch set provides a number of bug fixes and 1 addition to
> the driver.

Applied to 4.16/scsi-queue. Thank you!

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 20+ messages in thread
