linux-scsi.vger.kernel.org archive mirror
* [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver
@ 2019-10-23 21:55 James Smart
  2019-10-23 21:55 ` [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
                   ` (32 more replies)
  0 siblings, 33 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart

This patch set is a request to incorporate the new Broadcom
(Emulex) FC target driver, efct, into the kernel source tree.

The driver source has been announced a couple of times, the last
version on 12/18/2018. The driver has been hosted on gitlab for
review and has received contributions from the community:
  gitlab (git@gitlab.com:jsmart/efct-Emulex_FC_Target.git)

The driver integrates into the source tree at the (new) drivers/scsi/elx
subdirectory.

The driver consists of the following components:
- A libefc_sli subdirectory: This subdirectory contains a library that
  encapsulates common definitions and routines for an Emulex SLI-4
  adapter.
- A libefc subdirectory: This subdirectory contains a library of
  common routines. Of major import are the routines that
  implement an FC discovery engine for target mode.
- An efct subdirectory: This subdirectory contains the efct target
  mode device driver. The driver utilizes the above libraries and
  plugs into the SCSI LIO interfaces. The driver is SCSI only at
  this time.

The patches populate the libraries and device driver and can only
be compiled as a complete set.

This driver is completely independent from the lpfc device driver
and there is no overlap in PCI IDs.

The patches have been cut against the 5.5/scsi-queue branch.

Thank you to those that have contributed to the driver in the past.

Review comments welcome!

-- james


James Smart (32):
  elx: libefc_sli: SLI-4 register offsets and field definitions
  elx: libefc_sli: SLI Descriptors and Queue entries
  elx: libefc_sli: Data structures and defines for mbox commands
  elx: libefc_sli: queue create/destroy/parse routines
  elx: libefc_sli: Populate and post different WQEs
  elx: libefc_sli: bmbx routines and SLI config commands
  elx: libefc_sli: APIs to setup SLI library
  elx: libefc: Generic state machine framework
  elx: libefc: Emulex FC discovery library APIs and definitions
  elx: libefc: FC Domain state machine interfaces
  elx: libefc: SLI and FC PORT state machine interfaces
  elx: libefc: Remote node state machine interfaces
  elx: libefc: Fabric node state machine interfaces
  elx: libefc: FC node ELS and state handling
  elx: efct: Data structures and defines for hw operations
  elx: efct: Driver initialization routines
  elx: efct: Hardware queues creation and deletion
  elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  elx: efct: Hardware IO and SGL initialization
  elx: efct: Hardware queues processing
  elx: efct: Unsolicited FC frame processing routines
  elx: efct: Extended link Service IO handling
  elx: efct: SCSI IO handling routines
  elx: efct: LIO backend interface routines
  elx: efct: Hardware IO submission routines
  elx: efct: link statistics and SFP data
  elx: efct: xport and hardware teardown routines
  elx: efct: IO timeout handling routines
  elx: efct: Firmware update, async link processing
  elx: efct: scsi_transport_fc host interface support
  elx: efct: Add Makefile and Kconfig for efct driver
  elx: efct: Tie into kernel Kconfig and build process

 MAINTAINERS                            |    8 +
 drivers/scsi/Kconfig                   |    2 +
 drivers/scsi/Makefile                  |    1 +
 drivers/scsi/elx/Kconfig               |    8 +
 drivers/scsi/elx/Makefile              |   30 +
 drivers/scsi/elx/efct/efct_driver.c    | 1243 +++++
 drivers/scsi/elx/efct/efct_driver.h    |  154 +
 drivers/scsi/elx/efct/efct_els.c       | 2676 +++++++++++
 drivers/scsi/elx/efct/efct_els.h       |  139 +
 drivers/scsi/elx/efct/efct_hw.c        | 7866 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h        | 1275 ++++++
 drivers/scsi/elx/efct/efct_hw_queues.c | 1964 ++++++++
 drivers/scsi/elx/efct/efct_hw_queues.h |   66 +
 drivers/scsi/elx/efct/efct_io.c        |  288 ++
 drivers/scsi/elx/efct/efct_io.h        |  219 +
 drivers/scsi/elx/efct/efct_lio.c       | 2643 +++++++++++
 drivers/scsi/elx/efct/efct_lio.h       |  371 ++
 drivers/scsi/elx/efct/efct_scsi.c      | 1970 ++++++++
 drivers/scsi/elx/efct/efct_scsi.h      |  401 ++
 drivers/scsi/elx/efct/efct_unsol.c     | 1156 +++++
 drivers/scsi/elx/efct/efct_unsol.h     |   49 +
 drivers/scsi/elx/efct/efct_utils.c     |  662 +++
 drivers/scsi/elx/efct/efct_utils.h     |  113 +
 drivers/scsi/elx/efct/efct_xport.c     | 1728 +++++++
 drivers/scsi/elx/efct/efct_xport.h     |  216 +
 drivers/scsi/elx/include/efc_common.h  |   44 +
 drivers/scsi/elx/libefc/efc.h          |  188 +
 drivers/scsi/elx/libefc/efc_device.c   | 1977 ++++++++
 drivers/scsi/elx/libefc/efc_device.h   |   72 +
 drivers/scsi/elx/libefc/efc_domain.c   | 1393 ++++++
 drivers/scsi/elx/libefc/efc_domain.h   |   57 +
 drivers/scsi/elx/libefc/efc_fabric.c   | 2252 +++++++++
 drivers/scsi/elx/libefc/efc_fabric.h   |  116 +
 drivers/scsi/elx/libefc/efc_lib.c      |  263 ++
 drivers/scsi/elx/libefc/efc_node.c     | 1878 ++++++++
 drivers/scsi/elx/libefc/efc_node.h     |  196 +
 drivers/scsi/elx/libefc/efc_sm.c       |  275 ++
 drivers/scsi/elx/libefc/efc_sm.h       |  171 +
 drivers/scsi/elx/libefc/efc_sport.c    | 1157 +++++
 drivers/scsi/elx/libefc/efc_sport.h    |   52 +
 drivers/scsi/elx/libefc/efclib.h       |  796 ++++
 drivers/scsi/elx/libefc_sli/sli4.c     | 7522 ++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h     | 4845 ++++++++++++++++++++
 43 files changed, 48502 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_els.c
 create mode 100644 drivers/scsi/elx/efct/efct_els.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.h
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h
 create mode 100644 drivers/scsi/elx/efct/efct_utils.c
 create mode 100644 drivers/scsi/elx/efct/efct_utils.h
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h
 create mode 100644 drivers/scsi/elx/include/efc_common.h
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h
 create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.h
 create mode 100644 drivers/scsi/elx/libefc/efclib.h
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

-- 
2.13.7



* [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-24 16:22   ` Daniel Wagner
  2019-10-23 21:55 ` [PATCH 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
                   ` (31 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This is the initial patch for the new Emulex target mode SCSI
driver sources.

This patch:
- Creates the new Emulex source level directory drivers/scsi/elx
  and adds the directory to the MAINTAINERS file.
- Creates the first library subdirectory, drivers/scsi/elx/libefc_sli,
  an SLI-4 interface library.
- Starts the population of the libefc_sli library with the SLI-4
  hardware register offsets and field definitions.
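
For illustration only (not part of the patch): a minimal sketch of how a
probe path might use these register definitions to validate the SLI
interface and identify the ASIC generation. The helper name, the "reg_base"
argument, and the assumption that the revision occupies the low byte of
ASIC_ID are mine; only the SLI4_* definitions and sli4_asic_table come from
the patch.

  static bool sli4_asic_is_supported(void __iomem *reg_base)
  {
  	u32 intf, asic_id, family, rev;
  	int i;

  	/* SLI_INTF must read back the architected VALID pattern */
  	intf = readl(reg_base + SLI4_INTF_REG);
  	if ((intf & SLI4_INTF_VALID_MASK) != SLI4_INTF_VALID_VALUE)
  		return false;

  	/* generation is the GEN field of ASIC_ID; revision assumed in low byte */
  	asic_id = readl(reg_base + SLI4_ASIC_ID_REG);
  	family = asic_id & SLI4_ASIC_GEN_MASK;
  	rev = asic_id & 0xff;

  	for (i = 0; i < ARRAY_SIZE(sli4_asic_table); i++)
  		if (sli4_asic_table[i].family == family &&
  		    sli4_asic_table[i].rev_id == rev)
  			return true;

  	return false;
  }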

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 MAINTAINERS                        |   8 ++
 drivers/scsi/elx/libefc_sli/sli4.c |  26 ++++
 drivers/scsi/elx/libefc_sli/sli4.h | 252 +++++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

diff --git a/MAINTAINERS b/MAINTAINERS
index b9c0ca414a74..8c9dd55769df 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6088,6 +6088,14 @@ W:	http://www.broadcom.com
 S:	Supported
 F:	drivers/scsi/lpfc/
 
+EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
+M:	James Smart <james.smart@broadcom.com>
+M:	Ram Vegesna <ram.vegesna@broadcom.com>
+L:	linux-scsi@vger.kernel.org
+W:	http://www.broadcom.com
+S:	Supported
+F:	drivers/scsi/elx/
+
 ENE CB710 FLASH CARD READER DRIVER
 M:	Michał Mirosław <mirq-linux@rere.qmqm.pl>
 S:	Maintained
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
new file mode 100644
index 000000000000..68ccd3ad8ac8
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/**
+ * All common (i.e. transport-independent) SLI-4 functions are implemented
+ * in this file.
+ */
+#include "sli4.h"
+
+struct sli4_asic_entry_t {
+	u32 rev_id;
+	u32 family;	/* generation */
+};
+
+static struct sli4_asic_entry_t sli4_asic_table[] = {
+	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
+};
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
new file mode 100644
index 000000000000..1efbd874301a
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/*
+ * All common SLI-4 structures and function prototypes.
+ */
+
+#ifndef _SLI4_H
+#define _SLI4_H
+
+/*************************************************************************
+ * Common SLI-4 register offsets and field definitions
+ */
+
+/* SLI_INTF - SLI Interface Definition Register */
+#define SLI4_INTF_REG		0x0058
+enum {
+	SLI4_INTF_REV_SHIFT = 4,
+	SLI4_INTF_REV_MASK = 0x0F << SLI4_INTF_REV_SHIFT,
+
+	SLI4_INTF_REV_S3 = 3 << SLI4_INTF_REV_SHIFT,
+	SLI4_INTF_REV_S4 = 4 << SLI4_INTF_REV_SHIFT,
+
+	SLI4_INTF_FAMILY_SHIFT = 8,
+	SLI4_INTF_FAMILY_MASK  = 0x0F << SLI4_INTF_FAMILY_SHIFT,
+
+	SLI4_FAMILY_CHECK_ASIC_TYPE = 0xf << SLI4_INTF_FAMILY_SHIFT,
+
+	SLI4_INTF_IF_TYPE_SHIFT = 12,
+	SLI4_INTF_IF_TYPE_MASK = 0x0F << SLI4_INTF_IF_TYPE_SHIFT,
+
+	SLI4_INTF_IF_TYPE_2 = 2 << SLI4_INTF_IF_TYPE_SHIFT,
+	SLI4_INTF_IF_TYPE_6 = 6 << SLI4_INTF_IF_TYPE_SHIFT,
+
+	SLI4_INTF_VALID_SHIFT = 29,
+	SLI4_INTF_VALID_MASK = 0x0F << SLI4_INTF_VALID_SHIFT,
+
+	SLI4_INTF_VALID_VALUE = 6 << SLI4_INTF_VALID_SHIFT,
+};
+
+/* ASIC_ID - SLI ASIC Type and Revision Register */
+#define SLI4_ASIC_ID_REG	0x009c
+enum {
+	SLI4_ASIC_GEN_SHIFT = 8,
+	SLI4_ASIC_GEN_MASK = 0xFF << SLI4_ASIC_GEN_SHIFT,
+	SLI4_ASIC_GEN_5 = 0x0b << SLI4_ASIC_GEN_SHIFT,
+	SLI4_ASIC_GEN_6 = 0x0c << SLI4_ASIC_GEN_SHIFT,
+	SLI4_ASIC_GEN_7 = 0x0d << SLI4_ASIC_GEN_SHIFT,
+};
+
+enum {
+	SLI4_ASIC_REV_A0 = 0x00,
+	SLI4_ASIC_REV_A1 = 0x01,
+	SLI4_ASIC_REV_A2 = 0x02,
+	SLI4_ASIC_REV_A3 = 0x03,
+	SLI4_ASIC_REV_B0 = 0x10,
+	SLI4_ASIC_REV_B1 = 0x11,
+	SLI4_ASIC_REV_B2 = 0x12,
+	SLI4_ASIC_REV_C0 = 0x20,
+	SLI4_ASIC_REV_C1 = 0x21,
+	SLI4_ASIC_REV_C2 = 0x22,
+	SLI4_ASIC_REV_D0 = 0x30,
+};
+
+/* BMBX - Bootstrap Mailbox Register */
+#define SLI4_BMBX_REG		0x0160
+#define SLI4_BMBX_MASK_HI	0x3
+#define SLI4_BMBX_MASK_LO	0xf
+#define SLI4_BMBX_RDY		(1 << 0)
+#define SLI4_BMBX_HI		(1 << 1)
+#define SLI4_BMBX_WRITE_HI(r)	((upper_32_bits(r) & ~SLI4_BMBX_MASK_HI) | \
+					SLI4_BMBX_HI)
+#define SLI4_BMBX_WRITE_LO(r)	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) \
+				<< 30) | (((r) & ~SLI4_BMBX_MASK_LO) >> 2))
+#define SLI4_BMBX_SIZE				256
+
+/* SLIPORT_CONTROL - SLI Port Control Register */
+#define SLI4_PORT_CTRL_REG		0x0408
+#define SLI4_PORT_CTRL_IP		(1 << 27)
+#define SLI4_PORT_CTRL_IDIS		(1 << 22)
+#define SLI4_PORT_CTRL_FDD		(1 << 31)
+
+/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
+#define SLI4_PORT_ERROR1		0x040c
+#define SLI4_PORT_ERROR2		0x0410
+
+/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
+#define SLI4_EQCQ_DB_REG		0x120
+enum {
+	SLI4_EQ_ID_LO_MASK = 0x01FF,
+
+	SLI4_CQ_ID_LO_MASK = 0x03FF,
+
+	SLI4_EQCQ_CI_EQ = 0x0200,
+
+	SLI4_EQCQ_QT_EQ = 0x00000400,
+	SLI4_EQCQ_QT_CQ = 0x00000000,
+
+	SLI4_EQCQ_ID_HI_SHIFT = 11,
+	SLI4_EQCQ_ID_HI_MASK = 0xF800,
+
+	SLI4_EQCQ_NUM_SHIFT = 16,
+	SLI4_EQCQ_NUM_MASK = 0x1FFF0000,
+
+	SLI4_EQCQ_ARM = 0x20000000,
+	SLI4_EQCQ_UNARM = 0x00000000,
+
+};
+
+#define SLI4_EQ_DOORBELL(n, id, a)\
+	((id & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ |\
+	(((id >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
+	((n << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | \
+	a | SLI4_EQCQ_CI_EQ)
+
+#define SLI4_CQ_DOORBELL(n, id, a)\
+	((id & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ |\
+	(((id >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
+	((n << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | a)
+
+/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6*/
+#define SLI4_IF6_EQ_DB_REG	0x120
+enum {
+	SLI4_IF6_EQ_ID_MASK = 0x0FFF,
+
+	SLI4_IF6_EQ_NUM_SHIFT = 16,
+	SLI4_IF6_EQ_NUM_MASK = 0x1FFF0000,
+};
+
+#define SLI4_IF6_EQ_DOORBELL(n, id, a)\
+	((id & SLI4_IF6_EQ_ID_MASK) | \
+	((n << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK) | a)
+
+/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6*/
+#define SLI4_IF6_CQ_DB_REG	0xC0
+enum {
+	SLI4_IF6_CQ_ID_MASK = 0xFFFF,
+
+	SLI4_IF6_CQ_NUM_SHIFT = 16,
+	SLI4_IF6_CQ_NUM_MASK = 0x1FFF0000,
+};
+
+#define SLI4_IF6_CQ_DOORBELL(n, id, a)\
+	((id & SLI4_IF6_CQ_ID_MASK) | \
+	((n << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK) | a)
+
+/**
+ * @brief MQ_DOORBELL - MQ Doorbell Register
+ */
+#define SLI4_MQ_DB_REG		0x0140	/* register offset */
+#define SLI4_IF6_MQ_DB_REG	0x0160	/* if_type = 6*/
+enum {
+	SLI4_MQ_ID_MASK = 0xFFFF,
+
+	SLI4_MQ_NUM_SHIFT = 16,
+	SLI4_MQ_NUM_MASK = 0x3FFF0000,
+};
+
+#define SLI4_MQ_DOORBELL(n, i)\
+	((i & SLI4_MQ_ID_MASK) | \
+	((n << SLI4_MQ_NUM_SHIFT) & SLI4_MQ_NUM_MASK))
+
+/**
+ * @brief RQ_DOORBELL - RQ Doorbell Register
+ */
+#define SLI4_RQ_DB_REG		0x0a0	/* register offset */
+#define SLI4_IF6_RQ_DB_REG	0x0080	/* if_type = 6 */
+enum {
+	SLI4_RQ_DB_ID_MASK = 0xFFFF,
+
+	SLI4_RQ_DB_NUM_SHIFT = 16,
+	SLI4_RQ_DB_NUM_MASK = 0x3FFF0000,
+};
+
+#define SLI4_RQ_DOORBELL(n, i)\
+	((i & SLI4_RQ_DB_ID_MASK) | \
+	((n << SLI4_RQ_DB_NUM_SHIFT) & SLI4_RQ_DB_NUM_MASK))
+
+/**
+ * @brief WQ_DOORBELL - WQ Doorbell Register
+ */
+#define SLI4_IO_WQ_DB_REG	0x040	/* register offset */
+#define SLI4_IF6_WQ_DB_REG	0x040	/* if_type = 6 */
+enum {
+	SLI4_WQ_ID_MASK = 0xFFFF,
+
+	SLI4_WQ_IDX_SHIFT = 16,
+	SLI4_WQ_IDX_MASK = 0xFF << SLI4_WQ_IDX_SHIFT,
+
+	SLI4_WQ_NUM_SHIFT = 24,
+	SLI4_WQ_NUM_MASK = 0xFF << SLI4_WQ_NUM_SHIFT,
+};
+
+#define SLI4_WQ_DOORBELL(n, x, i)\
+	((i & SLI4_WQ_ID_MASK) | \
+	((x << SLI4_WQ_IDX_SHIFT) & SLI4_WQ_IDX_MASK) | \
+	((n << SLI4_WQ_NUM_SHIFT) & SLI4_WQ_NUM_MASK))
+
+/**
+ * @brief SLIPORT_SEMAPHORE - SLI Port Host and Port Status Register
+ */
+#define SLI4_PORT_SEMP_REG	0x0400	/* Type 2 + 3 + 6*/
+enum {
+	SLI4_PORT_SEMP_ERR_MASK = 0xF000,
+	SLI4_PORT_SEMP_UNRECOV_ERR = 0xF000,
+};
+
+/**
+ * @brief SLIPORT_STATUS - SLI Port Status Register
+ */
+#define SLI4_PORT_STATUS_REGOFF	0x0404	 /* Type 2 + 3 + 6*/
+#define SLI4_PORT_STATUS_FDP	(1 << 21)/* func specific dump present */
+#define SLI4_PORT_STATUS_RDY	(1 << 23)/* ready */
+#define SLI4_PORT_STATUS_RN	(1 << 24)/* reset needed */
+#define SLI4_PORT_STATUS_DIP	(1 << 25)/* dump present */
+#define SLI4_PORT_STATUS_OTI	(1 << 29)/* over temp indicator */
+#define SLI4_PORT_STATUS_END	(1 << 30)/* endianness */
+#define SLI4_PORT_STATUS_ERR	(1 << 31)/* SLI port error */
+
+#define SLI4_PHYDEV_CTRL_REG	0x0414	/* Type 2 + 3 + 6 */
+#define SLI4_PHYDEV_CTRL_FRST	(1 << 1)/* firmware reset */
+#define SLI4_PHYDEV_CTRL_DD	(1 << 2)/* diagnostic dump */
+
+/**
+ * @brief Register name enums
+ */
+enum sli4_regname_en {
+	SLI4_REG_BMBX,
+	SLI4_REG_EQ_DOORBELL,
+	SLI4_REG_CQ_DOORBELL,
+	SLI4_REG_RQ_DOORBELL,
+	SLI4_REG_IO_WQ_DOORBELL,
+	SLI4_REG_MQ_DOORBELL,
+	SLI4_REG_PHYSDEV_CONTROL,
+	SLI4_REG_PORT_CONTROL,
+	SLI4_REG_PORT_ERROR1,
+	SLI4_REG_PORT_ERROR2,
+	SLI4_REG_PORT_SEMAPHORE,
+	SLI4_REG_PORT_STATUS,
+	SLI4_REG_MAX			/* must be last */
+};
+
+struct sli4_reg_s {
+	u32	rset;
+	u32	off;
+};
+
+#endif /* !_SLI4_H */
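
A sketch (not from the patch) of how the sli4_regname_en values and
struct sli4_reg_s might be tied back to the offsets defined above; the
.rset (register set / BAR index) values here are placeholder assumptions.

  static const struct sli4_reg_s sli4_regmap[SLI4_REG_MAX] = {
  	[SLI4_REG_BMBX]            = { .rset = 0, .off = SLI4_BMBX_REG },
  	[SLI4_REG_EQ_DOORBELL]     = { .rset = 0, .off = SLI4_EQCQ_DB_REG },
  	[SLI4_REG_CQ_DOORBELL]     = { .rset = 0, .off = SLI4_EQCQ_DB_REG },
  	[SLI4_REG_MQ_DOORBELL]     = { .rset = 0, .off = SLI4_MQ_DB_REG },
  	[SLI4_REG_RQ_DOORBELL]     = { .rset = 0, .off = SLI4_RQ_DB_REG },
  	[SLI4_REG_IO_WQ_DOORBELL]  = { .rset = 0, .off = SLI4_IO_WQ_DB_REG },
  	[SLI4_REG_PHYSDEV_CONTROL] = { .rset = 0, .off = SLI4_PHYDEV_CTRL_REG },
  	[SLI4_REG_PORT_CONTROL]    = { .rset = 0, .off = SLI4_PORT_CTRL_REG },
  	[SLI4_REG_PORT_ERROR1]     = { .rset = 0, .off = SLI4_PORT_ERROR1 },
  	[SLI4_REG_PORT_ERROR2]     = { .rset = 0, .off = SLI4_PORT_ERROR2 },
  	[SLI4_REG_PORT_SEMAPHORE]  = { .rset = 0, .off = SLI4_PORT_SEMP_REG },
  	[SLI4_REG_PORT_STATUS]     = { .rset = 0, .off = SLI4_PORT_STATUS_REGOFF },
  };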
-- 
2.13.7



* [PATCH 02/32] elx: libefc_sli: SLI Descriptors and Queue entries
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2019-10-23 21:55 ` [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-25  9:59   ` Daniel Wagner
  2019-10-23 21:55 ` [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
                   ` (30 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds SLI-4 data structures and defines for:
- Buffer Descriptors (BDEs)
- Scatter/Gather List elements (SGEs)
- Queues and their Entry Descriptions for:
   Event Queues (EQs), Completion Queues (CQs),
   Receive Queues (RQs), and the Mailbox Queue (MQ).
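
For illustration only (not part of the patch): a minimal sketch, using the
BDE definitions added here, of how a 64-bit data BDE might be filled in.
The helper name and its arguments are assumptions; the length mask and the
type shift come from the patch.

  static void sli4_bde_set_data64(struct sli4_bde_s *bde, dma_addr_t pa, u32 len)
  {
  	/* low 24 bits carry the buffer length, top byte the BDE type */
  	bde->bde_type_buflen =
  		cpu_to_le32((len & SLI4_BDE_MASK_BUFFER_LEN) |
  			    (BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT));
  	bde->u.data.low  = cpu_to_le32(lower_32_bits(pa));
  	bde->u.data.high = cpu_to_le32(upper_32_bits(pa));
  }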

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/include/efc_common.h |   26 +
 drivers/scsi/elx/libefc_sli/sli4.h    | 2015 +++++++++++++++++++++++++++++++++
 2 files changed, 2041 insertions(+)
 create mode 100644 drivers/scsi/elx/include/efc_common.h

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
new file mode 100644
index 000000000000..dbabc4f6ee5e
--- /dev/null
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFC_COMMON_H__)
+#define __EFC_COMMON_H__
+
+#define EFC_SUCCESS 0
+#define EFC_FAIL 1
+
+struct efc_dma_s {
+	void		*virt;	/* virtual address of the memory
+				 * used by the CPU
+				 */
+	void            *alloc;
+	dma_addr_t	phys;	/* physical or bus address of the memory used
+				 * by the hardware
+				 */
+	size_t		size;	/* size in bytes of the memory */
+	size_t          len;
+	struct pci_dev	*pdev;
+};
+
+#endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 1efbd874301a..ebc6a67e9c8c 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -12,6 +12,8 @@
 #ifndef _SLI4_H
 #define _SLI4_H
 
+#include "../include/efc_common.h"
+
 /*************************************************************************
  * Common SLI-4 register offsets and field definitions
  */
@@ -249,4 +251,2017 @@ struct sli4_reg_s {
 	u32	off;
 };
 
+struct sli4_dmaaddr_s {
+	__le32 low;
+	__le32 high;
+};
+
+/* a 3-word BDE with address 1st 2 words, length last word */
+struct sli4_bufptr_s {
+	struct sli4_dmaaddr_s addr;
+	__le32 length;
+};
+
+/* a 3-word BDE with length as first word, address last 2 words */
+struct sli4_bufptr_len1st_s {
+	__le32 length0;		/* note byte offset suffix as a sanity check */
+	struct sli4_dmaaddr_s addr;
+};
+
+/**
+ * @brief Buffer Descriptor Entry (BDE)
+ */
+enum {
+	SLI4_BDE_MASK_BUFFER_LEN	= 0x00ffffff,
+	SLI4_BDE_MASK_BDE_TYPE		= 0xff000000,
+};
+
+struct sli4_bde_s {
+	__le32		bde_type_buflen;
+	union {
+		struct sli4_dmaaddr_s data;
+		struct {
+			__le32	offset;
+			__le32	rsvd2;
+		} imm;
+		struct sli4_dmaaddr_s blp;
+	} u;
+};
+
+/* Buffer Descriptors */
+enum {
+	BDE_TYPE_SHIFT		= 24,	/* Generic 64-bit data */
+	BDE_TYPE_BDE_64		= 0x00,	/* Generic 64-bit data */
+	BDE_TYPE_BDE_IMM	= 0x01,	/* Immediate data */
+	BDE_TYPE_BLP		= 0x40,	/* Buffer List Pointer */
+};
+
+/**
+ * @brief Scatter-Gather Entry (SGE)
+ */
+
+#define SLI4_SGE_MAX_RESERVED			3
+
+enum {
+	/* DW2 */
+	SLI4_SGE_DATA_OFFSET_MASK	= 0x07FFFFFF,	/* DW2 */
+	/*DW2W1*/
+	SLI4_SGE_TYPE_SHIFT		= 27,
+	SLI4_SGE_TYPE_MASK		= 0xf << SLI4_SGE_TYPE_SHIFT,
+	/*SGE Types*/
+	SLI4_SGE_TYPE_DATA		= 0x00,
+	SLI4_SGE_TYPE_DIF		= 0x04,	/* Data Integrity Field */
+	SLI4_SGE_TYPE_LSP		= 0x05,	/* List Segment Pointer */
+	SLI4_SGE_TYPE_PEDIF		= 0x06,	/* Post Encryption Engine DIF */
+	SLI4_SGE_TYPE_PESEED		= 0x07,	/* Post Encryption DIF Seed */
+	SLI4_SGE_TYPE_DISEED		= 0x08,	/* DIF Seed */
+	SLI4_SGE_TYPE_ENC		= 0x09,	/* Encryption */
+	SLI4_SGE_TYPE_ATM		= 0x0a,	/* DIF Application Tag Mask */
+	SLI4_SGE_TYPE_SKIP		= 0x0c,	/* SKIP */
+
+	SLI4_SGE_LAST			= (1 << 31),
+};
+
+struct sli4_sge_s {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		buffer_length;
+};
+
+/**
+ * @brief T10 DIF Scatter-Gather Entry (SGE)
+ */
+struct sli4_dif_sge_s {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		rsvd12;
+};
+
+/**
+ * @brief Data Integrity Seed (DISEED) SGE
+ */
+enum {
+	/* DW2W1 */
+	DISEED_SGE_HS			= (1 << 2),
+	DISEED_SGE_WS			= (1 << 3),
+	DISEED_SGE_IC			= (1 << 4),
+	DISEED_SGE_ICS			= (1 << 5),
+	DISEED_SGE_ATRT			= (1 << 6),
+	DISEED_SGE_AT			= (1 << 7),
+	DISEED_SGE_FAT			= (1 << 8),
+	DISEED_SGE_NA			= (1 << 9),
+	DISEED_SGE_HI			= (1 << 10),
+
+	/* DW3W1 */
+	DISEED_SGE_BS_MASK		= 0x0007,
+	DISEED_SGE_AI			= (1 << 3),
+	DISEED_SGE_ME			= (1 << 4),
+	DISEED_SGE_RE			= (1 << 5),
+	DISEED_SGE_CE			= (1 << 6),
+	DISEED_SGE_NR			= (1 << 7),
+
+	DISEED_SGE_OP_RX_SHIFT		= 8,
+	DISEED_SGE_OP_RX_MASK		= (0xf << DISEED_SGE_OP_RX_SHIFT),
+	DISEED_SGE_OP_TX_SHIFT		= 12,
+	DISEED_SGE_OP_TX_MASK		= (0xf << DISEED_SGE_OP_TX_SHIFT),
+
+	/* Opcode values */
+	DISEED_SGE_OP_IN_NODIF_OUT_CRC	= 0x00,
+	DISEED_SGE_OP_IN_CRC_OUT_NODIF	= 0x01,
+	DISEED_SGE_OP_IN_NODIF_OUT_CSUM	= 0x02,
+	DISEED_SGE_OP_IN_CSUM_OUT_NODIF	= 0x03,
+	DISEED_SGE_OP_IN_CRC_OUT_CRC	= 0x04,
+	DISEED_SGE_OP_IN_CSUM_OUT_CSUM	= 0x05,
+	DISEED_SGE_OP_IN_CRC_OUT_CSUM	= 0x06,
+	DISEED_SGE_OP_IN_CSUM_OUT_CRC	= 0x07,
+	DISEED_SGE_OP_IN_RAW_OUT_RAW	= 0x08,
+
+};
+
+#define DISEED_SGE_OP_RX_VALUE(stype)	\
+	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_RX_SHIFT)
+#define DISEED_SGE_OP_TX_VALUE(stype)	\
+	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_TX_SHIFT)
+
+struct sli4_diseed_sge_s {
+	__le32		ref_tag_cmp;
+	__le32		ref_tag_repl;
+	__le16		app_tag_repl;
+	__le16		dw2w1_flags;
+	__le16		app_tag_cmp;
+	__le16		dw3w1_flags;
+};
+
+/**
+ * @brief List Segment Pointer Scatter-Gather Entry (SGE)
+ */
+enum {
+	SLI4_LSP_SGE_SEGLEN	= 0x00ffffff,		/* DW3 */
+};
+
+struct sli4_lsp_sge_s {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		dw3_seglen;
+};
+
+/**
+ * @brief Event Queue Entry
+ */
+enum {
+	SLI4_EQE_VALID	= 1,
+	SLI4_EQE_MJCODE	= 0xe,
+	SLI4_EQE_MNCODE	= 0xfff0,
+};
+
+struct sli4_eqe_s {
+	__le16		dw0w0_flags;
+	__le16		resource_id;
+};
+
+#define SLI4_MAJOR_CODE_STANDARD	0
+#define SLI4_MAJOR_CODE_SENTINEL	1
+
+/**
+ * @brief Mailbox Completion Queue Entry
+ *
+ * A CQE generated on the completion of a MQE from a MQ.
+ */
+enum {
+	SLI4_MCQE_CONSUMED	= (1 << 27),
+	SLI4_MCQE_COMPLETED	= (1 << 28),
+	SLI4_MCQE_AE		= (1 << 30),
+	SLI4_MCQE_VALID		= (1 << 31),
+};
+
+struct sli4_mcqe_s {
+	__le16		completion_status;
+	__le16		extended_status;
+	__le32		mqe_tag_low;
+	__le32		mqe_tag_high;
+	__le32		dw3_flags;
+};
+
+/**
+ * @brief Asynchronous Completion Queue Entry
+ *
+ * A CQE generated asynchronously in response
+ * to the link or other internal events.
+ */
+enum {
+	SLI4_ACQE_AE	= (1 << 6), /** async event - this is an ACQE */
+	SLI4_ACQE_VAL	= (1 << 7), /** valid - contents of CQE are valid */
+};
+
+struct sli4_acqe_s {
+	__le32		event_data[3];
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;	/* values are protocol specific */
+	u8		ae_val;
+};
+
+#define SLI4_ACQE_EVENT_CODE_LINK_STATE		0x01
+#define SLI4_ACQE_EVENT_CODE_FIP		0x02
+#define SLI4_ACQE_EVENT_CODE_DCBX		0x03
+#define SLI4_ACQE_EVENT_CODE_ISCSI		0x04
+#define SLI4_ACQE_EVENT_CODE_GRP_5		0x05
+#define SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT	0x10
+#define SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT	0x11
+#define SLI4_ACQE_EVENT_CODE_VF_EVENT		0x12
+#define SLI4_ACQE_EVENT_CODE_MR_EVENT		0x13
+
+enum sli4_qtype_e {
+	SLI_QTYPE_EQ,
+	SLI_QTYPE_CQ,
+	SLI_QTYPE_MQ,
+	SLI_QTYPE_WQ,
+	SLI_QTYPE_RQ,
+	SLI_QTYPE_MAX,			/* must be last */
+};
+
+#define SLI_USER_MQ_COUNT	1	/** User specified max mail queues */
+#define SLI_MAX_CQ_SET_COUNT	16
+#define SLI_MAX_RQ_SET_COUNT	16
+
+enum sli4_qentry_e {
+	SLI_QENTRY_ASYNC,
+	SLI_QENTRY_MQ,
+	SLI_QENTRY_RQ,
+	SLI_QENTRY_WQ,
+	SLI_QENTRY_WQ_RELEASE,
+	SLI_QENTRY_OPT_WRITE_CMD,
+	SLI_QENTRY_OPT_WRITE_DATA,
+	SLI_QENTRY_XABT,
+	SLI_QENTRY_MAX			/* must be last */
+};
+
+enum {
+	/* CQ has MQ/Async completion */
+	SLI4_QUEUE_FLAG_MQ	= (1 << 0),
+
+	/* RQ for packet headers */
+	SLI4_QUEUE_FLAG_HDR	= (1 << 1),
+
+	/* RQ index increment by 8 */
+	SLI4_QUEUE_FLAG_RQBATCH	= (1 << 2),
+};
+
+struct sli4_queue_s {
+	/* Common to all queue types */
+	struct efc_dma_s	dma;
+	spinlock_t	lock;	/* protect the queue operations */
+	u32	index;		/* current host entry index */
+	u16	size;		/* entry size */
+	u16	length;		/* number of entries */
+	u16	n_posted;	/* number entries posted */
+	u16	id;		/* Port assigned xQ_ID */
+	u16	ulp;		/* ULP assigned to this queue */
+	void __iomem    *db_regaddr;	/* register address for the doorbell */
+	u8		type;		/* queue type ie EQ, CQ, ... */
+	u32	proc_limit;	/* limit CQE processed per iteration */
+	u32	posted_limit;	/* CQEs/EQEs processed before ringing doorbell */
+	u32	max_num_processed;
+	time_t		max_process_time;
+	u16	phase;		/* For if_type = 6, this value toggle
+				 * for each iteration of the queue,
+				 * a queue entry is valid when a cqe
+				 * valid bit matches this value
+				 */
+
+	/* Type specific gunk */
+	union {
+		u32	r_idx;	/** "read" index (MQ only) */
+		struct {
+			u32	dword;
+		} flag;
+	} u;
+};
+
+/**
+ * @brief Generic Command Request header
+ */
+enum {
+	CMD_V0 = 0x00,
+	CMD_V1 = 0x01,
+	CMD_V2 = 0x02,
+};
+
+struct sli4_rqst_hdr_s {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	__le32		timeout;
+	__le32		request_length;
+	u32		dw3_version;
+};
+
+/**
+ * @brief Generic Command Response header
+ */
+struct sli4_rsp_hdr_s {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	u8		status;
+	u8		additional_status;
+	__le16		rsvd6;
+	__le32		response_length;
+	__le32		actual_response_length;
+};
+
+#define SLI4_QUEUE_DEFAULT_CQ	U16_MAX /** Use the default CQ */
+
+#define SLI4_QUEUE_RQ_BATCH	8
+
+#define CFG_RQST_CMDSZ(stype)    sizeof(struct sli4_rqst_##stype##_s)
+
+#define CFG_RQST_PYLD_LEN(stype)	\
+		cpu_to_le32(sizeof(struct sli4_rqst_##stype##_s) -	\
+			sizeof(struct sli4_rqst_hdr_s))
+
+#define CFG_RQST_PYLD_LEN_VAR(stype, varpyld)	\
+		cpu_to_le32((sizeof(struct sli4_rqst_##stype##_s) +	\
+			varpyld) - sizeof(struct sli4_rqst_hdr_s))
+
+#define SZ_DMAADDR              sizeof(struct sli4_dmaaddr_s)
+
+/* Payload length must accommodate both request and response */
+#define SLI_CONFIG_PYLD_LENGTH(stype)	\
+	max(sizeof(struct sli4_rqst_##stype##_s),		\
+		sizeof(struct sli4_rsp_##stype##_s))
+
+/**
+ * @brief COMMON_CREATE_CQ_V2
+ *
+ * Create a Completion Queue.
+ */
+enum {
+	/* DW5_flags values*/
+	CREATE_CQV2_CLSWM_MASK	= 0x00003000,
+	CREATE_CQV2_NODELAY	= 0x00004000,
+	CREATE_CQV2_AUTOVALID	= 0x00008000,
+	CREATE_CQV2_CQECNT_MASK	= 0x18000000,
+	CREATE_CQV2_VALID	= 0x20000000,
+	CREATE_CQV2_EVT		= 0x80000000,
+	/* DW6W1_flags values*/
+	CREATE_CQV2_ARM		= 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_v2_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	u8		page_size;
+	u8		rsvd19;
+	__le32		dw5_flags;
+	__le16		eq_id;
+	__le16		dw6w1_arm;
+	__le16		cqe_count;
+	__le16		rsvd30;
+	__le32		rsvd32;
+	struct sli4_dmaaddr_s page_phys_addr[0];
+};
+
+/**
+ * @brief COMMON_CREATE_CQ_SET_V0
+ *
+ * Create a set of Completion Queues.
+ */
+enum {
+	/* DW5_flags values*/
+	CREATE_CQSETV0_CLSWM_MASK  = 0x00003000,
+	CREATE_CQSETV0_NODELAY	   = 0x00004000,
+	CREATE_CQSETV0_AUTOVALID   = 0x00008000,
+	CREATE_CQSETV0_CQECNT_MASK = 0x18000000,
+	CREATE_CQSETV0_VALID	   = 0x20000000,
+	CREATE_CQSETV0_EVT	   = 0x80000000,
+	/* DW5W1_flags values */
+	CREATE_CQSETV0_CQE_COUNT   = 0x7fff,
+	CREATE_CQSETV0_ARM	   = 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_set_v0_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	u8		page_size;
+	u8		rsvd19;
+	__le32		dw5_flags;
+	__le16		num_cq_req;
+	__le16		dw6w1_flags;
+	__le16		eq_id[16];
+	struct sli4_dmaaddr_s page_phys_addr[0];
+};
+
+/**
+ * CQE count.
+ */
+enum {
+	CQ_CNT_SHIFT	= 27,
+
+	CQ_CNT_256	= 0,
+	CQ_CNT_512	= 1,
+	CQ_CNT_1024	= 2,
+	CQ_CNT_LARGE	= 3,
+};
+#define CQ_CNT_VAL(type) (CQ_CNT_##type << CQ_CNT_SHIFT)
+
+#define SLI4_CQE_BYTES			(4 * sizeof(u32))
+
+#define SLI4_COMMON_CREATE_CQ_V2_MAX_PAGES 8
+
+/**
+ * @brief Generic Common Create EQ/CQ/MQ/WQ/RQ Queue completion
+ */
+struct sli4_rsp_cmn_create_queue_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le16	q_id;
+	u8	rsvd18;
+	u8	ulp;
+	__le32	db_offset;
+	__le16	db_rs;
+	__le16	db_fmt;
+};
+
+struct sli4_rsp_cmn_create_queue_set_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le16	q_id;
+	__le16	num_q_allocated;
+};
+
+/**
+ * @brief Common Destroy CQ
+ */
+struct sli4_rqst_cmn_destroy_cq_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16	cq_id;
+	__le16	rsvd14;
+};
+
+struct sli4_rsp_cmn_destroy_cq_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+/**
+ * @brief COMMON_MODIFY_EQ_DELAY
+ *
+ * Modify the delay multiplier for EQs
+ */
+struct sli4_rqst_cmn_modify_eq_delay_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32	num_eq;
+	struct {
+		__le32	eq_id;
+		__le32	phase;
+		__le32	delay_multiplier;
+	} eq_delay_record[8];
+};
+
+struct sli4_rsp_cmn_modify_eq_delay_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+/**
+ * @brief COMMON_CREATE_EQ
+ *
+ * Create an Event Queue.
+ */
+enum {
+	/* DW5 */
+	CREATE_EQ_AUTOVALID		= (1 << 28),
+	CREATE_EQ_VALID			= (1 << 29),
+	CREATE_EQ_EQESZ			= (1 << 31),
+	/* DW6 */
+	CREATE_EQ_COUNT			= (7 << 26),
+	CREATE_EQ_ARM			= (1 << 31),
+	/* DW7 */
+	CREATE_EQ_DELAYMULTI_SHIFT	= 13,
+	CREATE_EQ_DELAYMULTI_MASK	= (0x3FF << CREATE_EQ_DELAYMULTI_SHIFT),
+	CREATE_EQ_DELAYMULTI		= (32 << CREATE_EQ_DELAYMULTI_SHIFT),
+};
+
+struct sli4_rqst_cmn_create_eq_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16	num_pages;
+	__le16	rsvd18;
+	__le32	dw5_flags;
+	__le32	dw6_flags;
+	__le32	dw7_delaymulti;
+	__le32	rsvd32;
+	struct sli4_dmaaddr_s page_address[8];
+};
+
+struct sli4_rsp_cmn_create_eq_s {
+	struct sli4_rsp_cmn_create_queue_s q_rsp;
+};
+
+/**
+ * EQ count.
+ */
+enum {
+	EQ_CNT_SHIFT	= 26,
+
+	EQ_CNT_256	= 0,
+	EQ_CNT_512	= 1,
+	EQ_CNT_1024	= 2,
+	EQ_CNT_2048	= 3,
+	EQ_CNT_4096	= 3,
+};
+#define EQ_CNT_VAL(type) (EQ_CNT_##type << EQ_CNT_SHIFT)
+
+#define SLI4_EQE_SIZE_4			0
+#define SLI4_EQE_SIZE_16		1
+
+/**
+ * @brief Common Destroy EQ
+ */
+struct sli4_rqst_cmn_destroy_eq_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		eq_id;
+	__le16		rsvd18;
+};
+
+struct sli4_rsp_cmn_destroy_eq_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+/**
+ * @brief COMMON_CREATE_MQ_EXT
+ *
+ * Create a Mailbox Queue; accommodate v0 and v1 forms.
+ */
+enum {
+	/* DW6W1 */
+	CREATE_MQEXT_RINGSIZE		= 0xf,
+	CREATE_MQEXT_CQID_SHIFT		= 6,
+	CREATE_MQEXT_CQIDV0_MASK	= 0xffc0,
+	/* DW7 */
+	CREATE_MQEXT_VAL		= (1 << 31),
+	/* DW8 */
+	CREATE_MQEXT_ACQV		= (1 << 0),
+	CREATE_MQEXT_ASYNC_CQIDV0	= 0x7fe,
+};
+
+struct sli4_rqst_cmn_create_mq_ext_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	__le16		cq_id_v1;
+	__le32		async_event_bitmap;
+	__le16		async_cq_id_v1;
+	__le16		dw6w1_flags;
+	__le32		dw7_val;
+	__le32		dw8_flags;
+	__le32		rsvd36;
+	struct sli4_dmaaddr_s page_phys_addr[0];
+};
+
+
+struct sli4_rsp_cmn_create_mq_ext_s {
+	struct sli4_rsp_cmn_create_queue_s q_rsp;
+};
+
+#define SLI4_MQE_SIZE_16		0x05
+#define SLI4_MQE_SIZE_32		0x06
+#define SLI4_MQE_SIZE_64		0x07
+#define SLI4_MQE_SIZE_128		0x08
+
+#define SLI4_ASYNC_EVT_LINK_STATE	(1 << 1)
+#define SLI4_ASYNC_EVT_FIP		(1 << 2)
+#define SLI4_ASYNC_EVT_GRP5		(1 << 5)
+#define SLI4_ASYNC_EVT_FC		(1 << 16)
+#define SLI4_ASYNC_EVT_SLI_PORT		(1 << 17)
+
+#define	SLI4_ASYNC_EVT_FC_ALL	\
+		(SLI4_ASYNC_EVT_LINK_STATE	| \
+		 SLI4_ASYNC_EVT_FIP		| \
+		 SLI4_ASYNC_EVT_GRP5		| \
+		 SLI4_ASYNC_EVT_FC		| \
+		 SLI4_ASYNC_EVT_SLI_PORT)
+
+/**
+ * @brief Common Destroy MQ
+ */
+struct sli4_rqst_cmn_destroy_mq_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		mq_id;
+	__le16		rsvd18;
+};
+
+struct sli4_rsp_cmn_destroy_mq_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+/**
+ * @brief COMMON_CREATE_CQ_V0
+ *
+ * Create a Completion Queue.
+ */
+struct sli4_rqst_cmn_create_cq_v0_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	__le16		rsvd18;
+	__le32		dw5_flags;
+	__le32		dw6_flags;
+	__le32		rsvd28;
+	__le32		rsvd32;
+	struct sli4_dmaaddr_s page_phys_addr[0];
+};
+
+/**
+ * @brief RQ_CREATE
+ *
+ * Create a Receive Queue for FC.
+ */
+enum {
+	SLI4_RQ_CREATE_DUA		= 0x1,
+	SLI4_RQ_CREATE_BQU		= 0x2,
+
+	SLI4_RQE_SIZE			= 8,
+	SLI4_RQE_SIZE_8			= 0x2,
+	SLI4_RQE_SIZE_16		= 0x3,
+	SLI4_RQE_SIZE_32		= 0x4,
+	SLI4_RQE_SIZE_64		= 0x5,
+	SLI4_RQE_SIZE_128		= 0x6,
+
+	SLI4_RQ_PAGE_SIZE_4096		= 0x1,
+	SLI4_RQ_PAGE_SIZE_8192		= 0x2,
+	SLI4_RQ_PAGE_SIZE_16384		= 0x4,
+	SLI4_RQ_PAGE_SIZE_32768		= 0x8,
+	SLI4_RQ_PAGE_SIZE_64536		= 0x10,
+
+	SLI4_RQ_CREATE_V0_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V0_MIN_BUF_SIZE	= 128,
+	SLI4_RQ_CREATE_V0_MAX_BUF_SIZE	= 2048,
+};
+
+struct sli4_rqst_rq_create_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	u8		dua_bqu_byte;
+	u8		ulp;
+	__le16		rsvd16;
+	u8		rqe_count_byte;
+	u8		rsvd19;
+	__le32		rsvd20;
+	__le16		buffer_size;
+	__le16		cq_id;
+	__le32		rsvd28;
+	struct sli4_dmaaddr_s page_phys_addr[SLI4_RQ_CREATE_V0_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create_s {
+	struct sli4_rsp_cmn_create_queue_s rsp;
+};
+
+/**
+ * @brief RQ_CREATE_V1
+ *
+ * Create a version 1 Receive Queue for FC.
+ */
+enum {
+	SLI4_RQ_CREATE_V1_DNB		= 0x80,
+	SLI4_RQ_CREATE_V1_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V1_MIN_BUF_SIZE	= 64,
+	SLI4_RQ_CREATE_V1_MAX_BUF_SIZE	= 2048,
+};
+
+
+struct sli4_rqst_rq_create_v1_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	u8		rsvd14;
+	u8		dim_dfd_dnb;
+	u8		page_size;
+	u8		rqe_size_byte;
+	__le16		rqe_count;
+	__le32		rsvd20;
+	__le16		rsvd24;
+	__le16		cq_id;
+	__le32		buffer_size;
+	struct sli4_dmaaddr_s page_phys_addr[SLI4_RQ_CREATE_V1_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create_v1_s {
+	struct sli4_rsp_cmn_create_queue_s rsp;
+};
+
+/**
+ * @brief RQ_CREATE_V2
+ *
+ * Create a version 2 Receive Queue for FC use.
+ */
+enum {
+	SLI4_RQCREATEV2_DNB = 0x80,
+};
+
+struct sli4_rqst_rq_create_v2_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	u8		rq_count;
+	u8		dim_dfd_dnb;
+	u8		page_size;
+	u8		rqe_size_byte;
+	__le16		rqe_count;
+	__le16		hdr_buffer_size;
+	__le16		payload_buffer_size;
+	__le16		base_cq_id;
+	__le16		rsvd26;
+	__le32		rsvd42;
+	struct sli4_dmaaddr_s page_phys_addr[0];
+};
+
+struct sli4_rsp_rq_create_v2_s {
+	struct sli4_rsp_cmn_create_queue_s rsp;
+};
+
+/**
+ * @brief RQ_DESTROY
+ *
+ * Destroy an FC Receive Queue.
+ */
+struct sli4_rqst_rq_destroy_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		rq_id;
+	__le16		rsvd;
+};
+
+struct sli4_rsp_rq_destroy_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+/**
+ * Code definitions applicable to all FC CQE types.
+ */
+#define SLI4_CQE_CODE_OFFSET		14
+
+#define SLI4_CQE_CODE_WORK_REQUEST_COMPLETION	0x01
+#define SLI4_CQE_CODE_RELEASE_WQE		0x02
+#define SLI4_CQE_CODE_RQ_ASYNC			0x04
+#define SLI4_CQE_CODE_XRI_ABORTED		0x05
+#define SLI4_CQE_CODE_RQ_COALESCING		0x06
+#define SLI4_CQE_CODE_RQ_CONSUMPTION		0x07
+#define SLI4_CQE_CODE_MEASUREMENT_REPORTING	0x08
+#define SLI4_CQE_CODE_RQ_ASYNC_V1		0x09
+#define SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD	0x0B
+#define SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA	0x0C
+
+/**
+ * @brief WQ_CREATE
+ *
+ * Create a Work Queue for FC.
+ */
+#define SLI4_WQ_CREATE_V0_MAX_PAGES	4
+struct sli4_rqst_wq_create_s {
+	struct sli4_rqst_hdr_s	hdr;
+	u8		num_pages;
+	u8		dua_byte;
+	__le16		cq_id;
+	struct sli4_dmaaddr_s page_phys_addr[SLI4_WQ_CREATE_V0_MAX_PAGES];
+	u8		bqu_byte;
+	u8		ulp;
+	__le16		rsvd;
+};
+
+struct sli4_rsp_wq_create_s {
+	struct sli4_rsp_cmn_create_queue_s q_rsp;
+};
+
+/**
+ * @brief WQ_CREATE_V1
+ *
+ * Create a version 1 Work Queue for FC use.
+ */
+#define SLI4_WQ_CREATE_V1_MAX_PAGES	8
+struct sli4_rqst_wq_create_v1_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		num_pages;
+	__le16		cq_id;
+	u8		page_size;
+	u8		wqe_size_byte;
+	__le16		wqe_count;
+	__le32		rsvd;
+	struct	sli4_dmaaddr_s page_phys_addr[SLI4_WQ_CREATE_V1_MAX_PAGES];
+};
+
+struct sli4_rsp_wq_create_v1_s {
+	struct sli4_rsp_cmn_create_queue_s rsp;
+};
+/**
+ * @brief WQ_DESTROY
+ *
+ * Destroy an FC Work Queue.
+ */
+struct sli4_rqst_wq_destroy_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		wq_id;
+	__le16		rsvd;
+};
+
+struct sli4_rsp_wq_destroy_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+/**
+ * @brief Asynchronous Event : Link State ACQE.
+ */
+enum {
+	LINK_TYPE_SHIFT		= 6,
+	LINK_TYPE_MASK		= 0x03 << LINK_TYPE_SHIFT,
+	LINK_TYPE_ETHERNET	= 0x00 << LINK_TYPE_SHIFT,
+	LINK_TYPE_FC		= 0x01 << LINK_TYPE_SHIFT,
+
+	PORT_SPEED_NO_LINK	= 0x0,
+	PORT_SPEED_10_MBPS	= 0x1,
+	PORT_SPEED_100_MBP	= 0x2,
+	PORT_SPEED_1_GBPS	= 0x3,
+	PORT_SPEED_10_GBPS	= 0x4,
+	PORT_SPEED_20_GBPS	= 0x5,
+	PORT_SPEED_25_GBPS	= 0x6,
+	PORT_SPEED_40_GBPS	= 0x7,
+	PORT_SPEED_100_GBPS	= 0x8,
+
+	PORT_LINK_STATUS_PHYSICAL_DOWN	= 0x0,
+	PORT_LINK_STATUS_PHYSICAL_UP	= 0x1,
+	PORT_LINK_STATUS_LOGICAL_DOWN	= 0x2,
+	PORT_LINK_STATUS_LOGICAL_UP	= 0x3,
+
+	PORT_DUPLEX_NONE		= 0x0,
+	PORT_DUPLEX_HWF			= 0x1,
+	PORT_DUPLEX_FULL		= 0x2,
+
+	/*Link Event Type*/
+	LINK_STATE_PHYSICAL		= 0x00,
+	LINK_STATE_LOGICAL		= 0x01,
+};
+
+struct sli4_link_state_s {
+	u8		link_num_type;
+	u8		port_link_status;
+	u8		port_duplex;
+	u8		port_speed;
+	u8		port_fault;
+	u8		rsvd5;
+	__le16		logical_link_speed;
+	__le32		event_tag;
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		flags;
+};
+
+/**
+ * @brief Asynchronous Event : FC Link Attention Event.
+ */
+enum {
+	LINK_ATTN_TYPE_LINK_UP		= 0x01,
+	LINK_ATTN_TYPE_LINK_DOWN	= 0x02,
+	LINK_ATTN_TYPE_NO_HARD_ALPA	= 0x03,
+
+	LINK_ATTN_P2P			= 0x01,
+	LINK_ATTN_FC_AL			= 0x02,
+	LINK_ATTN_INTERNAL_LOOPBACK	= 0x03,
+	LINK_ATTN_SERDES_LOOPBACK	= 0x04,
+
+	LINK_ATTN_1G			= 0x01,
+	LINK_ATTN_2G			= 0x02,
+	LINK_ATTN_4G			= 0x04,
+	LINK_ATTN_8G			= 0x08,
+	LINK_ATTN_10G			= 0x0a,
+	LINK_ATTN_16G			= 0x10,
+
+};
+
+struct sli4_link_attention_s {
+	u8		link_number;
+	u8		attn_type;
+	u8		topology;
+	u8		port_speed;
+	u8		port_fault;
+	u8		shared_link_status;
+	__le16		logical_link_speed;
+	__le32		event_tag;
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		flags;
+};
+
+/**
+ * @brief FC event types.
+ */
+enum {
+	FC_EVENT_LINK_ATTENTION		= 0x01,
+	FC_EVENT_SHARED_LINK_ATTENTION	= 0x02,
+};
+
+/**
+ * @brief FC WQ completion queue entry.
+ */
+enum {
+	SLI4_WCQE_XB = 0x10,
+	SLI4_WCQE_QX = 0x80,
+};
+
+struct sli4_fc_wcqe_s {
+	u8		hw_status;
+	u8		status;
+	__le16		request_tag;
+	__le32		wqe_specific_1;
+	__le32		wqe_specific_2;
+	u8		rsvd12;
+	u8		qx_byte;
+	u8		code;
+	u8		flags;
+};
+
+/**
+ * @brief FC WQ consumed CQ queue entry.
+ */
+struct sli4_fc_wqec_s {
+	__le32		rsvd0;
+	__le32		rsvd1;
+	__le16		wqe_index;
+	__le16		wq_id;
+	__le16		rsvd12;
+	u8		code;
+	u8		vld_byte;
+};
+
+/**
+ * @brief FC Completion Status Codes.
+ */
+#define SLI4_FC_WCQE_STATUS_SUCCESS		0x00
+#define SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE	0x01
+#define SLI4_FC_WCQE_STATUS_REMOTE_STOP		0x02
+#define SLI4_FC_WCQE_STATUS_LOCAL_REJECT	0x03
+#define SLI4_FC_WCQE_STATUS_NPORT_RJT		0x04
+#define SLI4_FC_WCQE_STATUS_FABRIC_RJT		0x05
+#define SLI4_FC_WCQE_STATUS_NPORT_BSY		0x06
+#define SLI4_FC_WCQE_STATUS_FABRIC_BSY		0x07
+#define SLI4_FC_WCQE_STATUS_LS_RJT		0x09
+#define SLI4_FC_WCQE_STATUS_CMD_REJECT		0x0b
+#define SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK	0x0c
+#define SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED	0x11
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED 0x12
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC	0x13
+#define SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE	0x14
+#define SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE	0x15
+#define SLI4_FC_WCQE_STATUS_DI_ERROR		0x16
+#define SLI4_FC_WCQE_STATUS_BA_RJT		0x17
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED 0x18
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC	0x19
+#define SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT	0x1a
+#define SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST	0x1b
+
+/**
+ * @brief DI_ERROR Extended Status
+ */
+#define SLI4_FC_DI_ERROR_GE	(1 << 0) /* Guard Error */
+#define SLI4_FC_DI_ERROR_AE	(1 << 1) /* Application Tag Error */
+#define SLI4_FC_DI_ERROR_RE	(1 << 2) /* Reference Tag Error */
+#define SLI4_FC_DI_ERROR_TDPV	(1 << 3) /* Total Data Placed Valid */
+#define SLI4_FC_DI_ERROR_UDB	(1 << 4) /* Uninitialized DIF Block */
+#define SLI4_FC_DI_ERROR_EDIR   (1 << 5) /* Error direction */
+
+/* WQE DIF field contents */
+#define SLI4_DIF_DISABLED		0
+#define SLI4_DIF_PASS_THROUGH		1
+#define SLI4_DIF_STRIP			2
+#define SLI4_DIF_INSERT			3
+
+/* driver generated status codes; better not overlap
+ * with chip's status codes!
+ */
+#define SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT  0xff
+#define SLI4_FC_WCQE_STATUS_SHUTDOWN		0xfe
+#define SLI4_FC_WCQE_STATUS_DISPATCH_ERROR	0xfd
+
+/**
+ * Work Queue Entry (WQE) types.
+ */
+#define SLI4_WQE_ABORT			0x0f
+#define SLI4_WQE_ELS_REQUEST64		0x8a
+#define SLI4_WQE_FCP_IBIDIR64		0xac
+#define SLI4_WQE_FCP_IREAD64		0x9a
+#define SLI4_WQE_FCP_IWRITE64		0x98
+#define SLI4_WQE_FCP_ICMND64		0x9c
+#define SLI4_WQE_FCP_TRECEIVE64		0xa1
+#define SLI4_WQE_FCP_CONT_TRECEIVE64	0xe5
+#define SLI4_WQE_FCP_TRSP64		0xa3
+#define SLI4_WQE_FCP_TSEND64		0x9f
+#define SLI4_WQE_GEN_REQUEST64		0xc2
+#define SLI4_WQE_SEND_FRAME		0xe1
+#define SLI4_WQE_XMIT_BCAST64		0X84
+#define SLI4_WQE_XMIT_BLS_RSP		0x97
+#define SLI4_WQE_ELS_RSP64		0x95
+#define SLI4_WQE_XMIT_SEQUENCE64	0x82
+#define SLI4_WQE_REQUEUE_XRI		0x93
+
+/**
+ * WQE command types.
+ */
+#define SLI4_CMD_FCP_IREAD64_WQE	0x00
+#define SLI4_CMD_FCP_ICMND64_WQE	0x00
+#define SLI4_CMD_FCP_IWRITE64_WQE	0x01
+#define SLI4_CMD_FCP_TRECEIVE64_WQE	0x02
+#define SLI4_CMD_FCP_TRSP64_WQE		0x03
+#define SLI4_CMD_FCP_TSEND64_WQE	0x07
+#define SLI4_CMD_GEN_REQUEST64_WQE	0x08
+#define SLI4_CMD_XMIT_BCAST64_WQE	0x08
+#define SLI4_CMD_XMIT_BLS_RSP64_WQE	0x08
+#define SLI4_CMD_ABORT_WQE		0x08
+#define SLI4_CMD_XMIT_SEQUENCE64_WQE	0x08
+#define SLI4_CMD_REQUEUE_XRI_WQE	0x0A
+#define SLI4_CMD_SEND_FRAME_WQE		0x0a
+
+#define SLI4_WQE_SIZE			0x05
+#define SLI4_WQE_EXT_SIZE		0x06
+
+#define SLI4_WQE_BYTES			(16 * sizeof(u32))
+#define SLI4_WQE_EXT_BYTES		(32 * sizeof(u32))
+
+/* Mask for ccp (CS_CTL) */
+#define SLI4_MASK_CCP	0xfe /* Upper 7 bits of CS_CTL is priority */
+
+/**
+ * @brief Generic WQE
+ */
+enum {
+	SLI4_GEN_WQE_EBDECNT	= (0xf << 0),	/* DW10W0 */
+	SLI4_GEN_WQE_LEN_LOC	= (0x3 << 7),
+	SLI4_GEN_WQE_QOSD	= (1 << 9),
+	SLI4_GEN_WQE_XBL	= (1 << 11),
+	SLI4_GEN_WQE_HLM	= (1 << 12),
+	SLI4_GEN_WQE_IOD	= (1 << 13),
+	SLI4_GEN_WQE_DBDE	= (1 << 14),
+	SLI4_GEN_WQE_WQES	= (1 << 15),
+
+	SLI4_GEN_WQE_PRI	= (0x7),
+	SLI4_GEN_WQE_PV		= (1 << 3),
+	SLI4_GEN_WQE_EAT	= (1 << 4),
+	SLI4_GEN_WQE_XC		= (1 << 5),
+	SLI4_GEN_WQE_CCPE	= (1 << 7),
+
+	SLI4_GEN_WQE_CMDTYPE	= (0xf),
+	SLI4_GEN_WQE_WQEC	= (1 << 7),
+};
+
+struct sli4_generic_wqe_s {
+	__le32		cmd_spec0_5[6];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+/**
+ * @brief WQE used to abort exchanges.
+ */
+enum {
+	SLI4_ABRT_WQE_IR	= 0x02,
+
+	SLI4_ABRT_WQE_EBDECNT	= (0xf << 0),	/* DW10W0 */
+	SLI4_ABRT_WQE_LEN_LOC	= (0x3 << 7),
+	SLI4_ABRT_WQE_QOSD	= (1 << 9),
+	SLI4_ABRT_WQE_XBL	= (1 << 11),
+	SLI4_ABRT_WQE_IOD	= (1 << 13),
+	SLI4_ABRT_WQE_DBDE	= (1 << 14),
+	SLI4_ABRT_WQE_WQES	= (1 << 15),
+
+	SLI4_ABRT_WQE_PRI	= (0x7),
+	SLI4_ABRT_WQE_PV	= (1 << 3),
+	SLI4_ABRT_WQE_EAT	= (1 << 4),
+	SLI4_ABRT_WQE_XC	= (1 << 5),
+	SLI4_ABRT_WQE_CCPE	= (1 << 7),
+
+	SLI4_ABRT_WQE_CMDTYPE	= (0xf),
+	SLI4_ABRT_WQE_WQEC	= (1 << 7),
+};
+
+struct sli4_abort_wqe_s {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		ext_t_tag;
+	u8		ia_ir_byte;
+	u8		criteria;
+	__le16		rsvd10;
+	__le32		ext_t_mask;
+	__le32		t_mask;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		t_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+#define SLI4_ABORT_CRITERIA_XRI_TAG		0x01
+#define SLI4_ABORT_CRITERIA_ABORT_TAG		0x02
+#define SLI4_ABORT_CRITERIA_REQUEST_TAG		0x03
+#define SLI4_ABORT_CRITERIA_EXT_ABORT_TAG	0x04
+
+enum sli4_abort_type_e {
+	SLI_ABORT_XRI,
+	SLI_ABORT_ABORT_ID,
+	SLI_ABORT_REQUEST_ID,
+	SLI_ABORT_MAX,		/* must be last */
+};
+
+/**
+ * @brief WQE used to create an ELS request.
+ */
+enum {
+	SLI4_REQ_WQE_QOSD	= 0x2,
+	SLI4_REQ_WQE_DBDE	= 0x40,
+	SLI4_REQ_WQE_XBL	= 0x8,
+	SLI4_REQ_WQE_XC		= 0x20,
+	SLI4_REQ_WQE_IOD	= 0x20,
+	SLI4_REQ_WQE_HLM	= 0x10,
+	SLI4_REQ_WQE_CCPE	= 0x80,
+	SLI4_REQ_WQE_EAT	= 0x10,
+	SLI4_REQ_WQE_WQES	= 0x80,
+	SLI4_REQ_WQE_PU_SHFT	= 4,
+	SLI4_REQ_WQE_CT_SHFT	= 2,
+	SLI4_REQ_WQE_CT		= 0xc,
+	SLI4_REQ_WQE_ELSID_SHFT	= 4,
+	SLI4_REQ_WQE_SP_SHFT	= 24,
+	SLI4_REQ_WQE_LEN_LOC_BIT1 = 0x80,
+	SLI4_REQ_WQE_LEN_LOC_BIT2 = 0x1,
+};
+
+struct sli4_els_request64_wqe_s {
+	struct sli4_bde_s	els_request_payload;
+	__le32		els_request_payload_length;
+	__le32		sid_sp_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		temporary_rpi;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_elsid_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	struct sli4_bde_s	els_response_payload_bde;
+	__le32		max_response_payload_length;
+};
+
+/**
+ * @brief WQE used to create an FCP initiator no data command.
+ */
+enum {
+	SLI4_ICMD_WQE_DBDE	= 0x40,
+	SLI4_ICMD_WQE_XBL	= 0x8,
+	SLI4_ICMD_WQE_XC	= 0x20,
+	SLI4_ICMD_WQE_IOD	= 0x20,
+	SLI4_ICMD_WQE_HLM	= 0x10,
+	SLI4_ICMD_WQE_CCPE	= 0x80,
+	SLI4_ICMD_WQE_EAT	= 0x10,
+	SLI4_ICMD_WQE_APPID	= 0x10,
+	SLI4_ICMD_WQE_WQES	= 0x80,
+	SLI4_ICMD_WQE_PU_SHFT	= 4,
+	SLI4_ICMD_WQE_CT_SHFT	= 2,
+	SLI4_ICMD_WQE_BS_SHFT	= 4,
+	SLI4_ICMD_WQE_LEN_LOC_BIT1 = 0x80,
+	SLI4_ICMD_WQE_LEN_LOC_BIT2 = 0x1,
+};
+
+struct sli4_fcp_icmnd64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le32		rsvd12;
+	__le32		remote_n_port_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/**
+ * @brief WQE used to create an FCP initiator read.
+ */
+enum {
+	SLI4_IR_WQE_DBDE	= 0x40,
+	SLI4_IR_WQE_XBL		= 0x8,
+	SLI4_IR_WQE_XC		= 0x20,
+	SLI4_IR_WQE_IOD		= 0x20,
+	SLI4_IR_WQE_HLM		= 0x10,
+	SLI4_IR_WQE_CCPE	= 0x80,
+	SLI4_IR_WQE_EAT		= 0x10,
+	SLI4_IR_WQE_APPID	= 0x10,
+	SLI4_IR_WQE_WQES	= 0x80,
+	SLI4_IR_WQE_PU_SHFT	= 4,
+	SLI4_IR_WQE_CT_SHFT	= 2,
+	SLI4_IR_WQE_BS_SHFT	= 4,
+	SLI4_IR_WQE_LEN_LOC_BIT1 = 0x80,
+	SLI4_IR_WQE_LEN_LOC_BIT2 = 0x1,
+};
+
+struct sli4_fcp_iread64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+
+	__le32		total_transfer_length;
+
+	__le32		remote_n_port_id_dword;
+
+	__le16		xri_tag;
+	__le16		context_tag;
+
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+
+	__le32		abort_tag;
+
+	__le16		request_tag;
+	__le16		rsvd34;
+
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+
+	__le32		rsvd44;
+	/* reserved if performance hints disabled */
+	struct sli4_bde_s	first_data_bde;
+};
+
+/**
+ * @brief WQE used to create an FCP initiator write.
+ */
+enum {
+	SLI4_IWR_WQE_DBDE	= 0x40,
+	SLI4_IWR_WQE_XBL	= 0x8,
+	SLI4_IWR_WQE_XC		= 0x20,
+	SLI4_IWR_WQE_IOD	= 0x20,
+	SLI4_IWR_WQE_HLM	= 0x10,
+	SLI4_IWR_WQE_DNRX	= 0x10,
+	SLI4_IWR_WQE_CCPE	= 0x80,
+	SLI4_IWR_WQE_EAT	= 0x10,
+	SLI4_IWR_WQE_APPID	= 0x10,
+	SLI4_IWR_WQE_WQES	= 0x80,
+	SLI4_IWR_WQE_PU_SHFT	= 4,
+	SLI4_IWR_WQE_CT_SHFT	= 2,
+	SLI4_IWR_WQE_BS_SHFT	= 4,
+	SLI4_IWR_WQE_LEN_LOC_BIT1 = 0x80,
+	SLI4_IWR_WQE_LEN_LOC_BIT2 = 0x1,
+};
+
+struct sli4_fcp_iwrite64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le16		total_transfer_length;
+	__le16		initial_transfer_length;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	struct sli4_bde_s	first_data_bde;
+};
+
+struct sli4_fcp_128byte_wqe_s {
+	u32 dw[32];
+};
+
+/**
+ * @brief WQE used to create an FCP target receive, and FCP target
+ * receive continue.
+ */
+enum {
+	SLI4_TRCV_WQE_DBDE	= 0x40,
+	SLI4_TRCV_WQE_XBL	= 0x8,
+	SLI4_TRCV_WQE_AR	= 0x8,
+	SLI4_TRCV_WQE_XC	= 0x20,
+	SLI4_TRCV_WQE_IOD	= 0x20,
+	SLI4_TRCV_WQE_HLM	= 0x10,
+	SLI4_TRCV_WQE_DNRX	= 0x10,
+	SLI4_TRCV_WQE_CCPE	= 0x80,
+	SLI4_TRCV_WQE_EAT	= 0x10,
+	SLI4_TRCV_WQE_APPID	= 0x10,
+	SLI4_TRCV_WQE_WQES	= 0x80,
+	SLI4_TRCV_WQE_PU_SHFT	= 4,
+	SLI4_TRCV_WQE_CT_SHFT	= 2,
+	SLI4_TRCV_WQE_BS_SHFT	= 4,
+	SLI4_TRCV_WQE_LEN_LOC_BIT2 = 0x1,
+};
+
+struct sli4_fcp_treceive64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	/**
+	 * DWord 5 can be the task retry identifier (HLM=0), the
+	 * remote N_Port ID (HLM=1), or the secondary XRI tag
+	 */
+	union {
+		__le16		sec_xri_tag;
+		__le16		rsvd;
+		__le32		dword;
+	} dword5;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_ar_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fcp_data_receive_length;
+	struct sli4_bde_s	first_data_bde; /* For performance hints */
+};
+
+/**
+ * @brief WQE used to create an FCP target response.
+ */
+enum {
+	SLI4_TRSP_WQE_AG	= 0x8,
+	SLI4_TRSP_WQE_DBDE	= 0x40,
+	SLI4_TRSP_WQE_XBL	= 0x8,
+	SLI4_TRSP_WQE_XC	= 0x20,
+	SLI4_TRSP_WQE_HLM	= 0x10,
+	SLI4_TRSP_WQE_DNRX	= 0x10,
+	SLI4_TRSP_WQE_CCPE	= 0x80,
+	SLI4_TRSP_WQE_EAT	= 0x10,
+	SLI4_TRSP_WQE_APPID	= 0x10,
+	SLI4_TRSP_WQE_WQES	= 0x80,
+};
+
+struct sli4_fcp_trsp64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le32		fcp_response_length;
+	__le32		rsvd12;
+	/**
+	 * DWord 5 can either be the task retry identifier (HLM=0) or
+	 * the remote N_Port ID (HLM=1)
+	 */
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_dnrx_byte;
+	u8		command;
+	u8		class_ag_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/**
+ * @brief WQE used to create an FCP target send (DATA IN).
+ */
+enum {
+	SLI4_TSEND_WQE_XBL	= 0x8,
+	SLI4_TSEND_WQE_DBDE	= 0x40,
+	SLI4_TSEND_WQE_IOD	= 0x20,
+	SLI4_TSEND_WQE_QOSD	= 0x2,
+	SLI4_TSEND_WQE_HLM	= 0x10,
+	SLI4_TSEND_WQE_PU_SHFT	= 4,
+	SLI4_TSEND_WQE_AR	= 0x8,
+	SLI4_TSEND_CT_SHFT	= 2,
+	SLI4_TSEND_BS_SHFT	= 4,
+	SLI4_TSEND_LEN_LOC_BIT2 = 0x1,
+	SLI4_TSEND_CCPE		= 0x80,
+	SLI4_TSEND_APPID_VALID	= 0x20,
+	SLI4_TSEND_WQES		= 0x80,
+	SLI4_TSEND_XC		= 0x20,
+	SLI4_TSEND_EAT		= 0x10,
+};
+
+struct sli4_fcp_tsend64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	/**
+	 * DWord 5 can either be the task retry identifier (HLM=0) or
+	 * the remote N_Port ID (HLM=1)
+	 */
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_byte;
+	u8		command;
+	u8		class_pu_ar_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		dw10byte0;
+	u8		ll_qd_xbl_hlm_iod_dbde;
+	u8		dw10byte2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		fcp_data_transmit_length;
+	struct sli4_bde_s	first_data_bde; /* For performance hints */
+};
+
+/**
+ * @brief WQE used to create a general request.
+ */
+enum {
+	SLI4_GEN_REQ64_WQE_XBL	= 0x8,
+	SLI4_GEN_REQ64_WQE_DBDE	= 0x40,
+	SLI4_GEN_REQ64_WQE_IOD	= 0x20,
+	SLI4_GEN_REQ64_WQE_QOSD	= 0x2,
+	SLI4_GEN_REQ64_WQE_HLM	= 0x10,
+	SLI4_GEN_REQ64_CT_SHFT	= 2,
+};
+
+struct sli4_gen_request64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le32		request_payload_length;
+	__le32		relative_offset;
+	u8		rsvd17;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		max_response_payload_length;
+};
+
+/**
+ * @brief WQE used to create a send frame request.
+ */
+enum {
+	SLI4_SF_WQE_DBDE	= 0x40,
+	SLI4_SF_PU		= 0x30,
+	SLI4_SF_CT		= 0xc,
+	SLI4_SF_QOSD		= 0x2,
+	SLI4_SF_LEN_LOC_BIT1	= 0x80,
+	SLI4_SF_LEN_LOC_BIT2	= 0x1,
+	SLI4_SF_XC		= 0x20,
+	SLI4_SF_XBL		= 0x8,
+};
+
+struct sli4_send_frame_wqe_s {
+	struct sli4_bde_s	bde;
+	__le32		frame_length;
+	__le32		fc_header_0_1[2];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		dw7flags0;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	u8		eof;
+	u8		sof;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fc_header_2_5[4];
+};
+
+/**
+ * @brief WQE used to create a transmit sequence.
+ */
+enum {
+	SLI4_SEQ_WQE_DBDE	= 0x4000,
+	SLI4_SEQ_WQE_XBL	= 0x800,
+	SLI4_SEQ_WQE_SI		= 0x4,
+	SLI4_SEQ_WQE_FT		= 0x8,
+	SLI4_SEQ_WQE_XO		= 0x40,
+	SLI4_SEQ_WQE_LS		= 0x80,
+	SLI4_SEQ_WQE_DIF	= 0x3,
+	SLI4_SEQ_WQE_BS		= 0x70,
+	SLI4_SEQ_WQE_PU		= 0x30,
+	SLI4_SEQ_WQE_HLM	= 0x1000,
+	SLI4_SEQ_WQE_IOD_SHIFT	= 13,
+	SLI4_SEQ_WQE_CT_SHIFT	= 2,
+	SLI4_SEQ_WQE_LEN_LOC_SHIFT = 7,
+};
+
+struct sli4_xmit_sequence64_wqe_s {
+	struct sli4_bde_s	bde;
+	__le32		remote_n_port_id_dword;
+	__le32		relative_offset;
+	u8		dw5flags0;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw7flags0;
+	u8		command;
+	u8		dw7flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	__le16		dw10w0;
+	u8		dw10flags0;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		sequence_payload_len;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/**
+ * @brief WQE used to unblock the specified XRI and release
+ * it to the SLI Port's free pool.
+ */
+enum {
+	SLI4_REQU_XRI_WQE_XC	= 0x20,
+	SLI4_REQU_XRI_WQE_QOSD	= 0x2,
+};
+
+struct sli4_requeue_xri_wqe_s {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		rsvd8;
+	__le32		rsvd12;
+	__le32		rsvd16;
+	__le32		rsvd20;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		rsvd32;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		flags0;
+	__le16		flags1;
+	__le16		flags2;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd42;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/**
+ * @brief WQE used to send a single frame sequence to a broadcast address.
+ * SLI4_BCAST_WQE_DBDE: dw10 bit 15
+ * SLI4_BCAST_WQE_CT: dw7 bits 3,4
+ * SLI4_BCAST_WQE_LEN_LOC: dw10 bits 8,9
+ * SLI4_BCAST_WQE_IOD: dw10 bit 13
+ */
+enum {
+	SLI4_BCAST_WQE_DBDE		= 0x4000,
+	SLI4_BCAST_WQE_CT_SHIFT		= 2,
+	SLI4_BCAST_WQE_LEN_LOC_SHIFT	= 7,
+	SLI4_BCAST_WQE_IOD_SHIFT	= 13,
+};
+
+struct sli4_xmit_bcast64_wqe_s {
+	struct sli4_bde_s	sequence_payload;
+	__le32		sequence_payload_length;
+	__le32		rsvd16;
+	u8		rsvd17;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		dw7flags0;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		temporary_rpi;
+	__le16		dw10w0;
+	u8		dw10flags1;
+	u8		ccp;
+	u8		dw11flags0;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd45;
+	__le32		rsvd46;
+	__le32		rsvd47;
+};
+
+/**
+ * @brief WQE used to create a BLS response.
+ * SLI4_BLS_RSP_WQE_AR : 6th dword, bit 31
+ * SLI4_BLS_RSP_WQE_CT:  8th dword, bits 3 and 4
+ * SLI4_BLS_RSP_WQE_QOSD:  dword 11, bit 10
+ * SLI4_BLS_RSP_WQE_HLM:  dword 11, bit 13
+ */
+enum {
+	SLI4_BLS_RSP_RID		= 0xffffff,
+	SLI4_BLS_RSP_WQE_AR		= 0x40000000,
+	SLI4_BLS_RSP_WQE_CT_SHFT	= 2,
+	SLI4_BLS_RSP_WQE_QOSD		= 0x2,
+	SLI4_BLS_RSP_WQE_HLM		= 0x10,
+};
+
+struct sli4_xmit_bls_rsp_wqe_s {
+	__le32		payload_word0;
+	__le16		rx_id;
+	__le16		ox_id;
+	__le16		high_seq_cnt;
+	__le16		low_seq_cnt;
+	__le32		rsvd12;
+	__le32		local_n_port_id_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw8flags0;
+	u8		command;
+	u8		dw8flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd38;
+	u8		dw11flags0;
+	u8		dw11flags1;
+	u8		dw11flags2;
+	u8		ccp;
+	u8		dw12flags0;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	u8		rsvd50;
+	u8		rsvd51;
+	__le32		rsvd52;
+	__le32		rsvd56;
+	__le32		rsvd60;
+};
+
+enum sli_bls_type_e {
+	SLI4_SLI_BLS_ACC,
+	SLI4_SLI_BLS_RJT,
+	SLI4_SLI_BLS_MAX
+};
+
+struct sli_bls_payload_s {
+	enum sli_bls_type_e	type;
+	__le16		ox_id;
+	__le16		rx_id;
+	union {
+		struct {
+			u8		seq_id_validity;
+			u8		seq_id_last;
+			u8		rsvd2;
+			u8		rsvd3;
+			u16		ox_id;
+			u16		rx_id;
+			__le16		low_seq_cnt;
+			__le16		high_seq_cnt;
+		} acc;
+		struct {
+			u8		vendor_unique;
+			u8		reason_explanation;
+			u8		reason_code;
+			u8		rsvd3;
+		} rjt;
+	} u;
+};
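+
+/*
+ * Illustrative sketch only (not part of the submitted driver): one way a
+ * caller might fill the accept form of the BLS payload for a BA_ACC. The
+ * helper name and the chosen field values are assumptions for illustration.
+ */
+static inline void
+sli_example_bls_acc_fill(struct sli_bls_payload_s *bls, u16 ox_id, u16 rx_id)
+{
+	bls->type = SLI4_SLI_BLS_ACC;
+	bls->ox_id = cpu_to_le16(ox_id);
+	bls->rx_id = cpu_to_le16(rx_id);
+	bls->u.acc.seq_id_validity = 0;	/* SEQ_ID not valid */
+	bls->u.acc.ox_id = ox_id;
+	bls->u.acc.rx_id = rx_id;
+	/* 0xffff in the sequence count fields covers the entire exchange */
+	bls->u.acc.low_seq_cnt = cpu_to_le16(0xffff);
+	bls->u.acc.high_seq_cnt = cpu_to_le16(0xffff);
+}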
+
+/**
+ * @brief WQE used to create an ELS response.
+ * flags2 bits: rsvd, qosd, rsvd, xbl, hlm, iod, dbde, wqes
+ * flags3 bits: pri:3, pv, eat, xc, rsvd, ccpe
+ */
+
+enum {
+	SLI4_ELS_SID		= 0xffffff,
+	SLI4_ELS_RID		= 0xffffff,
+	SLI4_ELS_DBDE		= 0x40,
+	SLI4_ELS_XBL		= 0x8,
+	SLI4_ELS_IOD		= 0x20,
+	SLI4_ELS_QOSD		= 0x2,
+	SLI4_ELS_XC		= 0x20,
+	SLI4_ELS_CT_OFFSET	= 0x2,
+	SLI4_ELS_SP		= 0x1000000,
+	SLI4_ELS_HLM		= 0x10,
+};
+
+struct sli4_xmit_els_rsp64_wqe_s {
+	struct sli4_bde_s	els_response_payload;
+	__le32		els_response_payload_length;
+	__le32		sid_dw;
+	__le32		rid_dw;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		ox_id;
+	u8		flags1;
+	u8		flags2;
+	u8		flags3;
+	u8		flags4;
+	u8		cmd_type_wqec;
+	u8		rsvd34;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	__le16		rsvd38;
+	u32	rsvd40;
+	u32	rsvd44;
+	u32	rsvd48;
+};
+
+/**
+ * @brief Local Reject Reason Codes.
+ */
+#define SLI4_FC_LOCAL_REJECT_MISSING_CONTINUE	0x01
+#define SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT	0x02
+#define SLI4_FC_LOCAL_REJECT_INTERNAL_ERROR	0x03
+#define SLI4_FC_LOCAL_REJECT_INVALID_RPI	0x04
+#define SLI4_FC_LOCAL_REJECT_NO_XRI		0x05
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_COMMAND	0x06
+#define SLI4_FC_LOCAL_REJECT_XCHG_DROPPED	0x07
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_FIELD	0x08
+#define SLI4_FC_LOCAL_REJECT_NO_ABORT_MATCH	0x0c
+#define SLI4_FC_LOCAL_REJECT_TX_DMA_FAILED	0x0d
+#define SLI4_FC_LOCAL_REJECT_RX_DMA_FAILED	0x0e
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_FRAME	0x0f
+#define SLI4_FC_LOCAL_REJECT_NO_RESOURCES	0x11
+#define SLI4_FC_LOCAL_REJECT_FCP_CONF_FAILURE	0x12
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_LENGTH	0x13
+#define SLI4_FC_LOCAL_REJECT_UNSUPPORTED_FEATURE 0x14
+#define SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS	0x15
+#define SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED	0x16
+#define SLI4_FC_LOCAL_REJECT_RCV_BUFFER_TIMEOUT	0x17
+#define SLI4_FC_LOCAL_REJECT_LOOP_OPEN_FAILURE	0x18
+#define SLI4_FC_LOCAL_REJECT_LINK_DOWN		0x1a
+#define SLI4_FC_LOCAL_REJECT_CORRUPTED_DATA	0x1b
+#define SLI4_FC_LOCAL_REJECT_CORRUPTED_RPI	0x1c
+#define SLI4_FC_LOCAL_REJECT_OUTOFORDER_DATA	0x1d
+#define SLI4_FC_LOCAL_REJECT_OUTOFORDER_ACK	0x1e
+#define SLI4_FC_LOCAL_REJECT_DUP_FRAME		0x1f
+#define SLI4_FC_LOCAL_REJECT_LINK_CONTROL_FRAME	0x20
+#define SLI4_FC_LOCAL_REJECT_BAD_HOST_ADDRESS	0x21
+#define SLI4_FC_LOCAL_REJECT_MISSING_HDR_BUFFER	0x23
+#define SLI4_FC_LOCAL_REJECT_MSEQ_CHAIN_CORRUPTED 0x24
+#define SLI4_FC_LOCAL_REJECT_ABORTMULT_REQUESTED 0x25
+#define SLI4_FC_LOCAL_REJECT_BUFFER_SHORTAGE	0x28
+#define SLI4_FC_LOCAL_REJECT_RCV_XRIBUF_WAITING	0x29
+#define SLI4_FC_LOCAL_REJECT_INVALID_VPI	0x2e
+#define SLI4_FC_LOCAL_REJECT_MISSING_XRIBUF	0x30
+#define SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET	0x40
+#define SLI4_FC_LOCAL_REJECT_MISSING_RELOFFSET	0x41
+#define SLI4_FC_LOCAL_REJECT_INSUFF_BUFFERSPACE	0x42
+#define SLI4_FC_LOCAL_REJECT_MISSING_SI		0x43
+#define SLI4_FC_LOCAL_REJECT_MISSING_ES		0x44
+#define SLI4_FC_LOCAL_REJECT_INCOMPLETE_XFER	0x45
+#define SLI4_FC_LOCAL_REJECT_SLER_FAILURE	0x46
+#define SLI4_FC_LOCAL_REJECT_SLER_CMD_RCV_FAILURE 0x47
+#define SLI4_FC_LOCAL_REJECT_SLER_REC_RJT_ERR	0x48
+#define SLI4_FC_LOCAL_REJECT_SLER_REC_SRR_RETRY_ERR 0x49
+#define SLI4_FC_LOCAL_REJECT_SLER_SRR_RJT_ERR	0x4a
+#define SLI4_FC_LOCAL_REJECT_SLER_RRQ_RJT_ERR	0x4c
+#define SLI4_FC_LOCAL_REJECT_SLER_RRQ_RETRY_ERR	0x4d
+#define SLI4_FC_LOCAL_REJECT_SLER_ABTS_ERR	0x4e
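+
+/*
+ * Illustrative sketch only (not from the submitted driver): a completion
+ * handler could group some of the local reject reasons above, for example
+ * to tell abort-related completions apart from other failures. The helper
+ * name and the grouping chosen here are assumptions for illustration.
+ */
+static inline bool
+sli_example_local_reject_is_abort(u32 reason)
+{
+	return reason == SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED ||
+	       reason == SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS ||
+	       reason == SLI4_FC_LOCAL_REJECT_ABORTMULT_REQUESTED;
+}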
+
+enum {
+	SLI4_RACQE_RQ_EL_INDX = 0xfff,
+	SLI4_RACQE_FCFI = 0x3f,
+	SLI4_RACQE_HDPL = 0x3f,
+	SLI4_RACQE_RQ_ID = 0xffc0,
+};
+
+struct sli4_fc_async_rcqe_s {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		fcfi_rq_id_word;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+struct sli4_fc_async_rcqe_v1_s {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	u8		fcfi_byte;
+	u8		rsvd5;
+	__le16		rsvd6;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+#define SLI4_FC_ASYNC_RQ_SUCCESS		0x10
+#define SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED	0x11
+#define SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED	0x12
+#define SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC	0x13
+#define SLI4_FC_ASYNC_RQ_DMA_FAILURE		0x14
+enum {
+	SLI4_RCQE_RQ_EL_INDX = 0xfff,
+};
+
+struct sli4_fc_coalescing_rcqe_s {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		rq_id;
+	__le16		sequence_reporting_placement_length;
+	__le16		rsvd14;
+	u8		code;
+	u8		vld_byte;
+};
+
+#define SLI4_FC_COALESCE_RQ_SUCCESS		0x10
+#define SLI4_FC_COALESCE_RQ_INSUFF_XRI_NEEDED	0x18
+/*
+ * @SLI4_OCQE_RQ_EL_INDX: bits 0 to 15 in word1
+ * @SLI4_OCQE_FCFI: bits 0 to 6 in dw1
+ * @SLI4_OCQE_OOX: bit 15 in dw1
+ * @SLI4_OCQE_AGXR: bit 16 in dw1
+ */
+enum {
+	SLI4_OCQE_RQ_EL_INDX = 0x7f,
+	SLI4_OCQE_FCFI = 0x3f,
+	SLI4_OCQE_OOX = (1 << 6),
+	SLI4_OCQE_AGXR = (1 << 7),
+	SLI4_OCQE_HDPL = 0x3f,
+};
+
+struct sli4_fc_optimized_write_cmd_cqe_s {
+	u8		rsvd0;
+	u8		status;
+	__le16		w1;
+	u8		flags0;
+	u8		flags1;
+	__le16		xri;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	__le16		rpi;
+	u8		code;
+	u8		hdpl_vld;
+};
+
+enum {
+	SLI4_OCQE_XB = (1 << 4),
+};
+
+struct sli4_fc_optimized_write_data_cqe_s {
+	u8		hw_status;
+	u8		status;
+	__le16		xri;
+	__le32		total_data_placed;
+	__le32		extended_status;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+struct sli4_fc_xri_aborted_cqe_s {
+	u8		rsvd0;
+	u8		status;
+	__le16		rsvd2;
+	__le32		extended_status;
+	__le16		xri;
+	__le16		remote_xid;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+#define SLI4_GENERIC_CONTEXT_RPI	0x0
+#define SLI4_GENERIC_CONTEXT_VPI	0x1
+#define SLI4_GENERIC_CONTEXT_VFI	0x2
+#define SLI4_GENERIC_CONTEXT_FCFI	0x3
+
+#define SLI4_GENERIC_CLASS_CLASS_2	0x1
+#define SLI4_GENERIC_CLASS_CLASS_3	0x2
+
+#define SLI4_ELS_REQUEST64_DIR_WRITE	0x0
+#define SLI4_ELS_REQUEST64_DIR_READ	0x1
+
+#define SLI4_ELS_REQUEST64_OTHER	0x0
+#define SLI4_ELS_REQUEST64_LOGO		0x1
+#define SLI4_ELS_REQUEST64_FDISC	0x2
+#define SLI4_ELS_REQUEST64_FLOGIN	0x3
+#define SLI4_ELS_REQUEST64_PLOGI	0x4
+
+#define SLI4_ELS_REQUEST64_CMD_GEN		0x08
+#define SLI4_ELS_REQUEST64_CMD_NON_FABRIC	0x0c
+#define SLI4_ELS_REQUEST64_CMD_FABRIC		0x0d
+
 #endif /* !_SLI4_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2019-10-23 21:55 ` [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
  2019-10-23 21:55 ` [PATCH 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-25 11:19   ` Daniel Wagner
  2019-10-23 21:55 ` [PATCH 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
                   ` (29 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds definitions for SLI-4 mailbox commands
and responses.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc_sli/sli4.h | 1996 ++++++++++++++++++++++++++++++++++++
 1 file changed, 1996 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index ebc6a67e9c8c..b36d67abf219 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -2264,4 +2264,2000 @@ struct sli4_fc_xri_aborted_cqe_s {
 #define SLI4_ELS_REQUEST64_CMD_NON_FABRIC	0x0c
 #define SLI4_ELS_REQUEST64_CMD_FABRIC		0x0d
 
+#define SLI_PAGE_SIZE		(1 << 12)	/* 4096 */
+#define SLI_SUB_PAGE_MASK	(SLI_PAGE_SIZE - 1)
+#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)
+
+#define SLI4_BMBX_TIMEOUT_MSEC		30000
+#define SLI4_FW_READY_TIMEOUT_MSEC	30000
+
+#define SLI4_BMBX_DELAY_US 1000 /* 1 ms */
+#define SLI4_INIT_PORT_DELAY_US 10000 /* 10 ms */
+
+static inline u32
+sli_page_count(size_t bytes, u32 page_size)
+{
+	u32	mask = page_size - 1;
+	u32	shift = 0;
+
+	switch (page_size) {
+	case 4096:
+		shift = 12;
+		break;
+	case 8192:
+		shift = 13;
+		break;
+	case 16384:
+		shift = 14;
+		break;
+	case 32768:
+		shift = 15;
+		break;
+	case 65536:
+		shift = 16;
+		break;
+	default:
+		return 0;
+	}
+
+	return (bytes + mask) >> shift;
+}
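+
+/*
+ * Worked example (illustration only): a 9000-byte buffer with a 4096-byte
+ * page size needs sli_page_count(9000, 4096) == 3 pages, and
+ * SLI_ROUND_PAGE(5000) rounds up to 8192. Unsupported page sizes make
+ * sli_page_count() return 0, which callers can treat as an error.
+ */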
+
+/*************************************************************************
+ * SLI-4 mailbox command formats and definitions
+ */
+
+struct sli4_mbox_command_header_s {
+	u8	resvd0;
+	u8	command;
+	__le16	status;	/* Port writes to indicate success/fail */
+};
+
+enum {
+	MBX_CMD_CONFIG_LINK	= 0x07,
+	MBX_CMD_DUMP		= 0x17,
+	MBX_CMD_DOWN_LINK	= 0x06,
+	MBX_CMD_INIT_LINK	= 0x05,
+	MBX_CMD_INIT_VFI	= 0xa3,
+	MBX_CMD_INIT_VPI	= 0xa4,
+	MBX_CMD_POST_XRI	= 0xa7,
+	MBX_CMD_RELEASE_XRI	= 0xac,
+	MBX_CMD_READ_CONFIG	= 0x0b,
+	MBX_CMD_READ_STATUS	= 0x0e,
+	MBX_CMD_READ_NVPARMS	= 0x02,
+	MBX_CMD_READ_REV	= 0x11,
+	MBX_CMD_READ_LNK_STAT	= 0x12,
+	MBX_CMD_READ_SPARM64	= 0x8d,
+	MBX_CMD_READ_TOPOLOGY	= 0x95,
+	MBX_CMD_REG_FCFI	= 0xa0,
+	MBX_CMD_REG_FCFI_MRQ	= 0xaf,
+	MBX_CMD_REG_RPI		= 0x93,
+	MBX_CMD_REG_RX_RQ	= 0xa6,
+	MBX_CMD_REG_VFI		= 0x9f,
+	MBX_CMD_REG_VPI		= 0x96,
+	MBX_CMD_RQST_FEATURES	= 0x9d,
+	MBX_CMD_SLI_CONFIG	= 0x9b,
+	MBX_CMD_UNREG_FCFI	= 0xa2,
+	MBX_CMD_UNREG_RPI	= 0x14,
+	MBX_CMD_UNREG_VFI	= 0xa1,
+	MBX_CMD_UNREG_VPI	= 0x97,
+	MBX_CMD_WRITE_NVPARMS	= 0x03,
+	MBX_CMD_CFG_AUTO_XFER_RDY = 0xad,
+
+	MBX_STATUS_SUCCESS	= 0x0000,
+	MBX_STATUS_FAILURE	= 0x0001,
+	MBX_STATUS_RPI_NOT_REG	= 0x1400,
+};
+
+/**
+ * @brief CONFIG_LINK
+ */
+enum {
+	SLI4_CFG_LINK_BBSCN = 0xf00,
+	SLI4_CFG_LINK_CSCN  = 0x1000,
+};
+
+struct sli4_cmd_config_link_s {
+	struct sli4_mbox_command_header_s	hdr;
+	u8		maxbbc;		/* Max buffer-to-buffer credit */
+	u8		rsvd5;
+	u8		rsvd6;
+	u8		rsvd7;
+	u8		alpa;
+	__le16		n_port_id;
+	u8		rsvd11;
+	__le32		rsvd12;
+	__le32		e_d_tov;
+	__le32		lp_tov;
+	__le32		r_a_tov;
+	__le32		r_t_tov;
+	__le32		al_tov;
+	__le32		rsvd36;
+	/*
+	 * Buffer-to-buffer state change number
+	 * Configure BBSCN
+	 */
+	__le32		bbscn_dword;
+};
+
+/**
+ * @brief DUMP Type 4
+ */
+enum {
+	SLI4_DUMP4_TYPE = 0xf,
+};
+
+#define SLI4_WKI_TAG_SAT_TEM 0x1040
+
+struct sli4_cmd_dump4_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		type_dword;
+	__le16		wki_selection;
+	__le16		rsvd10;
+	__le32		rsvd12;
+	__le32		returned_byte_cnt;
+	__le32		resp_data[59];
+};
+
+/* INIT_LINK - initialize the link for a FC port */
+#define FC_TOPOLOGY_FCAL	0
+#define FC_TOPOLOGY_P2P		1
+
+#define SLI4_INIT_LINK_F_LOOP_BACK	(1 << 0)
+#define SLI4_INIT_LINK_F_UNFAIR		(1 << 6)
+#define SLI4_INIT_LINK_F_NO_LIRP	(1 << 7)
+#define SLI4_INIT_LINK_F_LOOP_VALID_CHK	(1 << 8)
+#define SLI4_INIT_LINK_F_NO_LISA	(1 << 9)
+#define SLI4_INIT_LINK_F_FAIL_OVER	(1 << 10)
+#define SLI4_INIT_LINK_F_NO_AUTOSPEED	(1 << 11)
+#define SLI4_INIT_LINK_F_PICK_HI_ALPA	(1 << 15)
+
+#define SLI4_INIT_LINK_F_P2P_ONLY	1
+#define SLI4_INIT_LINK_F_FCAL_ONLY	2
+
+#define SLI4_INIT_LINK_F_FCAL_FAIL_OVER	0
+#define SLI4_INIT_LINK_F_P2P_FAIL_OVER	1
+
+enum {
+	SLI4_INIT_LINK_SEL_RESET_AL_PA = 0xff,
+	SLI4_INIT_LINK_FLAG_LOOPBACK = 0x1,
+	SLI4_INIT_LINK_FLAG_TOPOLOGY = 0x6,
+	SLI4_INIT_LINK_FLAG_UNFAIR   = 0x40,
+	SLI4_INIT_LINK_FLAG_SKIP_LIRP_LILP = 0x80,
+	SLI4_INIT_LINK_FLAG_LOOP_VALIDITY = 0x100,
+	SLI4_INIT_LINK_FLAG_SKIP_LISA = 0x200,
+	SLI4_INIT_LINK_FLAG_EN_TOPO_FAILOVER = 0x400,
+	SLI4_INIT_LINK_FLAG_FIXED_SPEED = 0x800,
+	SLI4_INIT_LINK_FLAG_SEL_HIGHTEST_AL_PA = 0x8000,
+};
+
+struct sli4_cmd_init_link_s {
+	struct sli4_mbox_command_header_s       hdr;
+	__le32	sel_reset_al_pa_dword;
+	__le32	flags0;
+	__le32	link_speed_sel_code;
+#define FC_LINK_SPEED_1G		1
+#define FC_LINK_SPEED_2G		2
+#define FC_LINK_SPEED_AUTO_1_2		3
+#define FC_LINK_SPEED_4G		4
+#define FC_LINK_SPEED_AUTO_4_1		5
+#define FC_LINK_SPEED_AUTO_4_2		6
+#define FC_LINK_SPEED_AUTO_4_2_1	7
+#define FC_LINK_SPEED_8G		8
+#define FC_LINK_SPEED_AUTO_8_1		9
+#define FC_LINK_SPEED_AUTO_8_2		10
+#define FC_LINK_SPEED_AUTO_8_2_1	11
+#define FC_LINK_SPEED_AUTO_8_4		12
+#define FC_LINK_SPEED_AUTO_8_4_1	13
+#define FC_LINK_SPEED_AUTO_8_4_2	14
+#define FC_LINK_SPEED_10G		16
+#define FC_LINK_SPEED_16G		17
+#define FC_LINK_SPEED_AUTO_16_8_4	18
+#define FC_LINK_SPEED_AUTO_16_8		19
+#define FC_LINK_SPEED_32G		20
+#define FC_LINK_SPEED_AUTO_32_16_8	21
+#define FC_LINK_SPEED_AUTO_32_16	22
+};
+
+/**
+ * @brief INIT_VFI - initialize the VFI resource
+ */
+enum {
+	SLI4_INIT_VFI_FLAG_VP = 0x1000,		/* DW1W1 */
+	SLI4_INIT_VFI_FLAG_VF = 0x2000,
+	SLI4_INIT_VFI_FLAG_VT = 0x4000,
+	SLI4_INIT_VFI_FLAG_VR = 0x8000,
+
+	SLI4_INIT_VFI_VFID	 = 0x1fff,	/* DW3W0 */
+	SLI4_INIT_VFI_PRI	 = 0xe000,
+
+	SLI4_INIT_VFI_HOP_COUNT = 0xff000000,	/* DW4 */
+};
+
+struct sli4_cmd_init_vfi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		vfi;
+	__le16		flags0_word;
+	__le16		fcfi;
+	__le16		vpi;
+	__le32		vf_id_pri_dword;
+	__le32		hop_cnt_dword;
+};
+
+/**
+ * @brief INIT_VPI - initialize the VPI resource
+ */
+struct sli4_cmd_init_vpi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/**
+ * @brief POST_XRI - post XRI resources to the SLI Port
+ */
+enum {
+	SLI4_POST_XRI_COUNT	= 0xfff,	/* DW1W1 */
+	SLI4_POST_XRI_FLAG_ENX	= 0x1000,
+	SLI4_POST_XRI_FLAG_DL	= 0x2000,
+	SLI4_POST_XRI_FLAG_DI	= 0x4000,
+	SLI4_POST_XRI_FLAG_VAL	= 0x8000,
+};
+
+struct sli4_cmd_post_xri_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		xri_base;
+	__le16		xri_count_flags;
+};
+
+/**
+ * @brief RELEASE_XRI - Release XRI resources from the SLI Port
+ */
+enum {
+	SLI4_RELEASE_XRI_REL_XRI_CNT	= 0x1f,	/* DW1W0 */
+	SLI4_RELEASE_XRI_COUNT		= 0x1f,	/* DW1W1 */
+};
+
+struct sli4_cmd_release_xri_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		rel_xri_count_word;
+	__le16		xri_count_word;
+
+	struct {
+		__le16	xri_tag0;
+		__le16	xri_tag1;
+	} xri_tbl[62];
+};
+
+/**
+ * @brief READ_CONFIG - read SLI port configuration parameters
+ */
+struct sli4_cmd_read_config_s {
+	struct sli4_mbox_command_header_s	hdr;
+};
+
+enum {
+	SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000,	/* DW1 */
+	SLI4_READ_CFG_RESP_TOPOLOGY = 0xff000000,	/* DW2 */
+};
+
+struct sli4_rsp_read_config_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		ext_dword;
+	__le32		topology_dword;
+	__le32		resvd8;
+	__le16		e_d_tov;
+	__le16		resvd14;
+	__le32		resvd16;
+	__le16		r_a_tov;
+	__le16		resvd22;
+	__le32		resvd24;
+	__le32		resvd28;
+	__le16		lmt;
+	__le16		resvd34;
+	__le32		resvd36;
+	__le32		resvd40;
+	__le16		xri_base;
+	__le16		xri_count;
+	__le16		rpi_base;
+	__le16		rpi_count;
+	__le16		vpi_base;
+	__le16		vpi_count;
+	__le16		vfi_base;
+	__le16		vfi_count;
+	__le16		resvd60;
+	__le16		fcfi_count;
+	__le16		rq_count;
+	__le16		eq_count;
+	__le16		wq_count;
+	__le16		cq_count;
+	__le32		pad[45];
+};
+
+#define SLI4_READ_CFG_TOPO_FC		0x1	/* FC topology unknown */
+#define SLI4_READ_CFG_TOPO_FC_DA	0x2	/* FC Direct Attach (non FC-AL) */
+#define SLI4_READ_CFG_TOPO_FC_AL	0x3	/* FC-AL topology */
+
+/**
+ * @brief READ_NVPARMS - read SLI port configuration parameters
+ */
+
+enum {
+	SLI4_READ_NVPARAMS_HARD_ALPA	  = 0xff,
+	SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00,
+};
+
+struct sli4_cmd_read_nvparms_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/**
+ * @brief WRITE_NVPARMS - write SLI port configuration parameters
+ */
+struct sli4_cmd_write_nvparms_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/**
+ * @brief READ_REV - read the Port revision levels
+ */
+enum {
+	SLI4_READ_REV_FLAG_SLI_LEVEL = 0xf,
+	SLI4_READ_REV_FLAG_FCOEM	= 0x10,
+	SLI4_READ_REV_FLAG_CEEV	= 0x60,
+	SLI4_READ_REV_FLAG_VPD	= 0x2000,
+
+	SLI4_READ_REV_AVAILABLE_LENGTH = 0xffffff,
+};
+
+struct sli4_cmd_read_rev_s {
+	struct sli4_mbox_command_header_s hdr;
+	__le16		resvd0;
+	__le16		flags0_word;
+	__le32		first_hw_rev;
+	__le32		second_hw_rev;
+	__le32		resvd12;
+	__le32		third_hw_rev;
+	u8		fc_ph_low;
+	u8		fc_ph_high;
+	u8		feature_level_low;
+	u8		feature_level_high;
+	__le32		resvd24;
+	__le32		first_fw_id;
+	u8		first_fw_name[16];
+	__le32		second_fw_id;
+	u8		second_fw_name[16];
+	__le32		rsvd18[30];
+	__le32		available_length_dword;
+	struct sli4_dmaaddr_s hostbuf;
+	__le32		returned_vpd_length;
+	__le32		actual_vpd_length;
+};
+
+/**
+ * @brief READ_SPARM64 - read the Port service parameters
+ */
+struct sli4_cmd_read_sparm64_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	struct sli4_bde_s	bde_64;
+	__le16		vpi;
+	__le16		resvd22;
+	__le16		port_name_start;
+	__le16		port_name_len;
+	__le16		node_name_start;
+	__le16		node_name_len;
+};
+
+#define SLI4_READ_SPARM64_VPI_DEFAULT	0
+#define SLI4_READ_SPARM64_VPI_SPECIAL	U16_MAX
+
+#define SLI4_READ_SPARM64_WWPN_OFFSET	(4 * sizeof(u32))
+#define SLI4_READ_SPARM64_WWNN_OFFSET	(SLI4_READ_SPARM64_WWPN_OFFSET \
+					+ sizeof(uint64_t))
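+
+/*
+ * Illustrative sketch only (not the driver's actual helper): after a
+ * READ_SPARM64 completes, the WWPN and WWNN can be copied out of the DMA
+ * buffer described by bde_64 using the offsets above. The helper name, and
+ * the assumption that <linux/string.h> is available for memcpy(), are for
+ * illustration.
+ */
+static inline void
+sli_example_sparm_get_names(const u8 *sparm_buf, u8 wwpn[8], u8 wwnn[8])
+{
+	memcpy(wwpn, sparm_buf + SLI4_READ_SPARM64_WWPN_OFFSET, 8);
+	memcpy(wwnn, sparm_buf + SLI4_READ_SPARM64_WWNN_OFFSET, 8);
+}
+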
+/**
+ * @brief READ_TOPOLOGY - read the link event information
+ */
+enum {
+	SLI4_READTOPO_ATTEN_TYPE	= 0xff,		/* DW2 */
+	SLI4_READTOPO_FLAG_IL		= 0x100,
+	SLI4_READTOPO_FLAG_PB_RECVD	= 0x200,
+
+	SLI4_READTOPO_LINKSTATE_RECV	= 0x3,
+	SLI4_READTOPO_LINKSTATE_TRANS	= 0xc,
+	SLI4_READTOPO_LINKSTATE_MACHINE	= 0xf0,
+	SLI4_READTOPO_LINKSTATE_SPEED	= 0xff00,
+	SLI4_READTOPO_LINKSTATE_TF	= 0x40000000,
+	SLI4_READTOPO_LINKSTATE_LU	= 0x80000000,
+
+	SLI4_READTOPO_SCN_BBSCN		= 0xf,		/* DW9W1B0 */
+	SLI4_READTOPO_SCN_CBBSCN	= 0xf0,
+
+	SLI4_READTOPO_R_T_TOV		= 0x1ff,	/* DW10WO */
+	SLI4_READTOPO_AL_TOV		= 0xf000,
+
+	SLI4_READTOPO_PB_FLAG		= 0x80,
+
+	SLI4_READTOPO_INIT_N_PORTID	= 0xffffff,
+};
+
+struct sli4_cmd_read_topology_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		event_tag;
+	__le32		dw2_attentype;
+	u8		topology;
+	u8		lip_type;
+	u8		lip_al_ps;
+	u8		al_pa_granted;
+	struct sli4_bde_s	bde_loop_map;
+	__le32		linkdown_state;
+	__le32		currlink_state;
+	u8		max_bbc;
+	u8		init_bbc;
+	u8		scn_flags;
+	u8		rsvd39;
+	__le16		dw10w0_al_rt_tov;
+	__le16		lp_tov;
+	u8		acquired_al_pa;
+	u8		pb_flags;
+	__le16		specified_al_pa;
+	__le32		dw12_init_n_port_id;
+};
+
+#define SLI4_MIN_LOOP_MAP_BYTES	128
+
+#define SLI4_READ_TOPOLOGY_LINK_UP	0x1
+#define SLI4_READ_TOPOLOGY_LINK_DOWN	0x2
+#define SLI4_READ_TOPOLOGY_LINK_NO_ALPA	0x3
+
+#define SLI4_READ_TOPOLOGY_UNKNOWN	0x0
+#define SLI4_READ_TOPOLOGY_NPORT	0x1
+#define SLI4_READ_TOPOLOGY_FC_AL	0x2
+
+#define SLI4_READ_TOPOLOGY_SPEED_NONE	0x00
+#define SLI4_READ_TOPOLOGY_SPEED_1G	0x04
+#define SLI4_READ_TOPOLOGY_SPEED_2G	0x08
+#define SLI4_READ_TOPOLOGY_SPEED_4G	0x10
+#define SLI4_READ_TOPOLOGY_SPEED_8G	0x20
+#define SLI4_READ_TOPOLOGY_SPEED_10G	0x40
+#define SLI4_READ_TOPOLOGY_SPEED_16G	0x80
+#define SLI4_READ_TOPOLOGY_SPEED_32G	0x90
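+
+/*
+ * Illustrative sketch only: decoding the link speed code from a
+ * READ_TOPOLOGY completion. The assumption that the speed code sits in the
+ * SLI4_READTOPO_LINKSTATE_SPEED bits of currlink_state and maps onto the
+ * SLI4_READ_TOPOLOGY_SPEED_* values above, and the helper name, are for
+ * illustration.
+ */
+static inline u32
+sli_example_topo_speed_code(struct sli4_cmd_read_topology_s *rt)
+{
+	return (le32_to_cpu(rt->currlink_state) &
+		SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
+}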
+
+/**
+ * @brief REG_FCFI - activate an FC Forwarder
+ */
+struct sli4_cmd_reg_fcfi_rq_cfg {
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+enum {
+	SLI4_REGFCFI_VLAN_TAG		= 0xfff,
+	SLI4_REGFCFI_VLANTAG_VALID	= 0x1000,
+};
+
+#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG	4
+struct sli4_cmd_reg_fcfi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+};
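+
+/*
+ * Illustrative sketch only: each rq_cfg entry pairs a mask with a match
+ * value for the R_CTL and TYPE fields of received frames, steering matching
+ * frames to the corresponding RQ. A filter that accepts FCP frames with any
+ * R_CTL might look like the following (values are examples; 0x08 is the
+ * FC-4 TYPE code for FCP):
+ *
+ *	struct sli4_cmd_reg_fcfi_rq_cfg cfg = {
+ *		.r_ctl_mask  = 0x00,	// don't care
+ *		.r_ctl_match = 0x00,
+ *		.type_mask   = 0xff,	// match the full TYPE field
+ *		.type_match  = 0x08,	// FCP
+ *	};
+ */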
+
+#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG	4
+#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ	32
+#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE		0
+#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE		1
+
+enum {
+	SLI4_REGFCFI_MRQ_VLAN_TAG	= 0xfff,
+	SLI4_REGFCFI_MRQ_VLANTAG_VALID	= 0x1000,
+	SLI4_REGFCFI_MRQ_MODE		= 0x2000,
+
+	SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS	= 0xff,
+	SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00,
+	SLI4_REGFCFI_MRQ_RQ_SEL_POLICY	= 0xf000,
+};
+
+struct sli4_cmd_reg_fcfi_mrq_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg
+				rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+	__le32		dw9_mrqflags;
+};
+
+/**
+ * @brief REG_RPI - register a Remote Port Indicator
+ */
+enum {
+	SLI4_REGRPI_REMOTE_N_PORTID	= 0xffffff,	/* DW2 */
+	SLI4_REGRPI_UPD			= 0x1000000,
+	SLI4_REGRPI_ETOW		= 0x8000000,
+	SLI4_REGRPI_TERP		= 0x20000000,
+	SLI4_REGRPI_CI			= 0x80000000,
+};
+
+struct sli4_cmd_reg_rpi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		rpi;
+	__le16		rsvd2;
+	__le32		dw2_rportid_flags;
+	struct sli4_bde_s	bde_64;
+	__le16		vpi;
+	__le16		rsvd26;
+};
+
+#define SLI4_REG_RPI_BUF_LEN			0x70
+
+/**
+ * @brief REG_VFI - register a Virtual Fabric Indicator
+ */
+enum {
+	SLI4_REGVFI_VP		= 0x1000,	/* DW1 */
+	SLI4_REGVFI_UPD		= 0x2000,
+
+	SLI4_REGVFI_LOCAL_N_PORTID = 0xffffff,	/* DW10 */
+};
+
+struct sli4_cmd_reg_vfi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		vfi;
+	__le16		dw0w1_flags;
+	__le16		fcfi;
+	__le16		vpi;			/* vp=TRUE */
+	u8		wwpn[8];
+	struct sli4_bde_s sparm;
+	__le32		e_d_tov;
+	__le32		r_a_tov;
+	__le32		dw10_lportid_flags;
+};
+
+/**
+ * @brief REG_VPI - register a Virtual Port Indicator
+ */
+enum {
+	SLI4_REGVPI_LOCAL_N_PORTID	= 0xffffff,
+	SLI4_REGVPI_UPD			= 0x1000000,
+};
+
+struct sli4_cmd_reg_vpi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		rsvd0;
+	__le32		dw2_lportid_flags;
+	u8		wwpn[8];
+	__le32		rsvd12;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/**
+ * @brief REQUEST_FEATURES - request / query SLI features
+ */
+enum {
+	SLI4_REQFEAT_QRY	= 0x1,		/* Dw1 */
+
+	SLI4_REQFEAT_IAAB	= (1 << 0),	/* DW2 & DW3 */
+	SLI4_REQFEAT_NPIV	= (1 << 1),
+	SLI4_REQFEAT_DIF	= (1 << 2),
+	SLI4_REQFEAT_VF		= (1 << 3),
+	SLI4_REQFEAT_FCPI	= (1 << 4),
+	SLI4_REQFEAT_FCPT	= (1 << 5),
+	SLI4_REQFEAT_FCPC	= (1 << 6),
+	SLI4_REQFEAT_RSVD	= (1 << 7),
+	SLI4_REQFEAT_RQD	= (1 << 8),
+	SLI4_REQFEAT_IAAR	= (1 << 9),
+	SLI4_REQFEAT_HLM	= (1 << 10),
+	SLI4_REQFEAT_PERFH	= (1 << 11),
+	SLI4_REQFEAT_RXSEQ	= (1 << 12),
+	SLI4_REQFEAT_RXRI	= (1 << 13),
+	SLI4_REQFEAT_DCL2	= (1 << 14),
+	SLI4_REQFEAT_RSCO	= (1 << 15),
+	SLI4_REQFEAT_MRQP	= (1 << 16),
+};
+
+struct sli4_cmd_request_features_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		dw1_qry;
+	__le32		cmd;
+	__le32		resp;
+};
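+
+/*
+ * Illustrative sketch only (not the driver's actual code): building a
+ * REQUEST_FEATURES payload that asks for target-mode FCP and multi-RQ
+ * pairs. The helper name, the feature choice, and the assumption that the
+ * caller zeroed the mailbox buffer are for illustration.
+ */
+static inline void
+sli_example_build_request_features(struct sli4_cmd_request_features_s *req)
+{
+	req->hdr.command = MBX_CMD_RQST_FEATURES;
+	req->dw1_qry = 0;		/* 0 = set; SLI4_REQFEAT_QRY = query */
+	req->cmd = cpu_to_le32(SLI4_REQFEAT_FCPT | SLI4_REQFEAT_MRQP);
+}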
+
+/**
+ * @brief SLI_CONFIG - submit a configuration command to Port
+ *
+ * Command is either embedded as part of the payload (embed) or located
+ * in a separate memory buffer (mem)
+ */
+enum {
+	SLI4_SLICONF_EMB		= 0x1,		/* DW1 */
+	SLI4_SLICONF_PMDCMD_SHIFT	= 3,
+	SLI4_SLICONF_PMDCMD_MASK	= 0x1F << SLI4_SLICONF_PMDCMD_SHIFT,
+	SLI4_SLICONF_PMDCMD_VAL_1	= 1 << SLI4_SLICONF_PMDCMD_SHIFT,
+	SLI4_SLICONF_PMDCNT		= 0xf8,
+
+	SLI4_SLICONFIG_PMD_LEN	= 0x00ffffff,	/* Config PMD length */
+};
+
+struct sli4_cmd_sli_config_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		dw1_flags;
+	__le32		payload_len;
+	__le32		rsvd12[3];
+	union {
+		u8 embed[58 * sizeof(u32)];
+		struct sli4_bufptr_s mem;
+	} payload;
+};
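+
+/*
+ * Illustrative sketch only: a small request can be embedded directly in the
+ * SLI_CONFIG payload by setting the EMB bit, while larger requests would
+ * clear EMB and describe an external buffer through payload.mem. The helper
+ * name and the zeroed-buffer assumption are for illustration.
+ */
+static inline void
+sli_example_sli_config_embed(struct sli4_cmd_sli_config_s *cfg, u32 pyld_len)
+{
+	cfg->hdr.command = MBX_CMD_SLI_CONFIG;
+	cfg->dw1_flags = cpu_to_le32(SLI4_SLICONF_EMB);
+	cfg->payload_len = cpu_to_le32(pyld_len);
+	/* the embedded request itself is then built in cfg->payload.embed */
+}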
+
+/**
+ * @brief READ_STATUS - read tx/rx status of a particular port
+ *
+ */
+enum {
+	SLI4_READSTATUS_CLEAR_COUNTERS	= 0x1,	/* DW1 */
+};
+
+struct sli4_cmd_read_status_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		dw1_flags;
+	__le32		rsvd4;
+	__le32		trans_kbyte_cnt;
+	__le32		recv_kbyte_cnt;
+	__le32		trans_frame_cnt;
+	__le32		recv_frame_cnt;
+	__le32		trans_seq_cnt;
+	__le32		recv_seq_cnt;
+	__le32		tot_exchanges_orig;
+	__le32		tot_exchanges_resp;
+	__le32		recv_p_bsy_cnt;
+	__le32		recv_f_bsy_cnt;
+	__le32		no_rq_buf_dropped_frames_cnt;
+	__le32		empty_rq_timeout_cnt;
+	__le32		no_xri_dropped_frames_cnt;
+	__le32		empty_xri_pool_cnt;
+};
+
+/**
+ * @brief READ_LNK_STAT - read link status of a particular port
+ *
+ */
+enum {
+	SLI4_READ_LNKSTAT_REC	= (1 << 0),
+	SLI4_READ_LNKSTAT_GEC	= (1 << 1),
+	SLI4_READ_LNKSTAT_W02OF	= (1 << 2),
+	SLI4_READ_LNKSTAT_W03OF	= (1 << 3),
+	SLI4_READ_LNKSTAT_W04OF	= (1 << 4),
+	SLI4_READ_LNKSTAT_W05OF	= (1 << 5),
+	SLI4_READ_LNKSTAT_W06OF	= (1 << 6),
+	SLI4_READ_LNKSTAT_W07OF	= (1 << 7),
+	SLI4_READ_LNKSTAT_W08OF	= (1 << 8),
+	SLI4_READ_LNKSTAT_W09OF	= (1 << 9),
+	SLI4_READ_LNKSTAT_W10OF = (1 << 10),
+	SLI4_READ_LNKSTAT_W11OF = (1 << 11),
+	SLI4_READ_LNKSTAT_W12OF	= (1 << 12),
+	SLI4_READ_LNKSTAT_W13OF	= (1 << 13),
+	SLI4_READ_LNKSTAT_W14OF	= (1 << 14),
+	SLI4_READ_LNKSTAT_W15OF	= (1 << 15),
+	SLI4_READ_LNKSTAT_W16OF	= (1 << 16),
+	SLI4_READ_LNKSTAT_W17OF	= (1 << 17),
+	SLI4_READ_LNKSTAT_W18OF	= (1 << 18),
+	SLI4_READ_LNKSTAT_W19OF	= (1 << 19),
+	SLI4_READ_LNKSTAT_W20OF	= (1 << 20),
+	SLI4_READ_LNKSTAT_W21OF	= (1 << 21),
+	SLI4_READ_LNKSTAT_CLRC	= (1 << 30),
+	SLI4_READ_LNKSTAT_CLOF	= (1 << 31),
+};
+
+struct sli4_cmd_read_link_stats_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32	dw1_flags;
+	__le32	linkfail_errcnt;
+	__le32	losssync_errcnt;
+	__le32	losssignal_errcnt;
+	__le32	primseq_errcnt;
+	__le32	inval_txword_errcnt;
+	__le32	crc_errcnt;
+	__le32	primseq_eventtimeout_cnt;
+	__le32	elastic_bufoverrun_errcnt;
+	__le32	arbit_fc_al_timeout_cnt;
+	__le32	adv_rx_buftor_to_buf_credit;
+	__le32	curr_rx_buf_to_buf_credit;
+	__le32	adv_tx_buf_to_buf_credit;
+	__le32	curr_tx_buf_to_buf_credit;
+	__le32	rx_eofa_cnt;
+	__le32	rx_eofdti_cnt;
+	__le32	rx_eofni_cnt;
+	__le32	rx_soff_cnt;
+	__le32	rx_dropped_no_aer_cnt;
+	__le32	rx_dropped_no_avail_rpi_rescnt;
+	__le32	rx_dropped_no_avail_xri_rescnt;
+};
+
+/**
+ * @brief Format a WQE with WQ_ID Association performance hint
+ *
+ * @par Description
+ * PHWQ works by over-writing part of Word 10 in the WQE with the WQ ID.
+ *
+ * @param entry Pointer to the WQE.
+ * @param q_id Queue ID.
+ *
+ * @return None.
+ */
+static inline void
+sli_set_wq_id_association(void *entry, u16 q_id)
+{
+	u32 *wqe = entry;
+
+	/*
+	 * Set Word 10, bit 0 to zero
+	 * Set Word 10, bits 15:1 to the WQ ID
+	 */
+	wqe[10] &= cpu_to_le32(~0xffff);
+	wqe[10] |= cpu_to_le32(q_id << 1);
+}
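+
+/*
+ * Illustrative usage sketch only: with the PHWQ performance hint enabled, a
+ * driver would stamp each WQE with the ID of the WQ it is about to be
+ * posted to, e.g. just before copying the WQE into the queue. The names
+ * phwq_enabled, wqe_buf and wq_id are assumptions for illustration:
+ *
+ *	if (phwq_enabled)
+ *		sli_set_wq_id_association(wqe_buf, wq_id);
+ */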
+
+/**
+ * @brief UNREG_FCFI - unregister an FCFI
+ */
+struct sli4_cmd_unreg_fcfi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		rsvd0;
+	__le16		fcfi;
+	__le16		rsvd6;
+};
+
+/**
+ * @brief UNREG_RPI - unregister one or more RPI
+ */
+enum {
+	UNREG_RPI_DP		= 0x2000,
+	UNREG_RPI_II_SHIFT	= 14,
+	UNREG_RPI_II_MASK	= 0x03 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_RPI	= 0x00 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_VPI	= 0x01 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_VFI	= 0x02 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_FCFI	= 0x03 << UNREG_RPI_II_SHIFT,
+
+	UNREG_RPI_DEST_N_PORTID_MASK = 0x00ffffff,
+};
+
+struct sli4_cmd_unreg_rpi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le16		index;
+	__le16		dw1w1_flags;
+	__le32		dw2_dest_n_portid;
+};
+
+/**
+ * @brief UNREG_VFI - unregister one or more VFI
+ */
+enum {
+	UNREG_VFI_II_SHIFT	= 14,
+	UNREG_VFI_II_MASK	= 0x03 << UNREG_VFI_II_SHIFT,
+	UNREG_VFI_II_VFI	= 0x00 << UNREG_VFI_II_SHIFT,
+	UNREG_VFI_II_FCFI	= 0x03 << UNREG_VFI_II_SHIFT,
+};
+
+struct sli4_cmd_unreg_vfi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2_flags;
+};
+
+enum sli4_unreg_type_e {
+	SLI4_UNREG_TYPE_PORT,
+	SLI4_UNREG_TYPE_DOMAIN,
+	SLI4_UNREG_TYPE_FCF,
+	SLI4_UNREG_TYPE_ALL
+};
+
+/**
+ * @brief UNREG_VPI - unregister one or more VPI
+ */
+enum {
+	UNREG_VPI_II_SHIFT	= 14,
+	UNREG_VPI_II_MASK	= 0x03 << UNREG_VPI_II_SHIFT,
+	UNREG_VPI_II_VPI	= 0x00 << UNREG_VPI_II_SHIFT,
+	UNREG_VPI_II_VFI	= 0x02 << UNREG_VPI_II_SHIFT,
+	UNREG_VPI_II_FCFI	= 0x03 << UNREG_VPI_II_SHIFT,
+};
+
+struct sli4_cmd_unreg_vpi_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2w0_flags;
+};
+
+/**
+ * @brief AUTO_XFER_RDY - Configure the auto-generate XFER-RDY feature.
+ */
+struct sli4_cmd_config_auto_xfer_rdy_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+};
+
+#define SLI4_CONFIG_AUTO_XFERRDY_BLKSIZE	0xffff
+
+struct sli4_cmd_config_auto_xfer_rdy_hp_s {
+	struct sli4_mbox_command_header_s	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+	__le32		dw3_esoc_flags;
+	__le16		block_size;
+	__le16		rsvd14;
+};
+
+/*************************************************************************
+ * SLI-4 common configuration command formats and definitions
+ */
+
+#define SLI4_CFG_STATUS_SUCCESS			0x00
+#define SLI4_CFG_STATUS_FAILED			0x01
+#define SLI4_CFG_STATUS_ILLEGAL_REQUEST		0x02
+#define SLI4_CFG_STATUS_ILLEGAL_FIELD		0x03
+
+#define SLI4_MGMT_STATUS_FLASHROM_READ_FAILED	0xcb
+
+#define SLI4_CFG_ADD_STATUS_NO_STATUS		0x00
+#define SLI4_CFG_ADD_STATUS_INVALID_OPCODE	0x1e
+
+/**
+ * Subsystem values.
+ */
+#define SLI4_SUBSYSTEM_COMMON			0x01
+#define SLI4_SUBSYSTEM_LOWLEVEL			0x0B
+#define SLI4_SUBSYSTEM_FC			0x0c
+#define SLI4_SUBSYSTEM_DMTF			0x11
+
+#define	SLI4_OPC_LOWLEVEL_SET_WATCHDOG		0x36
+
+/**
+ * Common opcode (OPC) values.
+ */
+enum {
+	CMN_FUNCTION_RESET	= 0x3d,
+	CMN_CREATE_CQ		= 0x0c,
+	CMN_CREATE_CQ_SET	= 0x1d,
+	CMN_DESTROY_CQ		= 0x36,
+	CMN_MODIFY_EQ_DELAY	= 0x29,
+	CMN_CREATE_EQ		= 0x0d,
+	CMN_DESTROY_EQ		= 0x37,
+	CMN_CREATE_MQ_EXT	= 0x5a,
+	CMN_DESTROY_MQ		= 0x35,
+	CMN_GET_CNTL_ATTRIBUTES	= 0x20,
+	CMN_NOP			= 0x21,
+	CMN_GET_RSC_EXTENT_INFO = 0x9a,
+	CMN_GET_SLI4_PARAMS	= 0xb5,
+	CMN_QUERY_FW_CONFIG	= 0x3a,
+	CMN_GET_PORT_NAME	= 0x4d,
+
+	CMN_WRITE_FLASHROM	= 0x07,
+	/* TRANSCEIVER Data */
+	CMN_READ_TRANS_DATA	= 0x49,
+	CMN_GET_CNTL_ADDL_ATTRS = 0x79,
+	CMN_GET_FUNCTION_CFG	= 0xa0,
+	CMN_GET_PROFILE_CFG	= 0xa4,
+	CMN_SET_PROFILE_CFG	= 0xa5,
+	CMN_GET_PROFILE_LIST	= 0xa6,
+	CMN_GET_ACTIVE_PROFILE	= 0xa7,
+	CMN_SET_ACTIVE_PROFILE	= 0xa8,
+	CMN_READ_OBJECT		= 0xab,
+	CMN_WRITE_OBJECT	= 0xac,
+	CMN_DELETE_OBJECT	= 0xae,
+	CMN_READ_OBJECT_LIST	= 0xad,
+	CMN_SET_DUMP_LOCATION	= 0xb8,
+	CMN_SET_FEATURES	= 0xbf,
+	CMN_GET_RECFG_LINK_INFO = 0xc9,
+	CMN_SET_RECNG_LINK_ID	= 0xca,
+};
+
+/**
+ * DMTF opcode (OPC) values.
+ */
+#define DMTF_EXEC_CLP_CMD 0x01
+
+/**
+ * @brief COMMON_FUNCTION_RESET
+ *
+ * Resets the Port, returning it to a power-on state. This configuration
+ * command does not have a payload and should set/expect the lengths to
+ * be zero.
+ */
+struct sli4_rqst_cmn_function_reset_s {
+	struct sli4_rqst_hdr_s	hdr;
+};
+
+struct sli4_rsp_cmn_function_reset_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+
+/**
+ * @brief COMMON_GET_CNTL_ATTRIBUTES
+ *
+ * Query for information about the SLI Port
+ */
+enum {
+	SLI4_CNTL_ATTR_PORTNUM	= 0x3f,		/* Port num and type */
+	SLI4_CNTL_ATTR_PORTTYPE	= 0xc0,
+};
+
+struct sli4_rsp_cmn_get_cntl_attributes_s {
+	struct sli4_rsp_hdr_s	hdr;
+	u8		version_str[32];
+	u8		manufacturer_name[32];
+	__le32		supported_modes;
+	u8		eprom_version_lo;
+	u8		eprom_version_hi;
+	__le16		rsvd17;
+	__le32		mbx_ds_version;
+	__le32		ep_fw_ds_version;
+	u8		ncsi_version_str[12];
+	__le32		def_extended_timeout;
+	u8		model_number[32];
+	u8		description[64];
+	u8		serial_number[32];
+	u8		ip_version_str[32];
+	u8		fw_version_str[32];
+	u8		bios_version_str[32];
+	u8		redboot_version_str[32];
+	u8		driver_version_str[32];
+	u8		fw_on_flash_version_str[32];
+	__le32		functionalities_supported;
+	__le16		max_cdb_length;
+	u8		asic_revision;
+	u8		generational_guid0;
+	__le32		generational_guid1_12[3];
+	__le16		generational_guid13_14;
+	u8		generational_guid15;
+	u8		hba_port_count;
+	__le16		default_link_down_timeout;
+	u8		iscsi_version_min_max;
+	u8		multifunctional_device;
+	u8		cache_valid;
+	u8		hba_status;
+	u8		max_domains_supported;
+	u8		port_num_type_flags;
+	__le32		firmware_post_status;
+	__le32		hba_mtu;
+	u8		iscsi_features;
+	u8		rsvd121[3];
+	__le16		pci_vendor_id;
+	__le16		pci_device_id;
+	__le16		pci_sub_vendor_id;
+	__le16		pci_sub_system_id;
+	u8		pci_bus_number;
+	u8		pci_device_number;
+	u8		pci_function_number;
+	u8		interface_type;
+	__le64		unique_identifier;
+	u8		number_of_netfilters;
+	u8		rsvd122[3];
+};
+
+/**
+ * @brief COMMON_GET_CNTL_ATTRIBUTES
+ *
+ * This command queries the controller information from the Flash ROM.
+ */
+struct sli4_rqst_cmn_get_cntl_addl_attributes_s {
+	struct sli4_rqst_hdr_s	hdr;
+};
+
+struct sli4_rsp_cmn_get_cntl_addl_attributes_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le16		ipl_file_number;
+	u8		ipl_file_version;
+	u8		rsvd4;
+	u8		on_die_temperature;
+	u8		rsvd5[3];
+	__le32		driver_advanced_features_supported;
+	__le32		rsvd7[4];
+	char		universal_bios_version[32];
+	char		x86_bios_version[32];
+	char		efi_bios_version[32];
+	char		fcode_version[32];
+	char		uefi_bios_version[32];
+	char		uefi_nic_version[32];
+	char		uefi_fcode_version[32];
+	char		uefi_iscsi_version[32];
+	char		iscsi_x86_bios_version[32];
+	char		pxe_x86_bios_version[32];
+	u8		default_wwpn[8];
+	u8		ext_phy_version[32];
+	u8		fc_universal_bios_version[32];
+	u8		fc_x86_bios_version[32];
+	u8		fc_efi_bios_version[32];
+	u8		fc_fcode_version[32];
+	u8		ext_phy_crc_label[8];
+	u8		ipl_file_name[16];
+	u8		rsvd139[72];
+};
+
+/**
+ * @brief COMMON_NOP
+ *
+ * This command does not do anything; it only returns
+ * the payload in the completion.
+ */
+struct sli4_rqst_cmn_nop_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32			context[2];
+};
+
+struct sli4_rsp_cmn_nop_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32			context[2];
+};
+
+/**
+ * @brief COMMON_GET_RESOURCE_EXTENT_INFO
+ */
+struct sli4_rqst_cmn_get_resource_extent_info_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16	resource_type;
+	__le16	rsvd16;
+};
+
+#define SLI4_RSC_TYPE_ISCSI_INI_XRI	0x0c
+#define SLI4_RSC_TYPE_VFI		0x20
+#define SLI4_RSC_TYPE_VPI		0x21
+#define SLI4_RSC_TYPE_RPI		0x22
+#define SLI4_RSC_TYPE_XRI		0x23
+
+struct sli4_rsp_cmn_get_resource_extent_info_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le16	resource_extent_count;
+	__le16	resource_extent_size;
+};
+
+#define SLI4_128BYTE_WQE_SUPPORT	0x02
+/**
+ * @brief COMMON_GET_SLI4_PARAMETERS
+ */
+
+#define GET_Q_CNT_METHOD(val)\
+	(((val) & RSP_GET_PARAM_Q_CNT_MTHD_MASK)\
+	>> RSP_GET_PARAM_Q_CNT_MTHD_SHFT)
+#define GET_Q_CREATE_VERSION(val)\
+	(((val) & RSP_GET_PARAM_QV_MASK)\
+	>> RSP_GET_PARAM_QV_SHIFT)
+
+enum {
+	/*GENERIC*/
+	RSP_GET_PARAM_Q_CNT_MTHD_SHFT	= 24,
+	RSP_GET_PARAM_Q_CNT_MTHD_MASK	= (0xF << 24),
+	RSP_GET_PARAM_QV_SHIFT		= 14,
+	RSP_GET_PARAM_QV_MASK		= (3 << 14),
+
+	/* DW4 */
+	RSP_GET_PARAM_PROTO_TYPE_MASK	= 0xFF,
+	/* DW5 */
+	RSP_GET_PARAM_FT		= (1 << 0),
+	RSP_GET_PARAM_SLI_REV_MASK	= (0xF << 4),
+	RSP_GET_PARAM_SLI_FAM_MASK	= (0xF << 8),
+	RSP_GET_PARAM_IF_TYPE_MASK	= (0xF << 12),
+	RSP_GET_PARAM_SLI_HINT1_MASK	= (0xFF << 16),
+	RSP_GET_PARAM_SLI_HINT2_MASK	= (0x1F << 24),
+	/* DW6 */
+	RSP_GET_PARAM_EQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_EQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_EQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW8 */
+	RSP_GET_PARAM_CQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_CQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_CQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW10 */
+	RSP_GET_PARAM_MQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_MQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW12 */
+	RSP_GET_PARAM_WQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_WQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_WQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW14 */
+	RSP_GET_PARAM_RQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_RQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_RQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW15W1*/
+	RSP_GET_PARAM_RQ_DB_WINDOW_MASK	= 0xF000,
+	/* DW16 */
+	RSP_GET_PARAM_FC		= (1 << 0),
+	RSP_GET_PARAM_EXT		= (1 << 1),
+	RSP_GET_PARAM_HDRR		= (1 << 2),
+	RSP_GET_PARAM_SGLR		= (1 << 3),
+	RSP_GET_PARAM_FBRR		= (1 << 4),
+	RSP_GET_PARAM_AREG		= (1 << 5),
+	RSP_GET_PARAM_TGT		= (1 << 6),
+	RSP_GET_PARAM_TERP		= (1 << 7),
+	RSP_GET_PARAM_ASSI		= (1 << 8),
+	RSP_GET_PARAM_WCHN		= (1 << 9),
+	RSP_GET_PARAM_TCCA		= (1 << 10),
+	RSP_GET_PARAM_TRTY		= (1 << 11),
+	RSP_GET_PARAM_TRIR		= (1 << 12),
+	RSP_GET_PARAM_PHOFF		= (1 << 13),
+	RSP_GET_PARAM_PHON		= (1 << 14),
+	RSP_GET_PARAM_PHWQ		= (1 << 15),
+	RSP_GET_PARAM_BOUND_4GA		= (1 << 16),
+	RSP_GET_PARAM_RXC		= (1 << 17),
+	RSP_GET_PARAM_HLM		= (1 << 18),
+	RSP_GET_PARAM_IPR		= (1 << 19),
+	RSP_GET_PARAM_RXRI		= (1 << 20),
+	RSP_GET_PARAM_SGLC		= (1 << 21),
+	RSP_GET_PARAM_TIMM		= (1 << 22),
+	RSP_GET_PARAM_TSMM		= (1 << 23),
+	RSP_GET_PARAM_OAS		= (1 << 25),
+	RSP_GET_PARAM_LC		= (1 << 26),
+	RSP_GET_PARAM_AGXF		= (1 << 27),
+	RSP_GET_PARAM_LOOPBACK_MASK	= (0xF << 28),
+	/* DW18 */
+	RSP_GET_PARAM_SGL_PAGE_CNT_MASK = (0xF << 0),
+	RSP_GET_PARAM_SGL_PAGE_SZS_MASK = (0xFF << 8),
+	RSP_GET_PARAM_SGL_PP_ALIGN_MASK = (0xFF << 16)
+};
+
+struct sli4_rqst_cmn_get_sli4_params_s {
+	struct sli4_rqst_hdr_s	hdr;
+};
+
+struct sli4_rsp_cmn_get_sli4_params_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32		dw4_protocol_type;
+	__le32		dw5_sli;
+	__le32		dw6_eq_page_cnt;
+	__le16		eqe_count_mask;
+	__le16		rsvd26;
+	__le32		dw8_cq_page_cnt;
+	__le16		cqe_count_mask;
+	__le16		rsvd34;
+	__le32		dw10_mq_page_cnt;
+	__le16		mqe_count_mask;
+	__le16		rsvd42;
+	__le32		dw12_wq_page_cnt;
+	__le16		wqe_count_mask;
+	__le16		rsvd50;
+	__le32		dw14_rq_page_cnt;
+	__le16		rqe_count_mask;
+	__le16		dw15w1_rq_db_window;
+	__le32		dw16_loopback_scope;
+	__le32		sge_supported_length;
+	__le32		dw18_sgl_page_cnt;
+	__le16		min_rq_buffer_size;
+	__le16		rsvd75;
+	__le32		max_rq_buffer_size;
+	__le16		physical_xri_max;
+	__le16		physical_rpi_max;
+	__le16		physical_vpi_max;
+	__le16		physical_vfi_max;
+	__le32		rsvd88;
+	__le16		frag_num_field_offset;
+	__le16		frag_num_field_size;
+	__le16		sgl_index_field_offset;
+	__le16		sgl_index_field_size;
+	__le32		chain_sge_initial_value_lo;
+	__le32		chain_sge_initial_value_hi;
+};
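+
+/*
+ * Illustrative sketch only: extracting the SLI revision, SLI family and
+ * interface type from dw5_sli of the GET_SLI4_PARAMETERS response using the
+ * DW5 masks defined above. The helper name and the shift values (taken from
+ * the mask definitions) are assumptions for illustration.
+ */
+static inline void
+sli_example_parse_sli4_params_dw5(struct sli4_rsp_cmn_get_sli4_params_s *rsp,
+				  u32 *sli_rev, u32 *sli_family, u32 *if_type)
+{
+	u32 dw5 = le32_to_cpu(rsp->dw5_sli);
+
+	*sli_rev = (dw5 & RSP_GET_PARAM_SLI_REV_MASK) >> 4;
+	*sli_family = (dw5 & RSP_GET_PARAM_SLI_FAM_MASK) >> 8;
+	*if_type = (dw5 & RSP_GET_PARAM_IF_TYPE_MASK) >> 12;
+}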
+
+/**
+ * @brief COMMON_QUERY_FW_CONFIG
+ *
+ * This command retrieves firmware configuration parameters and adapter
+ * resources available to the driver.
+ */
+struct sli4_rqst_cmn_query_fw_config_s {
+	struct sli4_rqst_hdr_s	hdr;
+};
+
+#define SLI4_FUNCTION_MODE_INI_MODE 0x40
+#define SLI4_FUNCTION_MODE_TGT_MODE 0x80
+#define SLI4_FUNCTION_MODE_DUA_MODE      0x800
+
+#define SLI4_ULP_MODE_INI           0x40
+#define SLI4_ULP_MODE_TGT           0x80
+
+struct sli4_rsp_cmn_query_fw_config_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32		config_number;
+	__le32		asic_rev;
+	__le32		physical_port;
+	__le32		function_mode;
+	__le32		ulp0_mode;
+	__le32		ulp0_nic_wqid_base;
+	__le32		ulp0_nic_wq_total; /* Dword 10 */
+	__le32		ulp0_toe_wqid_base;
+	__le32		ulp0_toe_wq_total;
+	__le32		ulp0_toe_rqid_base;
+	__le32		ulp0_toe_rq_total;
+	__le32		ulp0_toe_defrqid_base;
+	__le32		ulp0_toe_defrq_total;
+	__le32		ulp0_lro_rqid_base;
+	__le32		ulp0_lro_rq_total;
+	__le32		ulp0_iscsi_icd_base;
+	__le32		ulp0_iscsi_icd_total; /* Dword 20 */
+	__le32		ulp1_mode;
+	__le32		ulp1_nic_wqid_base;
+	__le32		ulp1_nic_wq_total;
+	__le32		ulp1_toe_wqid_base;
+	__le32		ulp1_toe_wq_total;
+	__le32		ulp1_toe_rqid_base;
+	__le32		ulp1_toe_rq_total;
+	__le32		ulp1_toe_defrqid_base;
+	__le32		ulp1_toe_defrq_total;
+	__le32		ulp1_lro_rqid_base;  /* Dword 30 */
+	__le32		ulp1_lro_rq_total;
+	__le32		ulp1_iscsi_icd_base;
+	__le32		ulp1_iscsi_icd_total;
+	__le32		function_capabilities;
+	__le32		ulp0_cq_base;
+	__le32		ulp0_cq_total;
+	__le32		ulp0_eq_base;
+	__le32		ulp0_eq_total;
+	__le32		ulp0_iscsi_chain_icd_base;
+	__le32		ulp0_iscsi_chain_icd_total;  /* Dword 40 */
+	__le32		ulp1_iscsi_chain_icd_base;
+	__le32		ulp1_iscsi_chain_icd_total;
+};
+
+/**
+ * @brief COMMON_GET_PORT_NAME
+ */
+/*Port Types*/
+enum {
+	PORT_TYPE_ETH	= 0,
+	PORT_TYPE_FC	= 1,
+};
+
+struct sli4_rqst_cmn_get_port_name_s {
+	struct sli4_rqst_hdr_s	hdr;
+	u8      port_type;
+	u8      rsvd4[3];
+};
+
+struct sli4_rsp_cmn_get_port_name_s {
+	struct sli4_rsp_hdr_s	hdr;
+	char		port_name[4];
+};
+
+/**
+ * @brief COMMON_WRITE_FLASHROM
+ */
+struct sli4_rqst_cmn_write_flashrom_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		flash_rom_access_opcode;
+	__le32		flash_rom_access_operation_type;
+	__le32		data_buffer_size;
+	__le32		offset;
+	u8		data_buffer[4];
+};
+
+#define SLI4_MGMT_FLASHROM_OPCODE_FLASH			0x01
+#define SLI4_MGMT_FLASHROM_OPCODE_SAVE			0x02
+#define SLI4_MGMT_FLASHROM_OPCODE_CLEAR			0x03
+#define SLI4_MGMT_FLASHROM_OPCODE_REPORT		0x04
+#define SLI4_MGMT_FLASHROM_OPCODE_IMAGE_INFO		0x05
+#define SLI4_MGMT_FLASHROM_OPCODE_IMAGE_CRC		0x06
+#define SLI4_MGMT_FLASHROM_OPCODE_OFFSET_BASED_FLASH	0x07
+#define SLI4_MGMT_FLASHROM_OPCODE_OFFSET_BASED_SAVE	0x08
+#define SLI4_MGMT_PHY_FLASHROM_OPCODE_FLASH		0x09
+#define SLI4_MGMT_PHY_FLASHROM_OPCODE_SAVE		0x0a
+
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ISCSI		0x00
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_REDBOOT		0x01
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_BIOS		0x02
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_PXE_BIOS		0x03
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_CODE_CONTROL	0x04
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_IPSEC_CFG		0x05
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_INIT_DATA		0x06
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ROM_OFFSET	0x07
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_FC_BIOS		0x08
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ISCSI_BAK		0x09
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_FC_ACT		0x0a
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_FC_BAK		0x0b
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_CODE_CTRL_P	0x0c
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_NCSI		0x0d
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_NIC		0x0e
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_DCBX		0x0f
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_PXE_BIOS_CFG	0x10
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ALL_CFG_DATA	0x11
+
+/**
+ * @brief COMMON_READ_TRANSCEIVER_DATA
+ *
+ * This command reads SFF transceiver data (the format is defined
+ * by the SFF-8472 specification).
+ */
+struct sli4_rqst_cmn_read_transceiver_data_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		page_number;
+	__le32		port;
+};
+
+struct sli4_rsp_cmn_read_transceiver_data_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32		page_number;
+	__le32		port;
+	__le32		page_data[32];
+	__le32		page_data_2[32];
+};
+
+/**
+ * @brief COMMON_READ_OBJECT
+ */
+
+enum {
+	SLI4_REQ_DESIRE_READLEN = 0xFFFFFF
+};
+
+struct sli4_rqst_cmn_read_object_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		desired_read_length_dword;
+	__le32		read_offset;
+	u8		object_name[104];
+	__le32		host_buffer_descriptor_count;
+	struct sli4_bde_s	host_buffer_descriptor[0];
+};
+
+enum {
+	RSP_COM_READ_OBJ_EOF = 0x80000000
+};
+
+struct sli4_rsp_cmn_read_object_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32		actual_read_length;
+	__le32		eof_dword;
+};
+
+/**
+ * @brief COMMON_WRITE_OBJECT
+ */
+
+enum {
+	SLI4_RQ_DES_WRITE_LEN = 0xFFFFFF,
+	SLI4_RQ_DES_WRITE_LEN_NOC = 0x40000000,
+	SLI4_RQ_DES_WRITE_LEN_EOF = 0x80000000
+};
+
+struct sli4_rqst_cmn_write_object_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		desired_write_len_dword;
+	__le32		write_offset;
+	u8		object_name[104];
+	__le32		host_buffer_descriptor_count;
+	struct sli4_bde_s	host_buffer_descriptor[0];
+};
+
+enum {
+	RSP_CHANGE_STATUS = 0xFF
+};
+
+struct sli4_rsp_cmn_write_object_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32		actual_write_length;
+	__le32		change_status_dword;
+};
+
+/**
+ * @brief COMMON_DELETE_OBJECT
+ */
+struct sli4_rqst_cmn_delete_object_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		rsvd4;
+	__le32		rsvd5;
+	u8		object_name[104];
+};
+
+/**
+ * @brief COMMON_READ_OBJECT_LIST
+ */
+
+enum {
+	SLI4_RQ_OBJ_LIST_READ_LEN = 0xFFFFFF
+};
+
+struct sli4_rqst_cmn_read_object_list_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		desired_read_length_dword;
+	__le32		read_offset;
+	u8		object_name[104];
+	__le32		host_buffer_descriptor_count;
+	struct sli4_bde_s	host_buffer_descriptor[0];
+};
+
+/**
+ * @brief COMMON_SET_DUMP_LOCATION
+ */
+
+enum {
+	SLI4_RQ_COM_SET_DUMP_BUFFER_LEN = 0xFFFFFF,
+	SLI4_RQ_COM_SET_DUMP_FDB = 0x20000000,
+	SLI4_RQ_COM_SET_DUMP_BLP = 0x40000000,
+	SLI4_RQ_COM_SET_DUMP_QRY = 0x80000000,
+};
+
+struct sli4_rqst_cmn_set_dump_location_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		buffer_length_dword;
+	__le32		buf_addr_low;
+	__le32		buf_addr_high;
+};
+
+enum {
+	RSP_SET_DUMP_BUFFER_LEN = 0xFFFFFF
+};
+
+struct sli4_rsp_cmn_set_dump_location_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32		buffer_length_dword;
+};
+
+/**
+ * @brief COMMON_SET_FEATURES
+ */
+#define SLI4_SET_FEATURES_DIF_SEED			0x01
+#define SLI4_SET_FEATURES_XRI_TIMER			0x03
+#define SLI4_SET_FEATURES_MAX_PCIE_SPEED		0x04
+#define SLI4_SET_FEATURES_FCTL_CHECK			0x05
+#define SLI4_SET_FEATURES_FEC				0x06
+#define SLI4_SET_FEATURES_PCIE_RECV_DETECT		0x07
+#define SLI4_SET_FEATURES_DIF_MEMORY_MODE		0x08
+#define SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE	0x09
+#define SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS		0x0A
+#define SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI	0x0C
+#define SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE	0x0D
+#define SLI4_SET_FEATURES_SET_FTD_XFER_HINT		0x0F
+#define SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK		0x11
+
+struct sli4_rqst_cmn_set_features_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		feature;
+	__le32		param_len;
+	__le32		params[8];
+};
+
+struct sli4_rqst_cmn_set_features_dif_seed_s {
+	__le16		seed;
+	__le16		rsvd16;
+};
+
+enum {
+	SLI4_RQ_COM_SET_T10_PI_MEM_MODEL = 0x1
+};
+
+struct sli4_rqst_cmn_set_features_t10_pi_mem_model_s {
+	__le32		tmm_dword;
+};
+
+enum {
+	SLI4_RQ_MULTIRQ_ISR = 0x1,
+	SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2,
+
+	SLI4_RQ_MULTIRQ_NUM_RQS = 0xFF,
+	SLI4_RQ_MULTIRQ_RQ_SELECT = 0xF00
+};
+
+struct sli4_rqst_cmn_set_features_multirq_s {
+	/* Includes Sequence Reporting / Auto Generate XFER-RDY enable bits */
+	__le32		auto_gen_xfer_dword;
+	__le32		num_rqs_dword;
+};
+
+enum {
+	SLI4_SETFEAT_XFERRDY_T10PI_RTC	= (1 << 0),	/* DW0 */
+	SLI4_SETFEAT_XFERRDY_T10PI_ATV	= (1 << 1),
+	SLI4_SETFEAT_XFERRDY_T10PI_TMM	= (1 << 2),
+	SLI4_SETFEAT_XFERRDY_T10PI_PTYPE = (0x7 << 4),
+	SLI4_SETFEAT_XFERRDY_T10PI_BLKSIZ = (0x7 << 7),
+};
+
+struct sli4_rqst_cmn_set_features_xfer_rdy_t10pi_s {
+	__le32		dw0_flags;
+	__le16		app_tag;
+	__le16		rsvd6;
+};
+
+enum {
+	SLI4_RQ_HEALTH_CHECK_ENABLE = 0x1,
+	SLI4_RQ_HEALTH_CHECK_QUERY = 0x2
+};
+
+struct sli4_rqst_cmn_set_features_health_check_s {
+	__le32		health_check_dword;
+};
+
+struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint_s {
+	__le32		fdt_xfer_hint;
+};
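+
+/*
+ * Illustrative sketch only: a SET_FEATURES request carries a feature code
+ * plus a small parameter block copied into params[]. For the FDT transfer
+ * hint it could look like the following. The helper name, and the
+ * assumptions that the caller zeroed the buffer and that <linux/string.h>
+ * provides memcpy(), are for illustration.
+ */
+static inline void
+sli_example_set_fdt_xfer_hint(struct sli4_rqst_cmn_set_features_s *req,
+			      u32 hint)
+{
+	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint_s param = {
+		.fdt_xfer_hint = cpu_to_le32(hint),
+	};
+
+	req->feature = cpu_to_le32(SLI4_SET_FEATURES_SET_FTD_XFER_HINT);
+	req->param_len = cpu_to_le32(sizeof(param));
+	memcpy(req->params, &param, sizeof(param));
+}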
+
+/**
+ * @brief DMTF_EXEC_CLP_CMD
+ */
+struct sli4_rqst_dmtf_exec_clp_cmd_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le32		cmd_buf_length;
+	__le32		resp_buf_length;
+	__le32		cmd_buf_addr_low;
+	__le32		cmd_buf_addr_high;
+	__le32		resp_buf_addr_low;
+	__le32		resp_buf_addr_high;
+};
+
+struct sli4_rsp_dmtf_exec_clp_cmd_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32		rsvd4;
+	__le32		resp_length;
+	__le32		rsvd6;
+	__le32		rsvd7;
+	__le32		rsvd8;
+	__le32		rsvd9;
+	__le32		clp_status;
+	__le32		clp_detailed_status;
+};
+
+#define SLI4_PROTOCOL_FC			0x10
+#define SLI4_PROTOCOL_DEFAULT			0xff
+
+struct sli4_rspource_descriptor_v1_s {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		rsvd16;
+	__le32		type_specific[0];
+};
+
+enum {
+	SLI4_PCIE_DESC_IMM = 0x4000,
+	SLI4_PCIE_DESC_NOSV = 0x8000,
+
+	SLI4_PCIE_DESC_PF_NO = 0x3FF0000,
+
+	SLI4_PCIE_DESC_MISSN_ROLE = 0xFF,
+	SLI4_PCIE_DESC_PCHG = 0x8000000,
+	SLI4_PCIE_DESC_SCHG = 0x10000000,
+	SLI4_PCIE_DESC_XCHG = 0x20000000,
+	SLI4_PCIE_DESC_XROM = 0xC0000000
+};
+
+struct sli4_pcie_resource_descriptor_v1_s {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		imm_nosv_dword;
+	__le32		pf_number_dword;
+	__le32		rsvd3;
+	u8		sriov_state;
+	u8		pf_state;
+	u8		pf_type;
+	u8		rsvd4;
+	__le16		number_of_vfs;
+	__le16		rsvd5;
+	__le32		mission_roles_dword;
+	__le32		rsvd7[16];
+};
+
+/**
+ * @brief COMMON_GET_FUNCTION_CONFIG
+ */
+struct sli4_rqst_cmn_get_function_config_s {
+	struct sli4_rqst_hdr_s  hdr;
+};
+
+struct sli4_rsp_cmn_get_function_config_s {
+	struct sli4_rsp_hdr_s  hdr;
+	__le32		desc_count;
+	__le32		desc[54];
+};
+
+/**
+ * @brief COMMON_GET_PROFILE_CONFIG
+ */
+
+enum {
+	SLI4_RQ_GET_PROFILE_ID = 0xFF,
+	SLI4_RQ_GET_PROFILE_TYPE = 0x300
+};
+
+struct sli4_rqst_cmn_get_profile_config_s {
+	struct sli4_rqst_hdr_s  hdr;
+	__le32		profile_id_dword;
+};
+
+struct sli4_rsp_cmn_get_profile_config_s {
+	struct sli4_rsp_hdr_s  hdr;
+	__le32			desc_count;
+	__le32			desc[0];
+};
+
+/**
+ * @brief COMMON_SET_PROFILE_CONFIG
+ */
+
+enum {
+	SLI4_RQ_SET_PROFILE_ID = 0xFF,
+	SLI4_RQ_SET_PROFILE_ISAP = 0x80000000
+};
+
+struct sli4_rqst_cmn_set_profile_config_s {
+	struct sli4_rqst_hdr_s  hdr;
+	__le32		profile_id_dword;
+	__le32		desc_count;
+	__le32		desc[0];
+};
+
+struct sli4_rsp_cmn_set_profile_config_s {
+	struct sli4_rsp_hdr_s  hdr;
+};
+
+/**
+ * @brief Profile Descriptor for profile functions
+ */
+struct sli4_profile_descriptor_s {
+	u8		profile_id;
+	u8		rsvd8;
+	u8		profile_index;
+	u8		rsvd24;
+	__le32		profile_description[128];
+};
+
+/*
+ * We don't know in advance how many descriptors there are.  We have
+ * to pick a number that we think will be big enough and ask for that
+ * many.
+ */
+
+#define MAX_PROD_DES	40
+
+/**
+ * @brief COMMON_GET_PROFILE_LIST
+ */
+
+enum {
+	SLI4_RQ_PROFILE_INDEX = 0xFF
+};
+
+struct sli4_rqst_cmn_get_profile_list_s {
+	struct sli4_rqst_hdr_s  hdr;
+	__le32	start_profile_index_dword;
+};
+
+struct sli4_rsp_cmn_get_profile_list_s {
+	struct sli4_rsp_hdr_s  hdr;
+	__le32		profile_descriptor_count;
+	struct sli4_profile_descriptor_s profile_descriptor[MAX_PROD_DES];
+};
+
+/**
+ * @brief COMMON_GET_ACTIVE_PROFILE
+ */
+struct sli4_rqst_cmn_get_active_profile_s {
+	struct sli4_rqst_hdr_s  hdr;
+};
+
+struct sli4_rsp_cmn_get_active_profile_s {
+	struct sli4_rsp_hdr_s  hdr;
+	u8		active_profile_id;
+	u8		rsvd0;
+	u8		next_profile_id;
+	u8		rsvd1;
+};
+
+/**
+ * @brief COMMON_SET_ACTIVE_PROFILE
+ */
+
+enum {
+	SLI4_REQ_SETACTIVE_PROF_ID = 0xFF,
+	SLI4_REQ_SETACTIVE_PROF_FD = 0x80000000
+};
+
+struct sli4_rqst_cmn_set_active_profile_s {
+	struct sli4_rqst_hdr_s  hdr;
+	__le32	active_profile_id_dword;
+};
+
+struct sli4_rsp_cmn_set_active_profile_s {
+	struct sli4_rsp_hdr_s  hdr;
+};
+
+/**
+ * @brief Link Config Descriptor for link config functions
+ */
+struct sli4_link_config_descriptor_s {
+	u8		link_config_id;
+	u8		rsvd1[3];
+	__le32		config_description[8];
+};
+
+#define MAX_LINK_DES	10
+
+/**
+ * @brief COMMON_GET_RECONFIG_LINK_INFO
+ */
+struct sli4_rqst_cmn_get_reconfig_link_info_s {
+	struct sli4_rqst_hdr_s  hdr;
+};
+
+struct sli4_rsp_cmn_get_reconfig_link_info_s {
+	struct sli4_rsp_hdr_s  hdr;
+	u8		active_link_config_id;
+	u8		rsvd17;
+	u8		next_link_config_id;
+	u8		rsvd19;
+	__le32		link_configuration_descriptor_count;
+	struct sli4_link_config_descriptor_s    desc[MAX_LINK_DES];
+};
+
+/**
+ * @brief COMMON_SET_RECONFIG_LINK_ID
+ */
+enum {
+	SLI4_SET_RECONFIG_LINKID_NEXT	= 0xff,
+	SLI4_SET_RECONFIG_LINKID_FD	= (1 << 31),
+};
+
+struct sli4_rqst_cmn_set_reconfig_link_id_s {
+	struct sli4_rqst_hdr_s  hdr;
+	__le32		dw4_flags;
+};
+
+struct sli4_rsp_cmn_set_reconfig_link_id_s {
+	struct sli4_rsp_hdr_s  hdr;
+};
+
+struct sli4_rqst_lowlevel_set_watchdog_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		watchdog_timeout;
+	__le16		rsvd18;
+};
+
+struct sli4_rsp_lowlevel_set_watchdog_s {
+	struct sli4_rsp_hdr_s	hdr;
+	__le32			rsvd;
+};
+
+/*
+ * Maximum value for an FCFI
+ *
+ * Note that although most commands provide a 16-bit field for the FCFI,
+ * the FC/FCoE Asynchronous Received CQE format only provides 6 bits for
+ * the returned FCFI. So, effectively, the FCFI cannot be larger than
+ * 1 << 6 or 64.
+ */
+#define SLI4_MAX_FCFI	64
+
+/**
+ * FC opcode (OPC) values.
+ */
+#define SLI4_OPC_WQ_CREATE		0x1
+#define SLI4_OPC_WQ_DESTROY		0x2
+#define SLI4_OPC_POST_SGL_PAGES		0x3
+#define SLI4_OPC_RQ_CREATE		0x5
+#define SLI4_OPC_RQ_DESTROY		0x6
+#define SLI4_OPC_READ_FCF_TABLE		0x8
+#define SLI4_OPC_POST_HDR_TEMPLATES	0xb
+#define SLI4_OPC_REDISCOVER_FCF		0x10
+
+/* Use the default CQ associated with the WQ */
+#define SLI4_CQ_DEFAULT 0xffff
+
+/**
+ * @brief POST_SGL_PAGES
+ *
+ * Register the scatter gather list (SGL) memory and associate it with an XRI.
+ */
+struct sli4_rqst_post_sgl_pages_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		xri_start;
+	__le16		xri_count;
+	struct {
+		__le32		page0_low;
+		__le32		page0_high;
+		__le32		page1_low;
+		__le32		page1_high;
+	} page_set[10];
+};
+
+struct sli4_rsp_post_sgl_pages_s {
+	struct sli4_rsp_hdr_s	hdr;
+};
+
+/**
+ * @brief POST_HDR_TEMPLATES
+ */
+struct sli4_rqst_post_hdr_templates_s {
+	struct sli4_rqst_hdr_s	hdr;
+	__le16		rpi_offset;
+	__le16		page_count;
+	struct sli4_dmaaddr_s page_descriptor[0];
+};
+
+#define SLI4_HDR_TEMPLATE_SIZE	64
+
+/** The XRI associated with this IO is already active */
+#define SLI4_IO_CONTINUATION		(1 << 0)
+/** Automatically generate a good RSP frame */
+#define SLI4_IO_AUTO_GOOD_RESPONSE	(1 << 1)
+#define SLI4_IO_NO_ABORT		(1 << 2)
+/** Set the DNRX bit because no auto xfer rdy buffer is posted */
+#define SLI4_IO_DNRX			(1 << 3)
+
+
+enum sli4_callback_e {
+	SLI4_CB_LINK,
+	SLI4_CB_MAX			/* must be last */
+};
+
+enum sli4_link_status_e {
+	SLI_LINK_STATUS_UP,
+	SLI_LINK_STATUS_DOWN,
+	SLI_LINK_STATUS_NO_ALPA,
+	SLI_LINK_STATUS_MAX,
+};
+
+enum sli4_link_topology_e {
+	SLI_LINK_TOPO_NPORT = 1,	/** fabric or point-to-point */
+	SLI_LINK_TOPO_LOOP,
+	SLI_LINK_TOPO_LOOPBACK_INTERNAL,
+	SLI_LINK_TOPO_LOOPBACK_EXTERNAL,
+	SLI_LINK_TOPO_NONE,
+	SLI_LINK_TOPO_MAX,
+};
+
+enum sli4_link_medium_e {
+	SLI_LINK_MEDIUM_ETHERNET,
+	SLI_LINK_MEDIUM_FC,
+	SLI_LINK_MEDIUM_MAX,
+};
+
+/*Driver specific structures*/
+
+struct sli4_link_event_s {
+	enum sli4_link_status_e	status;		/* link up/down */
+	enum sli4_link_topology_e	topology;
+	enum sli4_link_medium_e	medium;		/* Ethernet / FC */
+	u32		speed;		/* Mbps */
+	u8		*loop_map;
+	u32		fc_id;
+};
+
+enum sli4_resource_e {
+	SLI_RSRC_VFI,
+	SLI_RSRC_VPI,
+	SLI_RSRC_RPI,
+	SLI_RSRC_XRI,
+	SLI_RSRC_FCFI,
+	SLI_RSRC_MAX			/* must be last */
+};
+
+struct sli4_extent_s {
+	u32	number;	/* number of extents */
+	u32	size;	/* number of elements in each extent */
+	u32	n_alloc;	/* number of elements allocated */
+	u32	*base;
+	unsigned long	*use_map; /* bitmap of resources in use */
+	u32	map_size;	/* number of bits in the bitmap */
+};
+
+struct sli4_queue_info_s {
+	u16	max_qcount[SLI_QTYPE_MAX];
+	u32	max_qentries[SLI_QTYPE_MAX];
+	u16	count_mask[SLI_QTYPE_MAX];
+	u16	count_method[SLI_QTYPE_MAX];
+	u32	qpage_count[SLI_QTYPE_MAX];
+};
+
+struct sli4_s {
+	void	*os;
+	struct pci_dev	*pcidev;
+#define	SLI_PCI_MAX_REGS		6
+	void __iomem *reg[SLI_PCI_MAX_REGS];
+
+	u32	sli_rev;	/* SLI revision number */
+	u32	sli_family;
+	u32	if_type;	/* SLI Interface type */
+
+	u16	asic_type;
+	u16	asic_rev;
+
+	u16	e_d_tov;
+	u16	r_a_tov;
+	struct sli4_queue_info_s qinfo;
+	u16	link_module_type;
+	u8	rq_batch;
+	u16	rq_min_buf_size;
+	u32	rq_max_buf_size;
+	u8	topology;
+	u8	wwpn[8];	/* WW Port Name */
+	u8	wwnn[8];	/* WW Node Name */
+	u32	fw_rev[2];
+	u8	fw_name[2][16];
+	char	ipl_name[16];
+	u32	hw_rev[3];
+	u8	port_number;
+	char	port_name[2];
+	char	modeldesc[64];
+	char	bios_version_string[32];
+	/*
+	 * Tracks the port resources using extents metaphor. For
+	 * devices that don't implement extents (i.e.
+	 * has_extents == FALSE), the code models each resource as
+	 * a single large extent.
+	 */
+	struct sli4_extent_s	extent[SLI_RSRC_MAX];
+	u32	features;
+	u32	has_extents:1,
+		auto_reg:1,
+		auto_xfer_rdy:1,
+		hdr_template_req:1,
+		perf_hint:1,
+		perf_wq_id_association:1,
+		cq_create_version:2,
+		mq_create_version:2,
+		high_login_mode:1,
+		sgl_pre_registered:1,
+		sgl_pre_registration_required:1,
+		t10_dif_inline_capable:1,
+		t10_dif_separate_capable:1;
+	u32	sge_supported_length;
+	u32	sgl_page_sizes;
+	u32	max_sgl_pages;
+	u32	wqe_size;
+
+	/*
+	 * Callback functions
+	 */
+	int	(*link)(void *ctx, void *event);
+	void	*link_arg;
+
+	struct efc_dma_s	bmbx;
+
+	/* Save pointer to physical memory descriptor for non-embedded
+	 * SLI_CONFIG commands for BMBX dumping purposes
+	 */
+	struct efc_dma_s	*bmbx_non_emb_pmd;
+
+	struct efc_dma_s	vpd_data;
+	u32	vpd_length;
+};
+
 #endif /* !_SLI4_H */
-- 
2.13.7



* [PATCH 04/32] elx: libefc_sli: queue create/destroy/parse routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (2 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-25 15:35   ` Daniel Wagner
  2019-10-23 21:55 ` [PATCH 05/32] elx: libefc_sli: Populate and post different WQEs James Smart
                   ` (28 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds service routines to create mailbox commands
and adds APIs to create/destroy/parse SLI-4 EQ, CQ, RQ and MQ queues.
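
A rough usage sketch of the new queue APIs (illustrative only, not part
of the patch; assumes an already initialized struct sli4_s *sli4 and uses
only functions added here):

  struct sli4_queue_s eq = {0}, cq = {0};

  /* allocate an EQ, then a CQ bound to that EQ */
  if (sli_queue_alloc(sli4, SLI_QTYPE_EQ, &eq, 1024, NULL))
          return EFC_FAIL;
  if (sli_queue_alloc(sli4, SLI_QTYPE_CQ, &cq, 1024, &eq))
          return EFC_FAIL;

  /* ... process entries with sli_eq_read()/sli_cq_read() ... */

  /* send the destroy mailbox commands and free the DMA memory */
  sli_queue_free(sli4, &cq, true, true);
  sli_queue_free(sli4, &eq, true, true);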

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/include/efc_common.h |   18 +
 drivers/scsi/elx/libefc_sli/sli4.c    | 2155 +++++++++++++++++++++++++++++++++
 2 files changed, 2173 insertions(+)

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
index dbabc4f6ee5e..62d0f3b3f936 100644
--- a/drivers/scsi/elx/include/efc_common.h
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -23,4 +23,22 @@ struct efc_dma_s {
 	struct pci_dev	*pdev;
 };
 
+#define efc_log_crit(efc, fmt, args...) \
+		dev_crit(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_err(efc, fmt, args...) \
+		dev_err(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_warn(efc, fmt, args...) \
+		dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_info(efc, fmt, args...) \
+		dev_info(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_test(efc, fmt, args...) \
+		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_debug(efc, fmt, args...) \
+		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
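+
+/*
+ * Note: these wrappers only require that the context passed as 'efc'
+ * embeds a 'pcidev' (struct pci_dev *) member; for example,
+ * efc_log_err(sli4, "...") resolves to dev_err() on the adapter's
+ * PCI device.
+ */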
+
 #endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 68ccd3ad8ac8..6b62b7d8b5a4 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -24,3 +24,2158 @@ static struct sli4_asic_entry_t sli4_asic_table[] = {
 	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
 	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
 };
+
+/*
+ * @brief Convert queue type enum (SLI_QTYPE_*) into a string.
+ */
+static char *SLI_QNAME[] = {
+	"Event Queue",
+	"Completion Queue",
+	"Mailbox Queue",
+	"Work Queue",
+	"Receive Queue",
+	"Undefined"
+};
+
+/**
+ * @ingroup sli
+ * @brief Write a SLI_CONFIG command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param length Length in bytes of attached command.
+ * @param dma DMA buffer for non-embedded commands.
+ *
+ * @return Returns a pointer to the location where the attached command
+ * should be written (embedded payload or external DMA buffer), or NULL
+ * on error.
+ */
+static void *
+sli_config_cmd_init(struct sli4_s *sli4, void *buf,
+		    size_t size, u32 length,
+		    struct efc_dma_s *dma)
+{
+	struct sli4_cmd_sli_config_s *sli_config = NULL;
+	u32 flags = 0;
+
+	if (length > sizeof(sli_config->payload.embed) && !dma) {
+		efc_log_info(sli4, "length(%d) > payload(%zu)\n",
+			length, sizeof(sli_config->payload.embed));
+		return NULL;
+	}
+
+	sli_config = buf;
+
+	memset(buf, 0, size);
+
+	sli_config->hdr.command = MBX_CMD_SLI_CONFIG;
+	if (!dma) {
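+		/* Embedded: the command payload is carried in the mailbox buffer */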
+		flags |= SLI4_SLICONF_EMB;
+		sli_config->dw1_flags = cpu_to_le32(flags);
+		sli_config->payload_len = cpu_to_le32(length);
+	} else {
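+		/* Non-embedded: the payload is in the caller's DMA buffer */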
+		flags = SLI4_SLICONF_PMDCMD_VAL_1;	/* pmd_count = 1 */
+		flags &= ~SLI4_SLICONF_EMB;
+		sli_config->dw1_flags = cpu_to_le32(flags);
+
+		sli_config->payload.mem.addr.low =
+			cpu_to_le32(lower_32_bits(dma->phys));
+		sli_config->payload.mem.addr.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+		sli_config->payload.mem.length =
+			cpu_to_le32(dma->size & SLI4_SLICONFIG_PMD_LEN);
+		sli_config->payload_len = cpu_to_le32(dma->size);
+		/* save pointer to DMA for BMBX dumping purposes */
+		sli4->bmbx_non_emb_pmd = dma;
+		return dma->virt;
+	}
+
+	return buf + offsetof(struct sli4_cmd_sli_config_s, payload.embed);
+}
+
+/**
+ * @brief Write a COMMON_CREATE_CQ command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param qmem DMA memory for the queue.
+ * @param eq_id Associated EQ_ID.
+ *
+ * @note This creates a Version 2 message.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+static int
+sli_cmd_common_create_cq(struct sli4_s *sli4, void *buf, size_t size,
+			 struct efc_dma_s *qmem,
+			 u16 eq_id)
+{
+	struct sli4_rqst_cmn_create_cq_v2_s	*cqv2 = NULL;
+	u32 p;
+	uintptr_t addr;
+	u32 page_bytes = 0;
+	u32 num_pages = 0;
+	size_t cmd_size = 0;
+	u32 page_size = 0;
+	u32 n_cqe = 0;
+	u32 dw5_flags = 0;
+	u16 dw6w1_arm = 0;
+
+	/* First calculate number of pages and the mailbox cmd length */
+	n_cqe = qmem->size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = 1;
+		break;
+	case 4096:
+		page_size = 2;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+	page_bytes = page_size * SLI_PAGE_SIZE;
+	num_pages = sli_page_count(qmem->size, page_bytes);
+
+	cmd_size = CFG_RQST_CMDSZ(cmn_create_cq_v2) + SZ_DMAADDR * num_pages;
+
+	cqv2 = sli_config_cmd_init(sli4, buf, size, cmd_size, NULL);
+	if (!cqv2)
+		return EFC_FAIL;
+
+	cqv2->hdr.opcode = CMN_CREATE_CQ;
+	cqv2->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	cqv2->hdr.dw3_version = cpu_to_le32(CMD_V2);
+	cmd_size = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_v2,
+					 SZ_DMAADDR * num_pages);
+	cqv2->hdr.request_length = cmd_size;
+	cqv2->page_size = page_size;
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.3) */
+	cqv2->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_COMMON_CREATE_CQ_V2_MAX_PAGES) {
+		return EFC_FAIL;
+	}
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= CQ_CNT_VAL(LARGE);
+		cqv2->cqe_count = cpu_to_le16(n_cqe);
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return -1;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_CQV2_AUTOVALID;
+
+	dw5_flags |= CREATE_CQV2_EVT;
+	dw5_flags |= CREATE_CQV2_VALID;
+
+	cqv2->dw5_flags = cpu_to_le32(dw5_flags);
+	cqv2->dw6w1_arm = cpu_to_le16(dw6w1_arm);
+	cqv2->eq_id = cpu_to_le16(eq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += page_bytes) {
+		cqv2->page_phys_addr[p].low =
+			cpu_to_le32(lower_32_bits(addr));
+		cqv2->page_phys_addr[p].high =
+			cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_DESTROY_CQ command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param cq_id CQ ID
+ *
+ * @note This creates a Version 0 message.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+static int
+sli_cmd_common_destroy_cq(struct sli4_s *sli4, void *buf,
+			  size_t size, u16 cq_id)
+{
+	struct sli4_rqst_cmn_destroy_cq_s *cq = NULL;
+
+	/* Payload length must accommodate both request and response */
+	cq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_destroy_cq), NULL);
+	if (!cq)
+		return EFC_FAIL;
+
+	cq->hdr.opcode = CMN_DESTROY_CQ;
+	cq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	cq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_destroy_cq);
+	cq->cq_id = cpu_to_le16(cq_id);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_CREATE_EQ command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param qmem DMA memory for the queue.
+ * @param ignored1 Ignored (other queue creation routines use this
+ * parameter to pass the associated Q_ID; an EQ has no associated queue,
+ * so it is unused here).
+ *
+ * @note This creates a Version 0 message (Version 2 on interface type 6).
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+static int
+sli_cmd_common_create_eq(struct sli4_s *sli4, void *buf, size_t size,
+			 struct efc_dma_s *qmem,
+			 u16 ignored1)
+{
+	struct sli4_rqst_cmn_create_eq_s *eq = NULL;
+	u32 p;
+	uintptr_t addr;
+	u16 num_pages;
+	u32 dw5_flags = 0;
+	u32 dw6_flags = 0;
+
+	eq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_create_eq), NULL);
+	if (!eq)
+		return EFC_FAIL;
+
+	eq->hdr.opcode = CMN_CREATE_EQ;
+	eq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		eq->hdr.dw3_version = cpu_to_le32(CMD_V2);
+
+	eq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_create_eq);
+
+	/* valid values for number of pages: 1, 2, 4 (sec 4.4.3) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	eq->num_pages = cpu_to_le16(num_pages);
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(1024);
+		break;
+	case 2:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(2048);
+		break;
+	case 4:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(4096);
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_EQ_AUTOVALID;
+
+	dw5_flags |= CREATE_EQ_VALID;
+	dw6_flags &= (~CREATE_EQ_ARM);
+	eq->dw5_flags = cpu_to_le32(dw5_flags);
+	eq->dw6_flags = cpu_to_le32(dw6_flags);
+	eq->dw7_delaymulti = cpu_to_le32(CREATE_EQ_DELAYMULTI);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		eq->page_address[p].low = cpu_to_le32(lower_32_bits(addr));
+		eq->page_address[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_DESTROY_EQ command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param eq_id Queue ID to destroy.
+ *
+ * @note Other queue creation routines use the last parameter to pass in
+ * the associated Q_ID. EQ doesn't have an associated queue so this
+ * parameter is ignored.
+ *
+ * @note This creates a Version 0 message.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+static int
+sli_cmd_common_destroy_eq(struct sli4_s *sli4, void *buf, size_t size,
+			  u16 eq_id)
+{
+	struct sli4_rqst_cmn_destroy_eq_s *eq = NULL;
+
+	eq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_destroy_eq), NULL);
+	if (!eq)
+		return EFC_FAIL;
+
+	eq->hdr.opcode = CMN_DESTROY_EQ;
+	eq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	eq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_destroy_eq);
+
+	eq->eq_id = cpu_to_le16(eq_id);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_CREATE_MQ_EXT command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param qmem DMA memory for the queue.
+ * @param cq_id Associated CQ_ID.
+ *
+ * @note This creates a Version 0 message (Version 1 when
+ * sli4->mq_create_version is set).
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+static int
+sli_cmd_common_create_mq_ext(struct sli4_s *sli4, void *buf, size_t size,
+			     struct efc_dma_s *qmem,
+			     u16 cq_id)
+{
+	struct sli4_rqst_cmn_create_mq_ext_s	*mq = NULL;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+	u16 dw6w1_flags = 0;
+
+	mq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_create_mq_ext),
+				 NULL);
+	if (!mq)
+		return EFC_FAIL;
+
+	mq->hdr.opcode = CMN_CREATE_MQ_EXT;
+	mq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	mq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_create_mq_ext);
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.12) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	mq->num_pages = cpu_to_le16(num_pages);
+	switch (num_pages) {
+	case 1:
+		dw6w1_flags |= SLI4_MQE_SIZE_16;
+		break;
+	case 2:
+		dw6w1_flags |= SLI4_MQE_SIZE_32;
+		break;
+	case 4:
+		dw6w1_flags |= SLI4_MQE_SIZE_64;
+		break;
+	case 8:
+		dw6w1_flags |= SLI4_MQE_SIZE_128;
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	mq->async_event_bitmap = cpu_to_le32(SLI4_ASYNC_EVT_FC_ALL);
+
+	if (sli4->mq_create_version) {
+		mq->cq_id_v1 = cpu_to_le16(cq_id);
+		mq->hdr.dw3_version = cpu_to_le32(CMD_V1);
+	} else {
+		dw6w1_flags |= (cq_id << CREATE_MQEXT_CQID_SHIFT);
+	}
+	mq->dw7_val = cpu_to_le32(CREATE_MQEXT_VAL);
+
+	mq->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		mq->page_phys_addr[p].low =
+			cpu_to_le32(lower_32_bits(addr));
+		mq->page_phys_addr[p].high =
+			cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_DESTROY_MQ command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param mq_id MQ ID
+ *
+ * @note This creates a Version 0 message.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+static int
+sli_cmd_common_destroy_mq(struct sli4_s *sli4, void *buf, size_t size,
+			  u16 mq_id)
+{
+	struct sli4_rqst_cmn_destroy_mq_s *mq = NULL;
+
+	mq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_destroy_mq), NULL);
+	if (!mq)
+		return EFC_FAIL;
+
+	mq->hdr.opcode = CMN_DESTROY_MQ;
+	mq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	mq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_destroy_mq);
+
+	mq->mq_id = cpu_to_le16(mq_id);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an WQ_CREATE command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param qmem DMA memory for the queue.
+ * @param cq_id Associated CQ_ID.
+ *
+ * @note This creates a Version 0 message.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_cmd_wq_create(struct sli4_s *sli4, void *buf, size_t size,
+		  struct efc_dma_s *qmem, u16 cq_id)
+{
+	struct sli4_rqst_wq_create_s	*wq = NULL;
+	u32 p;
+	uintptr_t addr;
+
+	wq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(wq_create), NULL);
+	if (!wq)
+		return EFC_FAIL;
+
+	wq->hdr.opcode = SLI4_OPC_WQ_CREATE;
+	wq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	wq->hdr.request_length = CFG_RQST_PYLD_LEN(wq_create);
+
+	/* valid values for number of pages: 1-4 (sec 4.5.1) */
+	wq->num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	if (!wq->num_pages ||
+	    wq->num_pages > SLI4_WQ_CREATE_V0_MAX_PAGES)
+		return EFC_FAIL;
+
+	wq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys;
+			p < wq->num_pages;
+			p++, addr += SLI_PAGE_SIZE) {
+		wq->page_phys_addr[p].low  =
+				cpu_to_le32(lower_32_bits(addr));
+		wq->page_phys_addr[p].high =
+				cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an WQ_CREATE_V1 command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param qmem DMA memory for the queue.
+ * @param cq_id Associated CQ_ID.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_cmd_wq_create_v1(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *qmem,
+		     u16 cq_id)
+{
+	struct sli4_rqst_wq_create_v1_s *wq = NULL;
+	u32 p;
+	uintptr_t addr;
+	u32 page_size = 0;
+	u32 page_bytes = 0;
+	u32 n_wqe = 0;
+	u16 num_pages;
+
+	wq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(wq_create_v1), NULL);
+	if (!wq)
+		return EFC_FAIL;
+
+	wq->hdr.opcode = SLI4_OPC_WQ_CREATE;
+	wq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	wq->hdr.request_length = CFG_RQST_PYLD_LEN(wq_create_v1);
+	wq->hdr.dw3_version = cpu_to_le32(CMD_V1);
+
+	n_wqe = qmem->size / sli4->wqe_size;
+
+	/*
+	 * This heuristic to determine the page size is simplistic but could
+	 * be made more sophisticated
+	 */
+	switch (qmem->size) {
+	case 4096:
+	case 8192:
+	case 16384:
+	case 32768:
+		page_size = 1;
+		break;
+	case 65536:
+		page_size = 2;
+		break;
+	case 131072:
+		page_size = 4;
+		break;
+	case 262144:
+		page_size = 8;
+		break;
+	case 524288:
+		page_size = 10;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+	page_bytes = page_size * SLI_PAGE_SIZE;
+
+	/* valid values for number of pages: 1-8 */
+	num_pages = sli_page_count(qmem->size, page_bytes);
+	wq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_WQ_CREATE_V1_MAX_PAGES)
+		return EFC_FAIL;
+
+	wq->cq_id = cpu_to_le16(cq_id);
+
+	wq->page_size = page_size;
+
+	if (sli4->wqe_size == SLI4_WQE_EXT_BYTES)
+		wq->wqe_size_byte |= SLI4_WQE_EXT_SIZE;
+	else
+		wq->wqe_size_byte |= SLI4_WQE_SIZE;
+
+	wq->wqe_count = cpu_to_le16(n_wqe);
+
+	for (p = 0, addr = qmem->phys;
+			p < num_pages;
+			p++, addr += page_bytes) {
+		wq->page_phys_addr[p].low  =
+					cpu_to_le32(lower_32_bits(addr));
+		wq->page_phys_addr[p].high =
+					cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an WQ_DESTROY command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param wq_id WQ_ID.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_cmd_wq_destroy(struct sli4_s *sli4, void *buf, size_t size,
+		   u16 wq_id)
+{
+	struct sli4_rqst_wq_destroy_s *wq = NULL;
+
+	wq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(wq_destroy), NULL);
+	if (!wq)
+		return EFC_FAIL;
+
+	wq->hdr.opcode = SLI4_OPC_WQ_DESTROY;
+	wq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	wq->hdr.request_length = CFG_RQST_PYLD_LEN(wq_destroy);
+
+	wq->wq_id = cpu_to_le16(wq_id);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an RQ_CREATE command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param qmem DMA memory for the queue.
+ * @param cq_id Associated CQ_ID.
+ * @param buffer_size Buffer size pointed to by each RQE.
+ *
+ * @note This creates a Version 0 message.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_cmd_rq_create(struct sli4_s *sli4, void *buf, size_t size,
+		  struct efc_dma_s *qmem,
+		  u16 cq_id, u16 buffer_size)
+{
+	struct sli4_rqst_rq_create_s *rq = NULL;
+	u32 p;
+	uintptr_t addr;
+	u16 num_pages;
+
+	rq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(rq_create), NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	rq->hdr.opcode = SLI4_OPC_RQ_CREATE;
+	rq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	rq->hdr.request_length = CFG_RQST_PYLD_LEN(rq_create);
+
+	/* valid values for number of pages: 1-8 (sec 4.5.6) */
+	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	rq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_RQ_CREATE_V0_MAX_PAGES) {
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * RQE count is the log base 2 of the total number of entries
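+	 * (for example, 512 entries is encoded as 9)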
+	 */
+	rq->rqe_count_byte |= 31 - __builtin_clz(qmem->size / SLI4_RQE_SIZE);
+
+	if (buffer_size < SLI4_RQ_CREATE_V0_MIN_BUF_SIZE ||
+	    buffer_size > SLI4_RQ_CREATE_V0_MAX_BUF_SIZE) {
+		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
+		       buffer_size,
+		       SLI4_RQ_CREATE_V0_MIN_BUF_SIZE,
+		       SLI4_RQ_CREATE_V0_MAX_BUF_SIZE);
+		return -1;
+	}
+	rq->buffer_size = cpu_to_le16(buffer_size);
+
+	rq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys;
+			p < num_pages;
+			p++, addr += SLI_PAGE_SIZE) {
+		rq->page_phys_addr[p].low  =
+				cpu_to_le32(lower_32_bits(addr));
+		rq->page_phys_addr[p].high =
+				cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an RQ_CREATE_V1 command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param qmem DMA memory for the queue.
+ * @param cq_id Associated CQ_ID.
+ * @param buffer_size Buffer size pointed to by each RQE.
+ *
+ * @note This creates a Version 1 message.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_cmd_rq_create_v1(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *qmem, u16 cq_id,
+		     u16 buffer_size)
+{
+	struct sli4_rqst_rq_create_v1_s *rq = NULL;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+
+	rq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(rq_create_v1), NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	rq->hdr.opcode = SLI4_OPC_RQ_CREATE;
+	rq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	rq->hdr.request_length = CFG_RQST_PYLD_LEN(rq_create_v1);
+	rq->hdr.dw3_version = cpu_to_le32(CMD_V1);
+
+	/* Disable "no buffer warnings" to avoid Lancer bug */
+	rq->dim_dfd_dnb |= SLI4_RQ_CREATE_V1_DNB;
+
+	/* valid values for number of pages: 1-8 (sec 4.5.6) */
+	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	rq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_RQ_CREATE_V1_MAX_PAGES) {
+		efc_log_info(sli4, "num_pages %d not valid, max %d\n",
+			num_pages, SLI4_RQ_CREATE_V1_MAX_PAGES);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * RQE count is the total number of entries (note not lg2(# entries))
+	 */
+	rq->rqe_count = cpu_to_le16(qmem->size / SLI4_RQE_SIZE);
+
+	rq->rqe_size_byte |= SLI4_RQE_SIZE_8;
+
+	rq->page_size = SLI4_RQ_PAGE_SIZE_4096;
+
+	if (buffer_size < sli4->rq_min_buf_size ||
+	    buffer_size > sli4->rq_max_buf_size) {
+		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
+		       buffer_size,
+				sli4->rq_min_buf_size,
+				sli4->rq_max_buf_size);
+		return EFC_FAIL;
+	}
+	rq->buffer_size = cpu_to_le32(buffer_size);
+
+	rq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys;
+			p < num_pages;
+			p++, addr += SLI_PAGE_SIZE) {
+		rq->page_phys_addr[p].low  =
+					cpu_to_le32(lower_32_bits(addr));
+		rq->page_phys_addr[p].high =
+					cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an RQ_CREATE_V2 command.
+ *
+ * @param sli4 SLI context.
+ * @param num_rqs Number of RQs to create.
+ * @param qs Array of RQ queue objects.
+ * @param base_cq_id CQ_ID of the first associated CQ.
+ * @param header_buffer_size Buffer size pointed to by each header RQE.
+ * @param payload_buffer_size Buffer size pointed to by each payload RQE.
+ * @param dma DMA buffer used for the non-embedded command payload.
+ *
+ * @note This creates a Version 2 message.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+static int
+sli_cmd_rq_create_v2(struct sli4_s *sli4, u32 num_rqs,
+		     struct sli4_queue_s *qs[], u32 base_cq_id,
+		     u32 header_buffer_size,
+		     u32 payload_buffer_size, struct efc_dma_s *dma)
+{
+	struct sli4_rqst_rq_create_v2_s *req = NULL;
+	u32 i, p, offset = 0;
+	u32 payload_size, page_count;
+	uintptr_t addr;
+	u32 num_pages;
+
+	page_count =  sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE) * num_rqs;
+
+	/* Payload length must accommodate both request and response */
+	payload_size = max(CFG_RQST_CMDSZ(rq_create_v2) +
+			   SZ_DMAADDR * page_count,
+			   sizeof(struct sli4_rsp_cmn_create_queue_set_s));
+
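+	/*
+	 * The command is too large to embed in the SLI_CONFIG mailbox, so
+	 * carry it in an externally allocated DMA buffer.
+	 */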
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+			       payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	/* Fill Header fields */
+	req->hdr.opcode    = SLI4_OPC_RQ_CREATE;
+	req->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	req->hdr.dw3_version   = cpu_to_le32(CMD_V2);
+	req->hdr.request_length = CFG_RQST_PYLD_LEN_VAR(rq_create_v2,
+						SZ_DMAADDR * page_count);
+
+	/* Fill Payload fields */
+	req->dim_dfd_dnb  |= SLI4_RQCREATEV2_DNB;
+	num_pages = sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE);
+	req->num_pages	   = cpu_to_le16(num_pages);
+	req->rqe_count     = cpu_to_le16(qs[0]->dma.size / SLI4_RQE_SIZE);
+	req->rqe_size_byte |= SLI4_RQE_SIZE_8;
+	req->page_size     = SLI4_RQ_PAGE_SIZE_4096;
+	req->rq_count      = num_rqs;
+	req->base_cq_id    = cpu_to_le16(base_cq_id);
+	req->hdr_buffer_size     = cpu_to_le16(header_buffer_size);
+	req->payload_buffer_size = cpu_to_le16(payload_buffer_size);
+
+	for (i = 0; i < num_rqs; i++) {
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages;
+		     p++, addr += SLI_PAGE_SIZE) {
+			req->page_phys_addr[offset].low =
+					cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+					cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an RQ_DESTROY command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param rq_id RQ_ID.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_cmd_rq_destroy(struct sli4_s *sli4, void *buf, size_t size,
+		   u16 rq_id)
+{
+	struct sli4_rqst_rq_destroy_s	*rq = NULL;
+
+	rq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(rq_destroy), NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	rq->hdr.opcode = SLI4_OPC_RQ_DESTROY;
+	rq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	rq->hdr.request_length = CFG_RQST_PYLD_LEN(rq_destroy);
+	rq->rq_id = cpu_to_le16(rq_id);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Destroy a queue object.
+ *
+ * @par Description
+ * This destroys the sli4_queue_s object members, including the underlying
+ * DMA memory.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to queue object.
+ *
+ */
+static void
+__sli_queue_destroy(struct sli4_s *sli4, struct sli4_queue_s *q)
+{
+	if (!q->dma.size)
+		return;
+
+	dma_free_coherent(&sli4->pcidev->dev, q->dma.size,
+			  q->dma.virt, q->dma.phys);
+}
+
+/**
+ * @ingroup sli
+ * @brief Initialize a queue object.
+ *
+ * @par Description
+ * This initializes the sli4_queue_s object members, including the underlying
+ * DMA memory.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to queue object.
+ * @param qtype Type of queue to create.
+ * @param size Size of each entry.
+ * @param n_entries Number of entries to allocate.
+ * @param align Starting memory address alignment.
+ *
+ * @note Checks if using the existing DMA memory (if any) is possible. If not,
+ * it frees the existing memory and re-allocates.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+__sli_queue_init(struct sli4_s *sli4, struct sli4_queue_s *q,
+		 u32 qtype, size_t size, u32 n_entries,
+		      u32 align)
+{
+	if (!q->dma.virt || size != q->size ||
+	    n_entries != q->length) {
+		if (q->dma.size)
+			__sli_queue_destroy(sli4, q);
+
+		memset(q, 0, sizeof(struct sli4_queue_s));
+
+		q->dma.size = size * n_entries;
+		q->dma.virt = dma_alloc_coherent(&sli4->pcidev->dev,
+						 q->dma.size, &q->dma.phys,
+						 GFP_DMA);
+		if (!q->dma.virt) {
+			memset(&q->dma, 0, sizeof(struct efc_dma_s));
+			efc_log_err(sli4, "%s allocation failed\n",
+			       SLI_QNAME[qtype]);
+			return -1;
+		}
+
+		memset(q->dma.virt, 0, size * n_entries);
+
+		spin_lock_init(&q->lock);
+
+		q->type = qtype;
+		q->size = size;
+		q->length = n_entries;
+
+		if (q->type == SLI_QTYPE_EQ || q->type == SLI_QTYPE_CQ) {
+			/* For prism, phase will be flipped after
+			 * a sweep through eq and cq
+			 */
+			q->phase = 1;
+		}
+
+		/* Limit to half the queue size per interrupt */
+		q->proc_limit = n_entries / 2;
+
+		switch (q->type) {
+		case SLI_QTYPE_EQ:
+			q->posted_limit = q->length / 2;
+			break;
+		default:
+			q->posted_limit = 64;
+			break;
+		}
+	} else {
+		efc_log_err(sli4, "%s failed\n", __func__);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Allocate a receive queue.
+ *
+ * @par Description
+ * Allocates DMA memory and configures the requested queue type.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object for the header.
+ * @param n_entries Number of entries to allocate.
+ * @param buffer_size buffer size for the queue.
+ * @param cq Associated CQ.
+ * @param is_hdr Used to validate the rq_id and set the type of queue
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_fc_rq_alloc(struct sli4_s *sli4, struct sli4_queue_s *q,
+		u32 n_entries, u32 buffer_size,
+		struct sli4_queue_s *cq, bool is_hdr)
+{
+	if (__sli_queue_init(sli4, q, SLI_QTYPE_RQ, SLI4_RQE_SIZE,
+			     n_entries, SLI_PAGE_SIZE))
+		return EFC_FAIL;
+
+	if (!sli_cmd_rq_create_v1(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  &q->dma, cq->id, buffer_size)) {
+		if (__sli_create_queue(sli4, q)) {
+			efc_log_info(sli4, "Create queue failed %d\n", q->id);
+			goto error;
+		}
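+		/* In an RQ pair, the header RQ is expected to get the
+		 * even RQ_ID and the data RQ the odd RQ_ID.
+		 */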
+		if (is_hdr && q->id & 1) {
+			efc_log_info(sli4, "bad header RQ_ID %d\n", q->id);
+			goto error;
+		} else if (!is_hdr  && (q->id & 1) == 0) {
+			efc_log_info(sli4, "bad data RQ_ID %d\n", q->id);
+			goto error;
+		}
+	} else {
+		goto error;
+	}
+	if (is_hdr)
+		q->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
+	else
+		q->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
+	return EFC_SUCCESS;
+error:
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Allocate a receive queue set.
+ *
+ * @param sli4 SLI context.
+ * @param num_rq_pairs Number of RQ pairs to create.
+ * @param qs Pointers to the queue objects for both header and data.
+ *	The length of this array should be 2 * num_rq_pairs.
+ * @param base_cq_id First associated CQ_ID; assumes CQs base_cq_id through
+ * (base_cq_id + num_rq_pairs - 1) have been allotted.
+ * @param n_entries Number of entries in each RQ.
+ * @param header_buffer_size Buffer size pointed to by each header RQE.
+ * @param payload_buffer_size Buffer size pointed to by each payload RQE.
+ *
+ * @return Returns zero for success and non-zero for failure.
+ */
+int
+sli_fc_rq_set_alloc(struct sli4_s *sli4, u32 num_rq_pairs,
+		    struct sli4_queue_s *qs[], u32 base_cq_id,
+		    u32 n_entries, u32 header_buffer_size,
+		    u32 payload_buffer_size)
+{
+	u32 i;
+	struct efc_dma_s dma = {0};
+	struct sli4_rsp_cmn_create_queue_set_s *rsp = NULL;
+	void __iomem *db_regaddr = NULL;
+	u32 num_rqs = num_rq_pairs * 2;
+
+	for (i = 0; i < num_rqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_RQ,
+				     SLI4_RQE_SIZE, n_entries,
+				     SLI_PAGE_SIZE)) {
+			goto error;
+		}
+	}
+
+	if (sli_cmd_rq_create_v2(sli4, num_rqs, qs, base_cq_id,
+			       header_buffer_size, payload_buffer_size, &dma)) {
+		goto error;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_err(sli4, "bootstrap mailbox write failed RQSet\n");
+		goto error;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_RQ_DB_REG;
+
+	rsp = dma.virt;
+	if (rsp->hdr.status) {
+		efc_log_err(sli4, "bad create RQSet status=%#x addl=%#x\n",
+		       rsp->hdr.status, rsp->hdr.additional_status);
+		goto error;
+	} else {
+		for (i = 0; i < num_rqs; i++) {
+			qs[i]->id = i + le16_to_cpu(rsp->q_id);
+			if ((qs[i]->id & 1) == 0)
+				qs[i]->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
+			else
+				qs[i]->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
+
+			qs[i]->db_regaddr = db_regaddr;
+		}
+	}
+
+	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_rqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+/**
+ * @brief Check the SLI_CONFIG response.
+ *
+ * @par Description
+ * Function checks the SLI_CONFIG response and the payload status.
+ *
+ * @param buf Pointer to SLI_CONFIG response.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+static int
+sli_res_sli_config(struct sli4_s *sli4, void *buf)
+{
+	struct sli4_cmd_sli_config_s *sli_config = buf;
+
+	/* sanity check */
+	if (!buf || sli_config->hdr.command !=
+		    MBX_CMD_SLI_CONFIG) {
+		efc_log_err(sli4, "bad parameter buf=%p cmd=%#x\n", buf,
+		       buf ? sli_config->hdr.command : -1);
+		return EFC_FAIL;
+	}
+
+	if (le16_to_cpu(sli_config->hdr.status))
+		return le16_to_cpu(sli_config->hdr.status);
+
+	if (le32_to_cpu(sli_config->dw1_flags) & SLI4_SLICONF_EMB)
+		return sli_config->payload.embed[4];
+
+	efc_log_info(sli4, "external buffers not supported\n");
+	return EFC_FAIL;
+}
+
+/**
+ * @ingroup sli
+ * @brief Issue the command to create a queue.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to queue object.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+__sli_create_queue(struct sli4_s *sli4, struct sli4_queue_s *q)
+{
+	struct sli4_rsp_cmn_create_queue_s *res_q = NULL;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail %s\n",
+			SLI_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status create %s\n",
+		       SLI_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	res_q = (void *)((u8 *)sli4->bmbx.virt +
+			offsetof(struct sli4_cmd_sli_config_s, payload));
+
+	if (res_q->hdr.status) {
+		efc_log_err(sli4, "bad create %s status=%#x addl=%#x\n",
+		       SLI_QNAME[q->type], res_q->hdr.status,
+			    res_q->hdr.additional_status);
+		return EFC_FAIL;
+	}
+	q->id = le16_to_cpu(res_q->q_id);
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		/* No doorbell information in response for EQs */
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_EQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] +
+					SLI4_EQCQ_DB_REG;
+		break;
+	case SLI_QTYPE_CQ:
+		/* No doorbell information in response for CQs */
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] +
+					SLI4_IF6_CQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] +
+					SLI4_EQCQ_DB_REG;
+		break;
+	case SLI_QTYPE_MQ:
+		/* No doorbell information in response for MQs */
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] +
+					 SLI4_IF6_MQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] +
+					SLI4_MQ_DB_REG;
+		break;
+	case SLI_QTYPE_RQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] +
+					SLI4_IF6_RQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] +
+					 SLI4_RQ_DB_REG;
+		break;
+	case SLI_QTYPE_WQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] +
+					SLI4_IF6_WQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] +
+					SLI4_IO_WQ_DB_REG;
+		break;
+	default:
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Get queue entry size.
+ *
+ * Get queue entry size given queue type.
+ *
+ * @param sli4 SLI context
+ * @param qtype Type for which the entry size is returned.
+ *
+ * @return Returns > 0 on success (queue entry size),
+ * or a negative value on failure.
+ */
+int
+sli_get_queue_entry_size(struct sli4_s *sli4, u32 qtype)
+{
+	u32 size = 0;
+
+	switch (qtype) {
+	case SLI_QTYPE_EQ:
+		size = sizeof(u32);
+		break;
+	case SLI_QTYPE_CQ:
+		size = 16;
+		break;
+	case SLI_QTYPE_MQ:
+		size = 256;
+		break;
+	case SLI_QTYPE_WQ:
+		size = sli4->wqe_size;
+		break;
+	case SLI_QTYPE_RQ:
+		size = SLI4_RQE_SIZE;
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+		return -1;
+	}
+	return size;
+}
+
+/**
+ * @ingroup sli
+ * @brief Allocate a queue.
+ *
+ * @par Description
+ * Allocates DMA memory and configures the requested queue type.
+ *
+ * @param sli4 SLI context.
+ * @param qtype Type of queue to create.
+ * @param q Pointer to the queue object.
+ * @param n_entries Number of entries to allocate.
+ * @param assoc Associated queue
+ * (that is, the EQ for a CQ, the CQ for a MQ, and so on).
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_queue_alloc(struct sli4_s *sli4, u32 qtype,
+		struct sli4_queue_s *q, u32 n_entries,
+		     struct sli4_queue_s *assoc)
+{
+	int size;
+	u32 align = 0;
+
+	/* get queue size */
+	size = sli_get_queue_entry_size(sli4, qtype);
+	if (size < 0)
+		return EFC_FAIL;
+	align = SLI_PAGE_SIZE;
+
+	if (__sli_queue_init(sli4, q, qtype, size, n_entries, align)) {
+		efc_log_err(sli4, "%s allocation failed\n",
+		       SLI_QNAME[qtype]);
+		return EFC_FAIL;
+	}
+
+	switch (qtype) {
+	case SLI_QTYPE_EQ:
+		if (!sli_cmd_common_create_eq(sli4, sli4->bmbx.virt,
+					     SLI4_BMBX_SIZE, &q->dma,
+					assoc ? assoc->id : 0)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+
+		break;
+	case SLI_QTYPE_CQ:
+		if (!sli_cmd_common_create_cq(sli4, sli4->bmbx.virt,
+					     SLI4_BMBX_SIZE, &q->dma,
+						assoc ? assoc->id : 0)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+		break;
+	case SLI_QTYPE_MQ:
+		assoc->u.flag.dword |= SLI4_QUEUE_FLAG_MQ;
+		if (!sli_cmd_common_create_mq_ext(sli4, sli4->bmbx.virt,
+						  SLI4_BMBX_SIZE, &q->dma,
+						  assoc->id)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+
+		break;
+	case SLI_QTYPE_WQ:
+		if (!sli_cmd_wq_create_v1(sli4, sli4->bmbx.virt,
+					 SLI4_BMBX_SIZE, &q->dma,
+					assoc ? assoc->id : 0)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+		goto error;
+	}
+
+	return EFC_SUCCESS;
+error:
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+static int sli_cmd_cq_set_create(struct sli4_s *sli4,
+				 struct sli4_queue_s *qs[], u32 num_cqs,
+				 struct sli4_queue_s *eqs[],
+				 struct efc_dma_s *dma)
+{
+	struct sli4_rqst_cmn_create_cq_set_v0_s  *req = NULL;
+	uintptr_t addr;
+	u32 i, offset = 0,  page_bytes = 0, payload_size;
+	u32 p = 0, page_size = 0, n_cqe = 0, num_pages_cq;
+	u32 dw5_flags = 0;
+	u16 dw6w1_flags = 0;
+
+	n_cqe = qs[0]->dma.size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = 1;
+		break;
+	case 4096:
+		page_size = 2;
+		break;
+	default:
+		return -1;
+	}
+
+	page_bytes = page_size * SLI_PAGE_SIZE;
+	num_pages_cq = sli_page_count(qs[0]->dma.size, page_bytes);
+	payload_size = max(CFG_RQST_CMDSZ(cmn_create_cq_set_v0) +
+			   (SZ_DMAADDR * num_pages_cq * num_cqs),
+			   sizeof(struct sli4_rsp_cmn_create_queue_set_s));
+
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	/* Fill the request structure */
+	req->hdr.opcode = CMN_CREATE_CQ_SET;
+	req->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	req->hdr.dw3_version = CMD_V0;
+	req->hdr.request_length = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_set_v0,
+					SZ_DMAADDR * num_pages_cq * num_cqs);
+	req->page_size = page_size;
+
+	req->num_pages = cpu_to_le16(num_pages_cq);
+	switch (num_pages_cq) {
+	case 1:
+		dw5_flags |= CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= CQ_CNT_VAL(LARGE);
+		dw6w1_flags |= (n_cqe & CREATE_CQSETV0_CQE_COUNT);
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages_cq);
+		return EFC_FAIL;
+	}
+
+	dw5_flags |= CREATE_CQSETV0_EVT;
+	dw5_flags |= CREATE_CQSETV0_VALID;
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_CQSETV0_AUTOVALID;
+
+	dw6w1_flags &= (~CREATE_CQSETV0_ARM);
+
+	req->dw5_flags = cpu_to_le32(dw5_flags);
+	req->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+
+	req->num_cq_req = cpu_to_le16(num_cqs);
+
+	/* Fill page addresses of all the CQs. */
+	for (i = 0; i < num_cqs; i++) {
+		req->eq_id[i] = cpu_to_le16(eqs[i]->id);
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages_cq;
+		     p++, addr += page_bytes) {
+			req->page_phys_addr[offset].low =
+				cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+				cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Allocate a set of completion queues (CQs).
+ *
+ * @param sli4 SLI context.
+ * @param qs Pointers to the CQ queue objects.
+ * @param num_cqs Number of CQs to create.
+ * @param n_entries Number of entries to allocate per CQ.
+ * @param eqs Associated event queues.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_cq_alloc_set(struct sli4_s *sli4, struct sli4_queue_s *qs[],
+		 u32 num_cqs, u32 n_entries, struct sli4_queue_s *eqs[])
+{
+	u32 i;
+	struct efc_dma_s dma = {0};
+	struct sli4_rsp_cmn_create_queue_set_s *res = NULL;
+	void __iomem *db_regaddr = NULL;
+
+	/* Align the queue DMA memory */
+	for (i = 0; i < num_cqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_CQ,
+				     SLI4_CQE_BYTES,
+					  n_entries, SLI_PAGE_SIZE)) {
+			efc_log_err(sli4, "Queue init failed.\n");
+			goto error;
+		}
+	}
+
+	if (sli_cmd_cq_set_create(sli4, qs, num_cqs, eqs, &dma))
+		goto error;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail CQSet\n");
+		goto error;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_EQCQ_DB_REG;
+
+	res = dma.virt;
+	if (res->hdr.status) {
+		efc_log_err(sli4, "bad create CQSet status=%#x addl=%#x\n",
+		       res->hdr.status, res->hdr.additional_status);
+		goto error;
+	} else {
+		/* Check if we got all requested CQs. */
+		if (le16_to_cpu(res->num_q_allocated) != num_cqs) {
+			efc_log_crit(sli4, "Requested CQ count doesn't match.\n");
+			goto error;
+		}
+		/* Fill the resp cq ids. */
+		for (i = 0; i < num_cqs; i++) {
+			qs[i]->id = le16_to_cpu(res->q_id) + i;
+			qs[i]->db_regaddr = db_regaddr;
+		}
+	}
+
+	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_cqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+/**
+ * @ingroup sli
+ * @brief Free a queue.
+ *
+ * @par Description
+ * Frees DMA memory and de-registers the requested queue.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object.
+ * @param destroy_queues Non-zero if the mailbox commands
+ * should be sent to destroy the queues.
+ * @param free_memory Non-zero if the DMA memory associated
+ * with the queue should be freed.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_queue_free(struct sli4_s *sli4, struct sli4_queue_s *q,
+	       u32 destroy_queues, u32 free_memory)
+{
+	int rc = EFC_SUCCESS;
+
+	if (!q) {
+		efc_log_err(sli4, "bad parameter sli4=%p q=%p\n", sli4, q);
+		return EFC_FAIL;
+	}
+
+	if (destroy_queues) {
+		switch (q->type) {
+		case SLI_QTYPE_EQ:
+			rc = sli_cmd_common_destroy_eq(sli4, sli4->bmbx.virt,
+						       SLI4_BMBX_SIZE,	q->id);
+			break;
+		case SLI_QTYPE_CQ:
+			rc = sli_cmd_common_destroy_cq(sli4, sli4->bmbx.virt,
+						       SLI4_BMBX_SIZE,	q->id);
+			break;
+		case SLI_QTYPE_MQ:
+			rc = sli_cmd_common_destroy_mq(sli4, sli4->bmbx.virt,
+						       SLI4_BMBX_SIZE,	q->id);
+			break;
+		case SLI_QTYPE_WQ:
+			rc = sli_cmd_wq_destroy(sli4, sli4->bmbx.virt,
+						SLI4_BMBX_SIZE,	q->id);
+			break;
+		case SLI_QTYPE_RQ:
+			rc = sli_cmd_rq_destroy(sli4, sli4->bmbx.virt,
+						SLI4_BMBX_SIZE,	q->id);
+			break;
+		default:
+			efc_log_info(sli4, "bad queue type %d\n",
+				q->type);
+			return EFC_FAIL;
+		}
+
+		if (rc) {
+			struct sli4_rsp_hdr_s	*res = NULL;
+
+			if (sli_bmbx_command(sli4)) {
+				efc_log_crit(sli4, "bootstrap mailbox fail destroy %s\n",
+					SLI_QNAME[q->type]);
+			} else if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+				efc_log_err(sli4, "bad status destroy %s\n",
+				       SLI_QNAME[q->type]);
+			} else {
+				res = (void *)((u8 *)sli4->bmbx.virt +
+					offsetof(struct sli4_cmd_sli_config_s,
+						 payload));
+
+				if (res->status) {
+					efc_log_err(sli4, "destroy %s st=%#x addl=%#x\n",
+					       SLI_QNAME[q->type],
+						res->status,
+						res->additional_status);
+				} else {
+					rc = EFC_SUCCESS;
+				}
+			}
+		}
+	}
+
+	if (free_memory)
+		__sli_queue_destroy(sli4, q);
+
+	return rc;
+}
+
+/**
+ * @ingroup sli
+ * @brief Arm an EQ.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to queue object.
+ * @param arm If TRUE, arm the EQ.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_queue_eq_arm(struct sli4_s *sli4, struct sli4_queue_s *q, bool arm)
+{
+	u32 val = 0;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
+	else
+		val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);
+
+	writel(val, q->db_regaddr);
+	q->n_posted = 0;
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Arm a queue.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to queue object.
+ * @param arm If TRUE, arm the queue.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_queue_arm(struct sli4_s *sli4, struct sli4_queue_s *q, bool arm)
+{
+	u32 val = 0;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
+		else
+			val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	case SLI_QTYPE_CQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = SLI4_IF6_CQ_DOORBELL(q->n_posted, q->id, a);
+		else
+			val = SLI4_CQ_DOORBELL(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	default:
+		efc_log_info(sli4, "should only be used for EQ/CQ, not %s\n",
+			SLI_QNAME[q->type]);
+	}
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a WQ entry to the queue object.
+ *
+ * Note: Assumes the q->lock will be locked and released by the caller.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object.
+ * @param entry Pointer to the entry contents.
+ *
+ * @return Returns queue index on success, or negative error value otherwise.
+ */
+int
+sli_wq_write(struct sli4_s *sli4, struct sli4_queue_s *q,
+	     u8 *entry)
+{
+	u8		*qe = q->dma.virt;
+	u32	qindex;
+	u32	val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	if (sli4->perf_wq_id_association)
+		sli_set_wq_id_association(entry, q->id);
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		/* non-dpp write for iftype = 6 */
+		val = SLI4_WQ_DOORBELL(q->n_posted, 0, q->id);
+	else
+		val = SLI4_WQ_DOORBELL(q->n_posted, q->index, q->id);
+
+	writel(val, q->db_regaddr);
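+	/* Advance the index; masking assumes q->length is a power of two */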
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+
+	return qindex;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a MQ entry to the queue object.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object.
+ * @param entry Pointer to the entry contents.
+ *
+ * @return Returns queue index on success, or negative error value otherwise.
+ */
+int
+sli_mq_write(struct sli4_s *sli4, struct sli4_queue_s *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&q->lock, flags);
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	val = SLI4_MQ_DOORBELL(q->n_posted, q->id);
+	writel(val, q->db_regaddr);
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return qindex;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a RQ entry to the queue object.
+ *
+ * Note: Assumes the q->lock will be locked and released by the caller.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object.
+ * @param entry Pointer to the entry contents.
+ *
+ * @return Returns queue index on success, or negative error value otherwise.
+ */
+int
+sli_rq_write(struct sli4_s *sli4, struct sli4_queue_s *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex, n_posted;
+	u32 val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	n_posted = q->n_posted;
+
+	/*
+	 * In RQ-pair, an RQ either contains the FC header
+	 * (i.e. is_hdr == TRUE) or the payload.
+	 *
+	 * Don't ring doorbell for payload RQ
+	 */
+	if (!(q->u.flag.dword & SLI4_QUEUE_FLAG_HDR))
+		goto skip;
+
+	/*
+	 * Some RQ cannot be incremented one entry at a time.
+	 * Instead, the driver collects a number of entries
+	 * and updates the RQ in batches.
+	 */
+	if (q->u.flag.dword & SLI4_QUEUE_FLAG_RQBATCH) {
+		if (((q->index + q->n_posted) %
+		    SLI4_QUEUE_RQ_BATCH)) {
+			goto skip;
+		}
+		n_posted = SLI4_QUEUE_RQ_BATCH;
+	}
+
+	val = SLI4_RQ_DOORBELL(n_posted, q->id);
+	writel(val, q->db_regaddr);
+skip:
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+
+	return qindex;
+}
+
+/**
+ * @ingroup sli
+ * @brief Read an EQ entry from the queue object.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object.
+ * @param entry Destination pointer for the queue entry contents.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_eq_read(struct sli4_s *sli4,
+	    struct sli4_queue_s *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long flags = 0;
+	u8 clear = false, valid = false;
+	u16 wflags = 0;
+
+	clear = sli4->if_type != SLI4_INTF_IF_TYPE_6;
+
+	qindex = &q->index;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if eqe is valid */
+	wflags = le16_to_cpu(((struct sli4_eqe_s *)qe)->dw0w0_flags);
+	valid = ((wflags & SLI4_EQE_VALID) == q->phase);
+	if (!valid) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	if (valid && clear) {
+		wflags &= ~SLI4_EQE_VALID;
+		((struct sli4_eqe_s *)qe)->dw0w0_flags =
+						cpu_to_le16(wflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Read an CQ entry from the queue object.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object.
+ * @param entry Destination pointer for the queue entry contents.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_cq_read(struct sli4_s *sli4,
+	    struct sli4_queue_s *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long	flags = 0;
+	u8 clear = false;
+	u32 dwflags = 0;
+	bool valid = false, valid_bit_set = false;
+
+	clear = sli4->if_type != SLI4_INTF_IF_TYPE_6;
+
+	qindex = &q->index;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if cqe is valid */
+	dwflags = le32_to_cpu(((struct sli4_mcqe_s *)qe)->dw3_flags);
+	valid_bit_set = (dwflags & SLI4_MCQE_VALID) != 0;
+
+	valid = (valid_bit_set == q->phase);
+	if (!valid) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return -1;
+	}
+
+	if (valid && clear) {
+		dwflags &= ~SLI4_MCQE_VALID;
+		((struct sli4_mcqe_s *)qe)->dw3_flags =
+					cpu_to_le32(dwflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Read an MQ entry from the queue object.
+ *
+ * @param sli4 SLI context.
+ * @param q Pointer to the queue object.
+ * @param entry Destination pointer for the queue entry contents.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_mq_read(struct sli4_s *sli4,
+	    struct sli4_queue_s *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long flags = 0;
+
+	qindex = &q->u.r_idx;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if mqe is valid */
+	if (q->index == q->u.r_idx) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return -1;
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_index(struct sli4_s *sli4, struct sli4_queue_s *q)
+{
+	if (q)
+		return q->index;
+	else
+		return -1;
+}
+
+int
+sli_queue_poke(struct sli4_s *sli4, struct sli4_queue_s *q,
+	       u32 index, u8 *entry)
+{
+	int rc;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&q->lock, flags);
+	rc = _sli_queue_poke(sli4, q, index, entry);
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return rc;
+}
+
+int
+_sli_queue_poke(struct sli4_s *sli4, struct sli4_queue_s *q,
+		u32 index, u8 *entry)
+{
+	int rc = 0;
+	u8 *qe = q->dma.virt;
+
+	if (index >= q->length)
+		return -1;
+
+	qe += index * q->size;
+
+	if (entry)
+		memcpy(qe, entry, q->size);
+
+	return rc;
+}
+
+/**
+ * @ingroup sli
+ * @brief Parse an EQ entry to retrieve the CQ_ID for this event.
+ *
+ * @param sli4 SLI context.
+ * @param buf Pointer to the EQ entry.
+ * @param cq_id CQ_ID for this entry (only valid on success).
+ *
+ * @return
+ * - 0 if success.
+ * - < 0 if error.
+ * - > 0 if firmware detects EQ overflow.
+ */
+int
+sli_eq_parse(struct sli4_s *sli4, u8 *buf, u16 *cq_id)
+{
+	struct sli4_eqe_s *eqe = (void *)buf;
+	int rc = EFC_SUCCESS;
+	u16 flags = 0;
+	u16 majorcode;
+	u16 minorcode;
+
+	if (!buf || !cq_id) {
+		efc_log_err(sli4, "bad parameters sli4=%p buf=%p cq_id=%p\n",
+		       sli4, buf, cq_id);
+		return -1;
+	}
+
+	flags = le16_to_cpu(eqe->dw0w0_flags);
+	majorcode = (flags & SLI4_EQE_MJCODE) >> 1;
+	minorcode = (flags & SLI4_EQE_MNCODE) >> 4;
+	switch (majorcode) {
+	case SLI4_MAJOR_CODE_STANDARD:
+		*cq_id = le16_to_cpu(eqe->resource_id);
+		break;
+	case SLI4_MAJOR_CODE_SENTINEL:
+		efc_log_info(sli4, "sentinel EQE\n");
+		rc = EFC_FAIL;
+		break;
+	default:
+		efc_log_info(sli4, "Unsupported EQE: major %x minor %x\n",
+			majorcode, minorcode);
+		rc = -1;
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup sli
+ * @brief Parse a CQ entry to retrieve the event type and the associated queue.
+ *
+ * @param sli4 SLI context.
+ * @param cq CQ to process.
+ * @param cqe Pointer to the CQ entry.
+ * @param etype CQ event type.
+ * @param q_id Queue ID associated with this completion message
+ * (that is, MQ_ID, RQ_ID, and so on).
+ *
+ * @return
+ * - 0 if call completed correctly and CQE status is SUCCESS.
+ * - -1 if call failed (no CQE status).
+ * - Other value if call completed correctly and return value is a
+ *   CQE status value.
+ */
+int
+sli_cq_parse(struct sli4_s *sli4, struct sli4_queue_s *cq, u8 *cqe,
+	     enum sli4_qentry_e *etype, u16 *q_id)
+{
+	int rc = EFC_SUCCESS;
+
+	if (!cq || !cqe || !etype) {
+		efc_log_err(sli4, "bad params sli4=%p cq=%p cqe=%p etype=%p q_id=%p\n",
+		       sli4, cq, cqe, etype, q_id);
+		return -1;
+	}
+
+	if (cq->u.flag.dword & SLI4_QUEUE_FLAG_MQ) {
+		struct sli4_mcqe_s	*mcqe = (void *)cqe;
+
+		if (le32_to_cpu(mcqe->dw3_flags) & SLI4_MCQE_AE) {
+			*etype = SLI_QENTRY_ASYNC;
+		} else {
+			*etype = SLI_QENTRY_MQ;
+			rc = sli_cqe_mq(sli4, mcqe);
+		}
+		*q_id = -1;
+	} else {
+		rc = sli_fc_cqe_parse(sli4, cq, cqe, etype, q_id);
+	}
+
+	return rc;
+}
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 05/32] elx: libefc_sli: Populate and post different WQEs
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (3 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 06/32] elx: libefc_sli: bmbx routines and SLI config commands James Smart
                   ` (27 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds service routines that populate the various WQE formats,
along with APIs to issue iread, iwrite, treceive, tsend, and other work
queue entries.
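
As an aid to review, below is a minimal, illustrative sketch of how a
caller might consume one of these populate routines. The wrapper
function name, the plain 64-byte WQE buffer, and the parameter sources
are assumptions made for the example only; they are not part of this
patch.

	/* Illustrative sketch only -- not part of this patch. */
	static int example_abort_by_xri(struct sli4_s *sli4, u16 xri,
					u16 reqtag, u16 cq_id)
	{
		u8 wqe[64];	/* regular (non-extended) WQE size assumed */
		int rc;

		/*
		 * Populate an ABORT_WQE that aborts the exchange named by
		 * its XRI.  send_abts is false, so the port suppresses the
		 * outgoing ABTS; mask must be 0 for an exact XRI match.
		 */
		rc = sli_abort_wqe(sli4, wqe, sizeof(wqe), SLI_ABORT_XRI,
				   false, xri, 0, reqtag, cq_id);
		if (rc)
			return rc;

		/*
		 * The populated WQE would then be copied into the next free
		 * work queue entry and the WQ doorbell rung, using the queue
		 * routines added earlier in this series.
		 */
		return 0;
	}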

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc_sli/sli4.c | 2102 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h |    2 +
 2 files changed, 2104 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 6b62b7d8b5a4..9e57fa850da6 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -2179,3 +2179,2105 @@ sli_cq_parse(struct sli4_s *sli4, struct sli4_queue_s *cq, u8 *cqe,
 
 	return rc;
 }
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an ABORT_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param type Abort type, such as XRI, abort tag, and request tag.
+ * @param send_abts Boolean to cause the hardware to automatically generate an
+ * ABTS.
+ * @param ids ID of IOs to abort.
+ * @param mask Mask applied to the ID values to abort.
+ * @param tag Tag value associated with this abort.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param dnrx When set to 1, this field indicates that the SLI Port must not
+ * return the associated XRI to the SLI Port's optimized write XRI pool.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_abort_wqe(struct sli4_s *sli4, void *buf, size_t size,
+	      enum sli4_abort_type_e type, bool send_abts, u32 ids,
+	      u32 mask, u16 tag, u16 cq_id)
+{
+	struct sli4_abort_wqe_s	*abort = buf;
+
+	memset(buf, 0, size);
+
+	switch (type) {
+	case SLI_ABORT_XRI:
+		abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
+		if (mask) {
+			efc_log_warn(sli4, "%#x aborting XRI %#x warning non-zero mask",
+				mask, ids);
+			mask = 0;
+		}
+		break;
+	case SLI_ABORT_ABORT_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_ABORT_TAG;
+		break;
+	case SLI_ABORT_REQUEST_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_REQUEST_TAG;
+		break;
+	default:
+		efc_log_info(sli4, "unsupported type %#x\n", type);
+		return EFC_FAIL;
+	}
+
+	abort->ia_ir_byte |= send_abts ? 0 : 1;
+
+	/* Suppress ABTS retries */
+	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
+
+	abort->t_mask = cpu_to_le32(mask);
+	abort->t_tag  = cpu_to_le32(ids);
+	abort->command = SLI4_WQE_ABORT;
+	abort->request_tag = cpu_to_le16(tag);
+
+	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
+
+	abort->cq_id = cpu_to_le16(cq_id);
+	abort->cmdtype_wqec_byte |= SLI4_CMD_ABORT_WQE;
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an ELS_REQUEST64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the ELS request.
+ * @param req_type ELS request type.
+ * @param req_len Length of ELS request in bytes.
+ * @param max_rsp_len Max length of ELS response in bytes.
+ * @param timeout Time, in seconds, before an IO times out. Zero means 2 *
+ *  R_A_TOV.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rnode Destination of ELS request (that is, the remote node).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_els_request64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		      struct efc_dma_s *sgl,
+		      u8 req_type, u32 req_len, u32 max_rsp_len,
+		      u8 timeout, u16 xri, u16 tag,
+		      u16 cq_id, u16 rnodeindicator, u16 sportindicator,
+		      bool hlm, bool rnodeattached, u32 rnode_fcid,
+		      u32 sport_fcid)
+{
+	struct sli4_els_request64_wqe_s	*els = buf;
+	struct sli4_sge_s *sge = sgl->virt;
+	bool is_fabric = false;
+	struct sli4_bde_s *bptr;
+
+	memset(buf, 0, size);
+
+	bptr = &els->els_request_payload;
+	if (sli4->sgl_pre_registered) {
+		els->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_REQ_WQE_XBL;
+
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    ((2 * sizeof(struct sli4_sge_s)) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	els->els_request_payload_length = cpu_to_le32(req_len);
+	els->max_response_payload_length = cpu_to_le32(max_rsp_len);
+
+	els->xri_tag = cpu_to_le16(xri);
+	els->timer = timeout;
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_REQUEST64;
+
+	els->request_tag = cpu_to_le16(tag);
+
+	if (hlm) {
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_HLM;
+		els->remote_id_dword = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_IOD;
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_QOSD;
+
+	/* figure out the ELS_ID value from the request buffer */
+
+	switch (req_type) {
+	case ELS_LOGO:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_LOGO << SLI4_REQ_WQE_ELSID_SHFT;
+		if (rnodeattached) {
+			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
+					 SLI4_REQ_WQE_CT_SHFT);
+			els->context_tag = cpu_to_le16(rnodeindicator);
+		} else {
+			els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag =
+				cpu_to_le16(sportindicator);
+		}
+		if (rnode_fcid == FC_FID_FLOGI)
+			is_fabric = true;
+		break;
+	case ELS_FDISC:
+		if (rnode_fcid == FC_FID_FLOGI)
+			is_fabric = true;
+		if (sport_fcid == 0) {
+			els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_FDISC << SLI4_REQ_WQE_ELSID_SHFT;
+			is_fabric = true;
+		} else {
+			els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		}
+		els->ct_byte |= (SLI4_GENERIC_CONTEXT_VPI <<
+				 SLI4_REQ_WQE_CT_SHFT);
+		els->context_tag = cpu_to_le16(sportindicator);
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		break;
+	case ELS_FLOGI:
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		/*
+		 * Set SP here since a REG_VPI has not been done yet;
+		 * this may need to be skipped once the VFI/VPI
+		 * registrations have completed.
+		 *
+		 * Use the FC_ID of the SPORT if it has been allocated,
+		 * otherwise use an S_ID of zero.
+		 */
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		if (sport_fcid != U32_MAX)
+			els->sid_sp_dword |= cpu_to_le32(sport_fcid);
+		break;
+	case ELS_PLOGI:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_PLOGI << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		break;
+	case ELS_SCR:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		break;
+	default:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		if (rnodeattached) {
+			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
+					 SLI4_REQ_WQE_CT_SHFT);
+			els->context_tag = cpu_to_le16(sportindicator);
+		} else {
+			els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag =
+				cpu_to_le16(sportindicator);
+		}
+		break;
+	}
+
+	if (is_fabric)
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_FABRIC;
+	else
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_NON_FABRIC;
+
+	els->cq_id = cpu_to_le16(cq_id);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) !=
+					SLI4_GENERIC_CONTEXT_RPI)
+		els->remote_id_dword = cpu_to_le32(rnode_fcid);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) ==
+					SLI4_GENERIC_CONTEXT_VPI)
+		els->temporary_rpi = cpu_to_le16(rnodeindicator);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an FCP_ICMND64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the scatter gather list.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rpi remote node indicator (RPI)
+ * @param rnode Destination request (that is, the remote node).
+ * @param timeout Time, in seconds, before an IO times out. Zero means no
+ * timeout.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fcp_icmnd64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    struct efc_dma_s *sgl, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, bool hlm,
+		    u32 rnode_fcid, u8 timeout)
+{
+	struct sli4_fcp_icmnd64_wqe_s *icmnd = buf;
+	struct sli4_sge_s *sge = NULL;
+	struct sli4_bde_s *bptr;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+	bptr = &icmnd->bde;
+	if (sli4->sgl_pre_registered) {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_ICMD_WQE_XBL;
+
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	icmnd->payload_offset_length = (sge[0].buffer_length +
+					 sge[1].buffer_length);
+	icmnd->xri_tag = cpu_to_le16(xri);
+	icmnd->context_tag = cpu_to_le16(rpi);
+	icmnd->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	icmnd->class_pu_byte |= 2 << SLI4_ICMD_WQE_PU_SHFT;
+	icmnd->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	icmnd->command = SLI4_WQE_FCP_ICMND64;
+	icmnd->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_ICMD_WQE_CT_SHFT;
+
+	icmnd->abort_tag = cpu_to_le32(xri);
+
+	icmnd->request_tag = cpu_to_le16(tag);
+	icmnd->len_loc1_byte |= SLI4_ICMD_WQE_LEN_LOC_BIT1;
+	icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_LEN_LOC_BIT2;
+	if (hlm) {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_HLM;
+		icmnd->remote_n_port_id_dword =
+				cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+	icmnd->cmd_type_byte |= SLI4_CMD_FCP_ICMND64_WQE;
+	icmnd->cq_id = cpu_to_le16(cq_id);
+
+	return  0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an FCP_IREAD64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the scatter gather list.
+ * @param first_data_sge Index of first data sge (used if perf hints are
+ * enabled)
+ * @param xfer_len Data transfer length.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rpi remote node indicator (RPI)
+ * @param rnode Destination request (i.e. remote node).
+ * @param dif T10 DIF operation, or 0 to disable.
+ * @param bs T10 DIF block size, or 0 if DIF is disabled.
+ * @param timeout Time, in seconds, before an IO times out. Zero means no
+ * timeout.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fcp_iread64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    struct efc_dma_s *sgl, u32 first_data_sge,
+		    u32 xfer_len, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, bool hlm, u32 rnode_fcid,
+		    u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iread64_wqe_s *iread = buf;
+	struct sli4_sge_s *sge = NULL;
+	struct sli4_bde_s *bptr;
+	u32 sge_flags = 0;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+	bptr = &iread->bde;
+	if (sli4->sgl_pre_registered) {
+		iread->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IR_WQE_XBL;
+
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  = sge[0].buffer_address_low;
+		bptr->u.blp.high = sge[0].buffer_address_high;
+	} else {
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		iread->fcp_cmd_buffer_length =
+					cpu_to_le16(sge[0].buffer_length);
+
+		sge_flags = sge[1].dw2_flags;
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = sge_flags;
+	}
+
+	iread->payload_offset_length = (sge[0].buffer_length +
+					 sge[1].buffer_length);
+	iread->total_transfer_length = cpu_to_le32(xfer_len);
+
+	iread->xri_tag = cpu_to_le16(xri);
+	iread->context_tag = cpu_to_le16(rpi);
+
+	iread->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	iread->class_pu_byte |= 2 << SLI4_IR_WQE_PU_SHFT;
+	iread->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iread->command = SLI4_WQE_FCP_IREAD64;
+	iread->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_IR_WQE_CT_SHFT;
+	iread->dif_ct_bs_byte |= dif;
+	iread->dif_ct_bs_byte  |= bs << SLI4_IR_WQE_BS_SHFT;
+
+	iread->abort_tag = cpu_to_le32(xri);
+
+	iread->request_tag = cpu_to_le16(tag);
+	iread->len_loc1_byte |= SLI4_IR_WQE_LEN_LOC_BIT1;
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_LEN_LOC_BIT2;
+	if (hlm) {
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_HLM;
+		iread->remote_n_port_id_dword =
+				cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_IOD;
+	iread->cmd_type_byte |= SLI4_CMD_FCP_IREAD64_WQE;
+	iread->cq_id = cpu_to_le16(cq_id);
+
+	if (sli4->perf_hint) {
+		bptr = &iread->first_data_bde;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			  (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	return  0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an FCP_IWRITE64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the scatter gather list.
+ * @param first_data_sge Index of first data sge (used if perf hints are
+ * enabled)
+ * @param xfer_len Data transfer length.
+ * @param first_burst The number of first burst bytes
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rpi remote node indicator (RPI)
+ * @param rnode Destination request (i.e. remote node)
+ * @param dif T10 DIF operation, or 0 to disable
+ * @param bs T10 DIF block size, or 0 if DIF is disabled
+ * @param timeout Time, in seconds, before an IO times out. Zero means no
+ * timeout.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fcp_iwrite64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *sgl,
+		     u32 first_data_sge, u32 xfer_len,
+		     u32 first_burst, u16 xri, u16 tag,
+		     u16 cq_id, u32 rpi,
+		     bool hlm, u32 rnode_fcid,
+		     u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iwrite64_wqe_s *iwrite = buf;
+	struct sli4_sge_s *sge = NULL;
+	struct sli4_bde_s *bptr;
+	u32 sge_flags = 0, min = 0;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+	bptr = &iwrite->bde;
+	if (sli4->sgl_pre_registered) {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IWR_WQE_XBL;
+
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				     (le32_to_cpu(sge[0].buffer_length) &
+				      SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  =
+			cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+			cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		iwrite->fcp_cmd_buffer_length =
+					cpu_to_le16(sge[0].buffer_length);
+		sge_flags = sge[1].dw2_flags;
+		sge_flags &= ~SLI4_SGE_TYPE_MASK;
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = sge_flags;
+	}
+
+	iwrite->payload_offset_length = (sge[0].buffer_length +
+					 sge[1].buffer_length);
+	iwrite->total_transfer_length = cpu_to_le16(xfer_len);
+	min = (xfer_len < first_burst) ? xfer_len : first_burst;
+	iwrite->initial_transfer_length = cpu_to_le16(min);
+
+	iwrite->xri_tag = cpu_to_le16(xri);
+	iwrite->context_tag = cpu_to_le16(rpi);
+
+	iwrite->timer = timeout;
+	/* WQE word 4 contains read transfer length */
+	iwrite->class_pu_byte |= 2 << SLI4_IWR_WQE_PU_SHFT;
+	iwrite->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iwrite->command = SLI4_WQE_FCP_IWRITE64;
+	iwrite->dif_ct_bs_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_IWR_WQE_CT_SHFT;
+	iwrite->dif_ct_bs_byte |= dif;
+	iwrite->dif_ct_bs_byte |= bs << SLI4_IWR_WQE_BS_SHFT;
+
+	iwrite->abort_tag = cpu_to_le32(xri);
+
+	iwrite->request_tag = cpu_to_le16(tag);
+	iwrite->len_loc1_byte |= SLI4_IWR_WQE_LEN_LOC_BIT1;
+	iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_LEN_LOC_BIT2;
+	if (hlm) {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_HLM;
+		iwrite->remote_n_port_id_dword =
+			cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+	iwrite->cmd_type_byte |= SLI4_CMD_FCP_IWRITE64_WQE;
+	iwrite->cq_id = cpu_to_le16(cq_id);
+
+	if (sli4->perf_hint) {
+		bptr = &iwrite->first_data_bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			 (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	return  0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an FCP_TRECEIVE64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the Scatter-Gather List.
+ * @param first_data_sge Index of first data sge (used if perf hints are
+ * enabled)
+ * @param relative_off Relative offset of the IO (if any).
+ * @param xfer_len Data transfer length.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param xid OX_ID for the exchange.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rpi remote node indicator (RPI)
+ * @param rnode Destination request (i.e. remote node).
+ * @param flags Optional attributes, including:
+ *  - ACTIVE - IO is already active.
+ *  - AUTO RSP - Automatically generate a good FCP_RSP.
+ * @param dif T10 DIF operation, or 0 to disable.
+ * @param bs T10 DIF block size, or 0 if DIF is disabled.
+ * @param csctl value of csctl field.
+ * @param app_id value for VM application header.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fcp_treceive64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		       struct efc_dma_s *sgl,
+		       u32 first_data_sge, u32 relative_off,
+		       u32 xfer_len, u16 xri, u16 tag,
+		       u16 cq_id, u16 xid, u32 rpi, bool hlm,
+		       u32 rnode_fcid, u32 flags, u8 dif,
+		       u8 bs, u8 csctl, u32 app_id)
+{
+	struct sli4_fcp_treceive64_wqe_s *trecv = buf;
+	struct sli4_fcp_128byte_wqe_s *trecv_128 = buf;
+	struct sli4_sge_s *sge = NULL;
+	struct sli4_bde_s *bptr;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+	bptr = &trecv->bde;
+	if (sli4->sgl_pre_registered) {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_TRCV_WQE_XBL;
+
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length)
+					& SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trecv->payload_offset_length = sge[0].buffer_length;
+	} else {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif && xfer_len <= le32_to_cpu(sge[2].buffer_length)) {
+			trecv->qosd_xbl_hlm_iod_dbde_wqes |=
+							SLI4_TRCV_WQE_DBDE;
+			bptr->bde_type_buflen =
+			      cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+					  (le32_to_cpu(sge[2].buffer_length)
+					  & SLI4_BDE_MASK_BUFFER_LEN));
+
+			bptr->u.data.low =
+				sge[2].buffer_address_low;
+			bptr->u.data.high =
+				sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				(sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+			bptr->u.blp.low =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	trecv->relative_offset = cpu_to_le32(relative_off);
+
+	if (flags & SLI4_IO_CONTINUATION)
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_XC;
+
+	trecv->xri_tag = cpu_to_le16(xri);
+
+	trecv->context_tag = cpu_to_le16(rpi);
+
+	/* WQE uses relative offset */
+	trecv->class_ar_pu_byte |= 1 << SLI4_TRCV_WQE_PU_SHFT;
+
+	if (flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		trecv->class_ar_pu_byte |= SLI4_TRCV_WQE_AR;
+
+	trecv->command = SLI4_WQE_FCP_TRECEIVE64;
+	trecv->class_ar_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	trecv->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_TRCV_WQE_CT_SHFT;
+	trecv->dif_ct_bs_byte |= bs << SLI4_TRCV_WQE_BS_SHFT;
+
+	trecv->remote_xid = cpu_to_le16(xid);
+
+	trecv->request_tag = cpu_to_le16(tag);
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_IOD;
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_LEN_LOC_BIT2;
+
+	if (hlm) {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_HLM;
+		trecv->dword5.dword = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	trecv->cmd_type_byte |= SLI4_CMD_FCP_TRECEIVE64_WQE;
+
+	trecv->cq_id = cpu_to_le16(cq_id);
+
+	trecv->fcp_data_receive_length = cpu_to_le32(xfer_len);
+
+	if (sli4->perf_hint) {
+		bptr = &trecv->first_data_bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (csctl & SLI4_MASK_CCP) {
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_CCPE;
+		trecv->ccp = (csctl & SLI4_MASK_CCP);
+	}
+
+	if (app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trecv->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trecv->lloc1_appid |= SLI4_TRCV_WQE_APPID;
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_WQES;
+		trecv_128->dw[31] = cpu_to_le32(app_id);
+	}
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an FCP_CONT_TRECEIVE64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the Scatter-Gather List.
+ * @param first_data_sge Index of first data sge (used if perf hints are
+ * enabled)
+ * @param relative_off Relative offset of the IO (if any).
+ * @param xfer_len Data transfer length.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param xid OX_ID for the exchange.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rpi remote node indicator (RPI)
+ * @param rnode Destination request (i.e. remote node).
+ * @param flags Optional attributes, including:
+ *  - ACTIVE - IO is already active.
+ *  - AUTO RSP - Automatically generate a good FCP_RSP.
+ * @param dif T10 DIF operation, or 0 to disable.
+ * @param bs T10 DIF block size, or 0 if DIF is disabled.
+ * @param csctl value of csctl field.
+ * @param app_id value for VM application header.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fcp_cont_treceive64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+			    struct efc_dma_s *sgl, u32 first_data_sge,
+			    u32 relative_off, u32 xfer_len,
+			    u16 xri, u16 sec_xri, u16 tag,
+			    u16 cq_id, u16 xid, u32 rpi,
+			    bool hlm, u32 rnode_fcid, u32 flags,
+			    u8 dif, u8 bs, u8 csctl,
+			    u32 app_id)
+{
+	int rc;
+
+	rc = sli_fcp_treceive64_wqe(sli4, buf, size, sgl, first_data_sge,
+				    relative_off, xfer_len, xri, tag, cq_id,
+				    xid, rpi, hlm, rnode_fcid, flags, dif, bs,
+				    csctl, app_id);
+	if (rc == 0) {
+		struct sli4_fcp_treceive64_wqe_s *trecv = buf;
+
+		trecv->command = SLI4_WQE_FCP_CONT_TRECEIVE64;
+		trecv->dword5.sec_xri_tag = cpu_to_le16(sec_xri);
+	}
+	return rc;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an FCP_TRSP64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the Scatter-Gather List.
+ * @param rsp_len Response data length.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param xid OX_ID for the exchange.
+ * @param rpi remote node indicator (RPI)
+ * @param rnode Destination request (i.e. remote node).
+ * @param flags Optional attributes, including:
+ *  - ACTIVE - IO is already active
+ *  - AUTO RSP - Automatically generate a good FCP_RSP.
+ * @param csctl value of csctl field.
+ * @param port_owned 0/1 to indicate if the XRI is port owned (used to set
+ * XBL=0).
+ * @param app_id value for VM application header.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fcp_trsp64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		   struct efc_dma_s *sgl,
+		   u32 rsp_len, u16 xri, u16 tag, u16 cq_id,
+		   u16 xid, u32 rpi, bool hlm, u32 rnode_fcid,
+		   u32 flags, u8 csctl, u8 port_owned,
+		   u32 app_id)
+{
+	struct sli4_fcp_trsp64_wqe_s *trsp = buf;
+	struct sli4_fcp_128byte_wqe_s *trsp_128 = buf;
+	struct sli4_bde_s *bptr;
+
+	memset(buf, 0, size);
+
+	if (flags & SLI4_IO_AUTO_GOOD_RESPONSE) {
+		trsp->class_ag_byte |= SLI4_TRSP_WQE_AG;
+	} else {
+		struct sli4_sge_s	*sge = sgl->virt;
+
+		if (sli4->sgl_pre_registered || port_owned)
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_DBDE;
+		else
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_XBL;
+		bptr = &trsp->bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				     (le32_to_cpu(sge[0].buffer_length) &
+				      SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trsp->fcp_response_length = cpu_to_le32(rsp_len);
+	}
+
+	if (flags & SLI4_IO_CONTINUATION)
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_XC;
+
+	if (hlm) {
+		trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_HLM;
+		trsp->dword5 = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	trsp->xri_tag = cpu_to_le16(xri);
+	trsp->rpi = cpu_to_le16(rpi);
+
+	trsp->command = SLI4_WQE_FCP_TRSP64;
+	trsp->class_ag_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	trsp->remote_xid = cpu_to_le16(xid);
+	trsp->request_tag = cpu_to_le16(tag);
+	if (flags & SLI4_IO_DNRX)
+		trsp->ct_dnrx_byte |= SLI4_TRSP_WQE_DNRX;
+	else
+		trsp->ct_dnrx_byte &= ~SLI4_TRSP_WQE_DNRX;
+
+	trsp->lloc1_appid |= 0x1;
+	trsp->cq_id = cpu_to_le16(cq_id);
+	trsp->cmd_type_byte = SLI4_CMD_FCP_TRSP64_WQE;
+
+	/* The upper 7 bits of csctl are the priority */
+	if (csctl & SLI4_MASK_CCP) {
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_CCPE;
+		trsp->ccp = (csctl & SLI4_MASK_CCP);
+	}
+
+	if (app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trsp->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trsp->lloc1_appid |= SLI4_TRSP_WQE_APPID;
+		trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_WQES;
+		trsp_128->dw[31] = cpu_to_le32(app_id);
+	}
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an FCP_TSEND64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the scatter gather list.
+ * @param first_data_sge Index of first data sge (used if perf hints are
+ * enabled)
+ * @param relative_off Relative offset of the IO (if any).
+ * @param xfer_len Data transfer length.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param xid OX_ID for the exchange.
+ * @param rpi remote node indicator (RPI)
+ * @param rnode Destination request (i.e. remote node).
+ * @param flags Optional attributes, including:
+ *  - ACTIVE - IO is already active.
+ *  - AUTO RSP - Automatically generate a good FCP_RSP.
+ * @param dif T10 DIF operation, or 0 to disable.
+ * @param bs T10 DIF block size, or 0 if DIF is disabled.
+ * @param csctl value of csctl field.
+ * @param app_id value for VM application header.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fcp_tsend64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    struct efc_dma_s *sgl,
+		    u32 first_data_sge, u32 relative_off,
+		    u32 xfer_len, u16 xri, u16 tag,
+		    u16 cq_id, u16 xid, u32 rpi,
+		    bool hlm, u32 rnode_fcid, u32 flags, u8 dif,
+		    u8 bs, u8 csctl, u32 app_id)
+{
+	struct sli4_fcp_tsend64_wqe_s *tsend = buf;
+	struct sli4_fcp_128byte_wqe_s *tsend_128 = buf;
+	struct sli4_sge_s *sge = NULL;
+	struct sli4_bde_s *bptr;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+
+	bptr = &tsend->bde;
+	if (sli4->sgl_pre_registered) {
+		tsend->ll_qd_xbl_hlm_iod_dbde &= ~SLI4_TSEND_WQE_XBL;
+
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				   (le32_to_cpu(sge[2].buffer_length) &
+				    SLI4_BDE_MASK_BUFFER_LEN));
+
+		/* TSEND64_WQE specifies first two SGE are skipped (3rd is
+		 * valid)
+		 */
+		bptr->u.data.low  = sge[2].buffer_address_low;
+		bptr->u.data.high = sge[2].buffer_address_high;
+	} else {
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif && xfer_len <= sge[2].buffer_length) {
+			tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+			bptr->bde_type_buflen =
+			    cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+					(le32_to_cpu(sge[2].buffer_length) &
+					SLI4_BDE_MASK_BUFFER_LEN));
+			/*
+			 * TSEND64_WQE specifies first two SGE are skipped
+			 * (i.e. 3rd is valid)
+			 */
+			bptr->u.data.low =
+				sge[2].buffer_address_low;
+			bptr->u.data.high =
+				sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+					    (sgl->size &
+					     SLI4_BDE_MASK_BUFFER_LEN));
+			bptr->u.blp.low =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	tsend->relative_offset = cpu_to_le32(relative_off);
+
+	if (flags & SLI4_IO_CONTINUATION)
+		tsend->dw10byte2 |= SLI4_TSEND_XC;
+
+	tsend->xri_tag = cpu_to_le16(xri);
+
+	tsend->rpi = cpu_to_le16(rpi);
+	/* WQE uses relative offset */
+	tsend->class_pu_ar_byte |= 1 << SLI4_TSEND_WQE_PU_SHFT;
+
+	if (flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		tsend->class_pu_ar_byte |= SLI4_TSEND_WQE_AR;
+
+	tsend->command = SLI4_WQE_FCP_TSEND64;
+	tsend->class_pu_ar_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	tsend->ct_byte |= SLI4_GENERIC_CONTEXT_RPI << SLI4_TSEND_CT_SHFT;
+	tsend->ct_byte |= dif;
+	tsend->ct_byte |= bs << SLI4_TSEND_BS_SHFT;
+
+	tsend->remote_xid = cpu_to_le16(xid);
+
+	tsend->request_tag = cpu_to_le16(tag);
+
+	tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_LEN_LOC_BIT2;
+
+	if (hlm) {
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_HLM;
+		tsend->dword5 = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	tsend->cq_id = cpu_to_le16(cq_id);
+
+	tsend->cmd_type_byte |= SLI4_CMD_FCP_TSEND64_WQE;
+
+	tsend->fcp_data_transmit_length = cpu_to_le32(xfer_len);
+
+	if (sli4->perf_hint) {
+		bptr = &tsend->first_data_bde;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (csctl & SLI4_MASK_CCP) {
+		tsend->dw10byte2 |= SLI4_TSEND_CCPE;
+		tsend->ccp = (csctl & SLI4_MASK_CCP);
+	}
+
+	if (app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(tsend->dw10byte2 & SLI4_TSEND_EAT)) {
+		tsend->dw10byte0 |= SLI4_TSEND_APPID_VALID;
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQES;
+		tsend_128->dw[31] = cpu_to_le32(app_id);
+	}
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write a GEN_REQUEST64 work queue entry.
+ *
+ * @note This WQE is only used to send FC-CT commands.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sgl DMA memory for the request.
+ * @param req_len Length of request.
+ * @param max_rsp_len Max length of response.
+ * @param timeout Time, in seconds, before an IO times out.
+ * Zero means infinite.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rnode Destination of request (that is, the remote node).
+ * @param r_ctl R_CTL value for sequence.
+ * @param type TYPE value for sequence.
+ * @param df_ctl DF_CTL value for sequence.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_gen_request64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		      struct efc_dma_s *sgl, u32 req_len,
+		      u32 max_rsp_len, u8 timeout, u16 xri,
+		      u16 tag, u16 cq_id, bool hlm, u32 rnode_fcid,
+		      u16 rnodeindicator, u8 r_ctl,
+		      u8 type, u8 df_ctl)
+{
+	struct sli4_gen_request64_wqe_s	*gen = buf;
+	struct sli4_sge_s *sge = NULL;
+	struct sli4_bde_s *bptr;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+	bptr = &gen->bde;
+
+	if (sli4->sgl_pre_registered) {
+		gen->dw10flags1 &= ~SLI4_GEN_REQ64_WQE_XBL;
+
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    ((2 * sizeof(struct sli4_sge_s)) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low =
+			cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+			cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	gen->request_payload_length = cpu_to_le32(req_len);
+	gen->max_response_payload_length = cpu_to_le32(max_rsp_len);
+
+	gen->df_ctl = df_ctl;
+	gen->type = type;
+	gen->r_ctl = r_ctl;
+
+	gen->xri_tag = cpu_to_le16(xri);
+
+	gen->ct_byte = SLI4_GENERIC_CONTEXT_RPI << SLI4_GEN_REQ64_CT_SHFT;
+	gen->context_tag = cpu_to_le16(rnodeindicator);
+
+	gen->class_byte = SLI4_GENERIC_CLASS_CLASS_3;
+
+	gen->command = SLI4_WQE_GEN_REQUEST64;
+
+	gen->timer = timeout;
+
+	gen->request_tag = cpu_to_le16(tag);
+
+	gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_IOD;
+
+	gen->dw10flags0 |= SLI4_GEN_REQ64_WQE_QOSD;
+
+	if (hlm) {
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_HLM;
+		gen->remote_n_port_id_dword =
+			cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	gen->cmd_type_byte = SLI4_CMD_GEN_REQUEST64_WQE;
+
+	gen->cq_id = cpu_to_le16(cq_id);
+
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write a SEND_FRAME work queue entry
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param sof Start of frame value
+ * @param eof End of frame value
+ * @param hdr Pointer to FC header data
+ * @param payload DMA memory for the payload.
+ * @param req_len Length of payload.
+ * @param timeout Time, in seconds, before an IO times out. Zero means infinite.
+ * @param xri XRI for this exchange.
+ * @param req_tag IO tag value.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_send_frame_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		   u8 sof, u8 eof, u32 *hdr,
+			struct efc_dma_s *payload, u32 req_len,
+			u8 timeout, u16 xri, u16 req_tag)
+{
+	struct sli4_send_frame_wqe_s *sf = buf;
+
+	memset(buf, 0, size);
+
+	sf->dw10flags1 |= SLI4_SF_WQE_DBDE;
+	sf->bde.bde_type_buflen = cpu_to_le32(req_len &
+					      SLI4_BDE_MASK_BUFFER_LEN);
+	sf->bde.u.data.low =
+		cpu_to_le32(lower_32_bits(payload->phys));
+	sf->bde.u.data.high =
+		cpu_to_le32(upper_32_bits(payload->phys));
+
+	/* Copy FC header */
+	sf->fc_header_0_1[0] = cpu_to_le32(hdr[0]);
+	sf->fc_header_0_1[1] = cpu_to_le32(hdr[1]);
+	sf->fc_header_2_5[0] = cpu_to_le32(hdr[2]);
+	sf->fc_header_2_5[1] = cpu_to_le32(hdr[3]);
+	sf->fc_header_2_5[2] = cpu_to_le32(hdr[4]);
+	sf->fc_header_2_5[3] = cpu_to_le32(hdr[5]);
+
+	sf->frame_length = cpu_to_le32(req_len);
+
+	sf->xri_tag = cpu_to_le16(xri);
+	sf->dw7flags0 &= ~SLI4_SF_PU;
+	sf->context_tag = 0;
+
+	sf->ct_byte &= ~SLI4_SF_CT;
+	sf->command = SLI4_WQE_SEND_FRAME;
+	sf->dw7flags0 |= SLI4_GENERIC_CLASS_CLASS_3;
+	sf->timer = timeout;
+
+	sf->request_tag = cpu_to_le16(req_tag);
+	sf->eof = eof;
+	sf->sof = sof;
+
+	sf->dw10flags1 &= ~SLI4_SF_QOSD;
+	sf->dw10flags0 |= SLI4_SF_LEN_LOC_BIT1;
+	sf->dw10flags2 &= ~SLI4_SF_XC;
+
+	sf->dw10flags1 |= SLI4_SF_XBL;
+
+	sf->cmd_type_byte |= SLI4_CMD_SEND_FRAME_WQE;
+	sf->cq_id = 0xffff;
+
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write an XMIT_BLS_RSP64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param payload Contents of the BLS payload to be sent.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param rnode Destination of request (that is, the remote node).
+ * @param s_id Source ID to use in the response. If U32_MAX, use SLI Port's
+ * ID.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_xmit_bls_rsp64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		       struct sli_bls_payload_s *payload, u16 xri,
+		       u16 tag, u16 cq_id,
+		       bool rnodeattached, bool hlm, u16 rnodeindicator,
+		       u16 sportindicator, u32 rnode_fcid,
+		       u32 sport_fcid, u32 s_id)
+{
+	struct sli4_xmit_bls_rsp_wqe_s *bls = buf;
+	u32 dw_ridflags = 0;
+
+	/*
+	 * Callers can either specify RPI or S_ID, but not both
+	 */
+	if (rnodeattached && s_id != U32_MAX) {
+		efc_log_info(sli4, "S_ID specified for attached remote node %d\n",
+			rnodeindicator);
+		return -1;
+	}
+
+	memset(buf, 0, size);
+
+	if (payload->type == SLI4_SLI_BLS_ACC) {
+		bls->payload_word0 =
+			cpu_to_le32((payload->u.acc.seq_id_last << 16) |
+				    (payload->u.acc.seq_id_validity << 24));
+		bls->high_seq_cnt = cpu_to_le16(payload->u.acc.high_seq_cnt);
+		bls->low_seq_cnt = cpu_to_le16(payload->u.acc.low_seq_cnt);
+	} else if (payload->type == SLI4_SLI_BLS_RJT) {
+		bls->payload_word0 =
+				cpu_to_le32(*((u32 *)&payload->u.rjt));
+		dw_ridflags |= SLI4_BLS_RSP_WQE_AR;
+	} else {
+		efc_log_info(sli4, "bad BLS type %#x\n", payload->type);
+		return -1;
+	}
+
+	bls->ox_id = cpu_to_le16(payload->ox_id);
+	bls->rx_id = cpu_to_le16(payload->rx_id);
+
+	if (rnodeattached) {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(rnodeindicator);
+	} else {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_VPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(sportindicator);
+
+		if (s_id != U32_MAX)
+			bls->local_n_port_id_dword |=
+				cpu_to_le32(s_id & 0x00ffffff);
+		else
+			bls->local_n_port_id_dword |=
+				cpu_to_le32(sport_fcid & 0x00ffffff);
+
+		dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
+			       (rnode_fcid & SLI4_BLS_RSP_RID);
+
+		bls->temporary_rpi = cpu_to_le16(rnodeindicator);
+	}
+
+	bls->xri_tag = cpu_to_le16(xri);
+
+	bls->dw8flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	bls->command = SLI4_WQE_XMIT_BLS_RSP;
+
+	bls->request_tag = cpu_to_le16(tag);
+
+	bls->dw11flags1 |= SLI4_BLS_RSP_WQE_QOSD;
+
+	if (hlm) {
+		bls->dw11flags1 |= SLI4_BLS_RSP_WQE_HLM;
+		dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
+			       (rnode_fcid & SLI4_BLS_RSP_RID);
+	}
+
+	bls->remote_id_dword = cpu_to_le32(dw_ridflags);
+	bls->cq_id = cpu_to_le16(cq_id);
+
+	bls->dw12flags0 |= SLI4_CMD_XMIT_BLS_RSP64_WQE;
+
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write a XMIT_ELS_RSP64_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param rsp DMA memory for the ELS response.
+ * @param rsp_len Length of ELS response, in bytes.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ * @param ox_id OX_ID of the exchange containing the request.
+ * @param rnode Destination of the ELS response (that is, the remote node).
+ * @param flags Optional attributes, including:
+ *  - SLI4_IO_CONTINUATION - IO is already active.
+ * @param s_id S_ID used for special responses.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_xmit_els_rsp64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		       struct efc_dma_s *rsp, u32 rsp_len,
+				u16 xri, u16 tag, u16 cq_id,
+				u16 ox_id, u16 rnodeindicator,
+				u16 sportindicator, bool hlm,
+				bool rnodeattached, u32 rnode_fcid,
+				u32 flags, u32 s_id)
+{
+	struct sli4_xmit_els_rsp64_wqe_s *els = buf;
+
+	memset(buf, 0, size);
+
+	if (sli4->sgl_pre_registered)
+		els->flags2 |= SLI4_ELS_DBDE;
+	else
+		els->flags2 |= SLI4_ELS_XBL;
+
+	els->els_response_payload.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (rsp_len & SLI4_BDE_MASK_BUFFER_LEN));
+	els->els_response_payload.u.data.low =
+		cpu_to_le32(lower_32_bits(rsp->phys));
+	els->els_response_payload.u.data.high =
+		cpu_to_le32(upper_32_bits(rsp->phys));
+
+	els->els_response_payload_length = rsp_len;
+
+	els->xri_tag = cpu_to_le16(xri);
+
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_RSP64;
+
+	els->request_tag = cpu_to_le16(tag);
+
+	els->ox_id = cpu_to_le16(ox_id);
+
+	els->flags2 |= (SLI4_ELS_IOD & SLI4_ELS_REQUEST64_DIR_WRITE);
+
+	els->flags2 |= SLI4_ELS_QOSD;
+
+	if (flags & SLI4_IO_CONTINUATION)
+		els->flags3 |= SLI4_ELS_XC;
+
+	if (rnodeattached) {
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_ELS_CT_OFFSET;
+		els->context_tag = cpu_to_le16(rnodeindicator);
+	} else {
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_ELS_CT_OFFSET;
+		els->context_tag = cpu_to_le16(sportindicator);
+		els->rid_dw = cpu_to_le32(rnode_fcid & SLI4_ELS_RID);
+		els->temporary_rpi = cpu_to_le16(rnodeindicator);
+		if (s_id != U32_MAX) {
+			els->sid_dw |= cpu_to_le32(SLI4_ELS_SP |
+						   (s_id & SLI4_ELS_SID));
+		}
+	}
+
+	if (hlm) {
+		els->flags2 |= SLI4_ELS_HLM;
+		els->rid_dw = cpu_to_le32(rnode_fcid & SLI4_ELS_RID);
+	}
+
+	els->cmd_type_wqec = SLI4_ELS_REQUEST64_CMD_GEN;
+
+	els->cq_id = cpu_to_le16(cq_id);
+
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write a XMIT_SEQUENCE64 work queue entry.
+ *
+ * This WQE is used to send FC-CT response frames.
+ *
+ * @note This API implements a restricted use of this WQE.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param payload DMA memory for the request.
+ * @param payload_len Length of request.
+ * @param timeout Time, in seconds, before an IO times out.
+ * Zero means infinite.
+ * @param ox_id originator exchange ID
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param rnode Destination of request (that is, the remote node).
+ * @param r_ctl R_CTL value for sequence.
+ * @param type TYPE value for sequence.
+ * @param df_ctl DF_CTL value for sequence.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_xmit_sequence64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+			struct efc_dma_s *payload, u32 payload_len,
+		u8 timeout, u16 ox_id, u16 xri,
+		u16 tag, bool hlm, u32 rnode_fcid,
+		u16 rnodeindicator, u8 r_ctl,
+		u8 type, u8 df_ctl)
+{
+	struct sli4_xmit_sequence64_wqe_s *xmit = buf;
+
+	memset(buf, 0, size);
+
+	if (!payload || !payload->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       payload, payload ? payload->virt : NULL);
+		return -1;
+	}
+
+	if (sli4->sgl_pre_registered)
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_DBDE);
+	else
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_XBL);
+
+	xmit->bde.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			(payload_len & SLI4_BDE_MASK_BUFFER_LEN));
+	xmit->bde.u.data.low  =
+			cpu_to_le32(lower_32_bits(payload->phys));
+	xmit->bde.u.data.high =
+			cpu_to_le32(upper_32_bits(payload->phys));
+	xmit->sequence_payload_len = cpu_to_le32(payload_len);
+
+	xmit->remote_n_port_id_dword |= rnode_fcid & 0x00ffffff;
+
+	xmit->relative_offset = 0;
+
+	/* sequence initiative - this matches what is seen from
+	 * FC switches in response to FCGS commands
+	 */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_SI);
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_FT);/* force transmit */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_XO);/* exchange responder */
+	xmit->dw5flags0 |= SLI4_SEQ_WQE_LS;/* last in sequence */
+	xmit->df_ctl = df_ctl;
+	xmit->type = type;
+	xmit->r_ctl = r_ctl;
+
+	xmit->xri_tag = cpu_to_le16(xri);
+	xmit->context_tag = cpu_to_le16(rnodeindicator);
+
+	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_DIF);
+	xmit->dw7flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_SEQ_WQE_CT_SHIFT;
+	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_BS);
+
+	xmit->command = SLI4_WQE_XMIT_SEQUENCE64;
+	xmit->dw7flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+	xmit->dw7flags1 &= (~SLI4_SEQ_WQE_PU);
+	xmit->timer = timeout;
+
+	xmit->abort_tag = 0;
+	xmit->request_tag = cpu_to_le16(tag);
+	xmit->remote_xid = cpu_to_le16(ox_id);
+
+	xmit->dw10w0 |=
+	cpu_to_le16(SLI4_ELS_REQUEST64_DIR_READ << SLI4_SEQ_WQE_IOD_SHIFT);
+
+	if (hlm) {
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_HLM);
+		xmit->remote_n_port_id_dword |= rnode_fcid & 0x00ffffff;
+	}
+
+	xmit->cmd_type_wqec_byte |= SLI4_CMD_XMIT_SEQUENCE64_WQE;
+
+	xmit->dw10w0 |= cpu_to_le16(2 << SLI4_SEQ_WQE_LEN_LOC_SHIFT);
+
+	xmit->cq_id = cpu_to_le16(0xFFFF);
+
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write a REQUEUE_XRI_WQE work queue entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the WQE.
+ * @param size Buffer size, in bytes.
+ * @param xri XRI for this exchange.
+ * @param tag IO tag value.
+ * @param cq_id The id of the completion queue where the WQE response is sent.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_requeue_xri_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    u16 xri, u16 tag, u16 cq_id)
+{
+	struct sli4_requeue_xri_wqe_s *requeue = buf;
+
+	memset(buf, 0, size);
+
+	requeue->command = SLI4_WQE_REQUEUE_XRI;
+	requeue->xri_tag = cpu_to_le16(xri);
+	requeue->request_tag = cpu_to_le16(tag);
+	requeue->flags2 |= SLI4_REQU_XRI_WQE_XC;
+	requeue->flags1 |= SLI4_REQU_XRI_WQE_QOSD;
+	requeue->cq_id = cpu_to_le16(cq_id);
+	requeue->cmd_type_wqec_byte = SLI4_CMD_REQUEUE_XRI_WQE;
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Process an asynchronous Link State event entry.
+ *
+ * @par Description
+ * Parses Asynchronous Completion Queue Entry (ACQE),
+ * creates an abstracted event, and calls registered callback functions.
+ *
+ * @param sli4 SLI context.
+ * @param acqe Pointer to the ACQE.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fc_process_link_state(struct sli4_s *sli4, void *acqe)
+{
+	struct sli4_link_state_s *link_state = acqe;
+	struct sli4_link_event_s event = { 0 };
+	int rc = 0;
+	u8 link_type = (link_state->link_num_type & LINK_TYPE_MASK);
+
+	if (!sli4->link) {
+		/* bail if there is no callback */
+		return 0;
+	}
+
+	if (link_type == LINK_TYPE_ETHERNET) {
+		event.topology = SLI_LINK_TOPO_NPORT;
+		event.medium   = SLI_LINK_MEDIUM_ETHERNET;
+	} else {
+		efc_log_info(sli4, "unsupported link type %#x\n",
+			link_type);
+		event.topology = SLI_LINK_TOPO_MAX;
+		event.medium   = SLI_LINK_MEDIUM_MAX;
+		rc = -1;
+	}
+
+	switch (link_state->port_link_status) {
+	case PORT_LINK_STATUS_PHYSICAL_DOWN:
+	case PORT_LINK_STATUS_LOGICAL_DOWN:
+		event.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case PORT_LINK_STATUS_PHYSICAL_UP:
+	case PORT_LINK_STATUS_LOGICAL_UP:
+		event.status = SLI_LINK_STATUS_UP;
+		break;
+	default:
+		efc_log_info(sli4, "unsupported link status %#x\n",
+			link_state->port_link_status);
+		event.status = SLI_LINK_STATUS_MAX;
+		rc = -1;
+	}
+
+	switch (link_state->port_speed) {
+	case PORT_SPEED_NO_LINK:
+		event.speed = 0;
+		break;
+	case PORT_SPEED_10_MBPS:
+		event.speed = 10;
+		break;
+	case PORT_SPEED_100_MBP:
+		event.speed = 100;
+		break;
+	case PORT_SPEED_1_GBPS:
+		event.speed = 1000;
+		break;
+	case PORT_SPEED_10_GBPS:
+		event.speed = 10000;
+		break;
+	case PORT_SPEED_20_GBPS:
+		event.speed = 20000;
+		break;
+	case PORT_SPEED_25_GBPS:
+		event.speed = 25000;
+		break;
+	case PORT_SPEED_40_GBPS:
+		event.speed = 40000;
+		break;
+	case PORT_SPEED_100_GBPS:
+		event.speed = 100000;
+		break;
+	default:
+		efc_log_info(sli4, "unsupported port_speed %#x\n",
+			link_state->port_speed);
+		rc = -1;
+	}
+
+	sli4->link(sli4->link_arg, (void *)&event);
+
+	return rc;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Process an asynchronous Link Attention event entry.
+ *
+ * @par Description
+ * Parses Asynchronous Completion Queue Entry (ACQE),
+ * creates an abstracted event, and calls the registered callback functions.
+ *
+ * @param sli4 SLI context.
+ * @param acqe Pointer to the ACQE.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fc_process_link_attention(struct sli4_s *sli4, void *acqe)
+{
+	struct sli4_link_attention_s *link_attn = acqe;
+	struct sli4_link_event_s event = { 0 };
+
+	efc_log_info(sli4, "link=%d attn_type=%#x top=%#x speed=%#x pfault=%#x\n",
+		link_attn->link_number, link_attn->attn_type,
+		      link_attn->topology, link_attn->port_speed,
+		      link_attn->port_fault);
+	efc_log_info(sli4, "shared_lnk_status=%#x logl_lnk_speed=%#x evnttag=%#x\n",
+		link_attn->shared_link_status,
+		      le16_to_cpu(link_attn->logical_link_speed),
+		      le32_to_cpu(link_attn->event_tag));
+
+	if (!sli4->link)
+		return 0;
+
+	event.medium   = SLI_LINK_MEDIUM_FC;
+
+	switch (link_attn->attn_type) {
+	case LINK_ATTN_TYPE_LINK_UP:
+		event.status = SLI_LINK_STATUS_UP;
+		break;
+	case LINK_ATTN_TYPE_LINK_DOWN:
+		event.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case LINK_ATTN_TYPE_NO_HARD_ALPA:
+		efc_log_info(sli4, "attn_type: no hard alpa\n");
+		event.status = SLI_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		efc_log_info(sli4, "attn_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->event_type) {
+	case FC_EVENT_LINK_ATTENTION:
+		break;
+	case FC_EVENT_SHARED_LINK_ATTENTION:
+		efc_log_info(sli4, "event_type: FC shared link event\n");
+		break;
+	default:
+		efc_log_info(sli4, "event_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->topology) {
+	case LINK_ATTN_P2P:
+		event.topology = SLI_LINK_TOPO_NPORT;
+		break;
+	case LINK_ATTN_FC_AL:
+		event.topology = SLI_LINK_TOPO_LOOP;
+		break;
+	case LINK_ATTN_INTERNAL_LOOPBACK:
+		efc_log_info(sli4, "topology Internal loopback\n");
+		event.topology = SLI_LINK_TOPO_LOOPBACK_INTERNAL;
+		break;
+	case LINK_ATTN_SERDES_LOOPBACK:
+		efc_log_info(sli4, "topology serdes loopback\n");
+		event.topology = SLI_LINK_TOPO_LOOPBACK_EXTERNAL;
+		break;
+	default:
+		efc_log_info(sli4, "topology: unknown\n");
+		break;
+	}
+
+	event.speed    = link_attn->port_speed * 1000;
+
+	sli4->link(sli4->link_arg, (void *)&event);
+
+	return 0;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Parse an FC/FCoE work queue CQ entry.
+ *
+ * @param sli4 SLI context.
+ * @param cq CQ to process.
+ * @param cqe Pointer to the CQ entry.
+ * @param etype CQ event type.
+ * @param r_id Resource ID associated with this completion message (such as the
+ * IO tag).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fc_cqe_parse(struct sli4_s *sli4, struct sli4_queue_s *cq,
+		 u8 *cqe, enum sli4_qentry_e *etype, u16 *r_id)
+{
+	u8 code = cqe[SLI4_CQE_CODE_OFFSET];
+	int rc = -1;
+
+	switch (code) {
+	case SLI4_CQE_CODE_WORK_REQUEST_COMPLETION:
+	{
+		struct sli4_fc_wcqe_s *wcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_WQ;
+		*r_id = le16_to_cpu(wcqe->request_tag);
+		rc = wcqe->status;
+
+		/* Flag errors except for FCP_RSP_FAILURE */
+		if (rc && rc != SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE) {
+			efc_log_info(sli4, "WCQE: status=%#x hw_status=%#x tag=%#x\n",
+				wcqe->status, wcqe->hw_status,
+				le16_to_cpu(wcqe->request_tag));
+			efc_log_info(sli4, "w1=%#x w2=%#x xb=%d\n",
+				le32_to_cpu(wcqe->wqe_specific_1),
+				     le32_to_cpu(wcqe->wqe_specific_2),
+				     (wcqe->flags & SLI4_WCQE_XB));
+			efc_log_info(sli4, "      %08X %08X %08X %08X\n",
+				((u32 *)cqe)[0],
+				     ((u32 *)cqe)[1],
+				     ((u32 *)cqe)[2],
+				     ((u32 *)cqe)[3]);
+		}
+
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC:
+	{
+		struct sli4_fc_async_rcqe_s *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1:
+	{
+		struct sli4_fc_async_rcqe_v1_s *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = rcqe->rq_id;
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD:
+	{
+		struct sli4_fc_optimized_write_cmd_cqe_s *optcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_OPT_WRITE_CMD;
+		*r_id = le16_to_cpu(optcqe->rq_id);
+		rc = optcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA:
+	{
+		struct sli4_fc_optimized_write_data_cqe_s *dcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_OPT_WRITE_DATA;
+		*r_id = le16_to_cpu(dcqe->xri);
+		rc = dcqe->status;
+
+		/* Flag errors */
+		if (rc != SLI4_FC_WCQE_STATUS_SUCCESS) {
+			efc_log_info(sli4, "Optimized DATA CQE: status=%#x\n",
+				dcqe->status);
+			efc_log_info(sli4, "hstat=%#x xri=%#x dpl=%#x w3=%#x xb=%d\n",
+				dcqe->hw_status, le16_to_cpu(dcqe->xri),
+				le32_to_cpu(dcqe->total_data_placed),
+				((u32 *)cqe)[3],
+				(dcqe->flags & SLI4_OCQE_XB));
+		}
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_COALESCING:
+	{
+		struct sli4_fc_coalescing_rcqe_s *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->rq_id);
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_XRI_ABORTED:
+	{
+		struct sli4_fc_xri_aborted_cqe_s *xa = (void *)cqe;
+
+		*etype = SLI_QENTRY_XABT;
+		*r_id = le16_to_cpu(xa->xri);
+		rc = 0;
+		break;
+	}
+	case SLI4_CQE_CODE_RELEASE_WQE: {
+		struct sli4_fc_wqec_s *wqec = (void *)cqe;
+
+		*etype = SLI_QENTRY_WQ_RELEASE;
+		*r_id = le16_to_cpu(wqec->wq_id);
+		rc = 0;
+		break;
+	}
+	default:
+		efc_log_info(sli4, "CQE completion code %d not handled\n",
+			code);
+		*etype = SLI_QENTRY_MAX;
+		*r_id = U16_MAX;
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Return the ELS/CT response length.
+ *
+ * @param sli4 SLI context.
+ * @param cqe Pointer to the CQ entry.
+ *
+ * @return Returns the length, in bytes.
+ */
+u32
+sli_fc_response_length(struct sli4_s *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe_s *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Return the FCP IO length.
+ *
+ * @param sli4 SLI context.
+ * @param cqe Pointer to the CQ entry.
+ *
+ * @return Returns the length, in bytes.
+ */
+u32
+sli_fc_io_length(struct sli4_s *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe_s *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Retrieve the D_ID from the completion.
+ *
+ * @param sli4 SLI context.
+ * @param cqe Pointer to the CQ entry.
+ * @param d_id Pointer where the D_ID is written.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fc_els_did(struct sli4_s *sli4, u8 *cqe, u32 *d_id)
+{
+	struct sli4_fc_wcqe_s *wcqe = (void *)cqe;
+
+	*d_id = 0;
+
+	if (wcqe->status)
+		return -1;
+	*d_id = le32_to_cpu(wcqe->wqe_specific_2) & 0x00ffffff;
+	return 0;
+}
+
+u32
+sli_fc_ext_status(struct sli4_s *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe_s *wcqe = (void *)cqe;
+	u32	mask;
+
+	switch (wcqe->status) {
+	case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+	case SLI4_FC_WCQE_STATUS_CMD_REJECT:
+		mask = 0xff;
+		break;
+	case SLI4_FC_WCQE_STATUS_NPORT_RJT:
+	case SLI4_FC_WCQE_STATUS_FABRIC_RJT:
+	case SLI4_FC_WCQE_STATUS_NPORT_BSY:
+	case SLI4_FC_WCQE_STATUS_FABRIC_BSY:
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_DI_ERROR:
+		mask = U32_MAX;
+		break;
+	default:
+		mask = 0;
+	}
+
+	return le32_to_cpu(wcqe->wqe_specific_2) & mask;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Retrieve the RQ index from the completion.
+ *
+ * @param sli4 SLI context.
+ * @param cqe Pointer to the CQ entry.
+ * @param rq_id Pointer where the rq_id is written.
+ * @param index Pointer where the index is written.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_fc_rqe_rqid_and_index(struct sli4_s *sli4, u8 *cqe,
+			  u16 *rq_id, u32 *index)
+{
+	struct sli4_fc_async_rcqe_s *rcqe = (void *)cqe;
+	struct sli4_fc_async_rcqe_v1_s *rcqe_v1 = (void *)cqe;
+	int rc = -1;
+	u8 code = 0;
+	u16 rq_element_index;
+
+	*rq_id = 0;
+	*index = U32_MAX;
+
+	code = cqe[SLI4_CQE_CODE_OFFSET];
+
+	if (code == SLI4_CQE_CODE_RQ_ASYNC) {
+		*rq_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rq_element_index =
+		le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX;
+		*index = rq_element_index;
+		if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = 0;
+		} else {
+			rc = rcqe->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->fcfi_rq_id_word) &
+				SLI4_RACQE_RQ_ID);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe->data_placement_length),
+				rcqe->sof_byte, rcqe->eof_byte,
+				rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_ASYNC_V1) {
+		*rq_id = le16_to_cpu(rcqe_v1->rq_id);
+		rq_element_index =
+			(le16_to_cpu(rcqe_v1->rq_elmt_indx_word) &
+			 SLI4_RACQE_RQ_EL_INDX);
+		*index = rq_element_index;
+		if (rcqe_v1->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = 0;
+		} else {
+			rc = rcqe_v1->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d, index=%x\n",
+				rcqe_v1->status,
+				sli_fc_get_status_string(rcqe_v1->status),
+				le16_to_cpu(rcqe_v1->rq_id), rq_element_index);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe_v1->data_placement_length),
+			rcqe_v1->sof_byte, rcqe_v1->eof_byte,
+			rcqe_v1->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD) {
+		struct sli4_fc_optimized_write_cmd_cqe_s *optcqe = (void *)cqe;
+
+		*rq_id = le16_to_cpu(optcqe->rq_id);
+		*index = le16_to_cpu(optcqe->w1) & SLI4_OCQE_RQ_EL_INDX;
+		if (optcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = 0;
+		} else {
+			rc = optcqe->status;
+			efc_log_info(sli4, "stat=%02x (%s) rqid=%d, idx=%x pdpl=%x\n",
+				optcqe->status,
+				sli_fc_get_status_string(optcqe->status),
+				le16_to_cpu(optcqe->rq_id), *index,
+				le16_to_cpu(optcqe->data_placement_length));
+
+			efc_log_info(sli4, "hdpl=%x oox=%d agxr=%d xri=0x%x rpi=%x\n",
+				(optcqe->hdpl_vld & SLI4_OCQE_HDPL),
+				(optcqe->flags1 & SLI4_OCQE_OOX),
+				(optcqe->flags1 & SLI4_OCQE_AGXR), optcqe->xri,
+				le16_to_cpu(optcqe->rpi));
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_COALESCING) {
+		struct sli4_fc_coalescing_rcqe_s	*rcqe = (void *)cqe;
+		u16 rq_element_index =
+				(le16_to_cpu(rcqe->rq_elmt_indx_word) &
+				 SLI4_RCQE_RQ_EL_INDX);
+
+		*rq_id = le16_to_cpu(rcqe->rq_id);
+		if (rcqe->status == SLI4_FC_COALESCE_RQ_SUCCESS) {
+			*index = rq_element_index;
+			rc = 0;
+		} else {
+			*index = U32_MAX;
+			rc = rcqe->status;
+
+			efc_log_info(sli4, "stat=%02x (%s) rq_id=%d, idx=%x\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->rq_id), rq_element_index);
+			efc_log_info(sli4, "rq_id=%#x sdpl=%x\n",
+				le16_to_cpu(rcqe->rq_id),
+		    le16_to_cpu(rcqe->sequence_reporting_placement_length));
+		}
+	} else {
+		*index = U32_MAX;
+
+		rc = rcqe->status;
+
+		efc_log_info(sli4, "status=%02x rq_id=%d, index=%x pdpl=%x\n",
+			rcqe->status,
+		le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID,
+		(le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX),
+		le16_to_cpu(rcqe->data_placement_length));
+		efc_log_info(sli4, "sof=%02x eof=%02x hdpl=%x\n",
+			rcqe->sof_byte, rcqe->eof_byte,
+			rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index b36d67abf219..20ab558db2d2 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -12,6 +12,8 @@
 #ifndef _SLI4_H
 #define _SLI4_H
 
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
 #include "../include/efc_common.h"
 
 /*************************************************************************
-- 
2.13.7



* [PATCH 06/32] elx: libefc_sli: bmbx routines and SLI config commands
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (4 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 05/32] elx: libefc_sli: Populate and post different WQEs James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 07/32] elx: libefc_sli: APIs to setup SLI library James Smart
                   ` (26 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds routines to create mailbox commands used during
adapter initialization and adds APIs to issue mailbox commands to the
adapter through the bootstrap mailbox register.
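
As an illustration only (not part of this patch), a minimal sketch of the
intended call flow using the routines added here: the command payload is
built directly in the bootstrap mailbox DMA buffer, then posted through
the bootstrap mailbox register; sli_bmbx_command() also validates the
resulting MCQE status. The example_read_config() wrapper name is
hypothetical.

  /* Sketch: issue READ_CONFIG through the bootstrap mailbox. */
  static int example_read_config(struct sli4_s *sli4)
  {
  	/* Build the READ_CONFIG payload in the bootstrap mailbox buffer. */
  	if (sli_cmd_read_config(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE))
  		return -1;

  	/* Post the buffer address to the BMBX register, wait for the port
  	 * to consume it, and check the mailbox completion (MCQE) status.
  	 */
  	return sli_bmbx_command(sli4);
  }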

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1767 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h |    2 +
 2 files changed, 1769 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 9e57fa850da6..1306d0a335c6 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -4281,3 +4281,1770 @@ sli_fc_rqe_rqid_and_index(struct sli4_s *sli4, u8 *cqe,
 
 	return rc;
 }
+
+/**
+ * @brief Wait for the bootstrap mailbox to report "ready".
+ *
+ * @param sli4 SLI context pointer.
+ * @param msec Number of milliseconds to wait.
+ *
+ * @return Returns 0 if BMBX is ready, or non-zero otherwise
+ * (i.e. time out occurred).
+ */
+static int
+sli_bmbx_wait(struct sli4_s *sli4, u32 msec)
+{
+	u32 val = 0;
+
+	do {
+		mdelay(1);	/* 1 ms */
+		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+		msec--;
+	} while (msec && !(val & SLI4_BMBX_RDY));
+
+	return !(val & SLI4_BMBX_RDY);
+}
+
+/**
+ * @brief Write bootstrap mailbox.
+ *
+ * @param sli4 SLI context pointer.
+ *
+ * @return Returns 0 if command succeeded, or non-zero otherwise.
+ */
+static int
+sli_bmbx_write(struct sli4_s *sli4)
+{
+	u32 val = 0;
+
+	/* write buffer location to bootstrap mailbox register */
+	val = SLI4_BMBX_WRITE_HI(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	if (sli_bmbx_wait(sli4, SLI4_BMBX_DELAY_US)) {
+		efc_log_crit(sli4, "BMBX WRITE_HI failed\n");
+		return -1;
+	}
+	val = SLI4_BMBX_WRITE_LO(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	/* wait for SLI Port to set ready bit */
+	return sli_bmbx_wait(sli4, SLI4_BMBX_TIMEOUT_MSEC);
+}
+
+/**
+ * @ingroup sli
+ * @brief Submit a command to the bootstrap mailbox and check the status.
+ *
+ * @param sli4 SLI context pointer.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_bmbx_command(struct sli4_s *sli4)
+{
+	void *cqe = (u8 *)sli4->bmbx.virt + SLI4_BMBX_SIZE;
+
+	if (sli_fw_error_status(sli4) > 0) {
+		efc_log_crit(sli4, "Chip is in an error state -Mailbox command rejected");
+		efc_log_crit(sli4, " status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(sli4),
+			sli_reg_read_err1(sli4),
+			sli_reg_read_err2(sli4));
+		return -1;
+	}
+
+	if (sli_bmbx_write(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail phys=%p reg=%#x\n",
+			(void *)sli4->bmbx.phys,
+			readl(sli4->reg[0] + SLI4_BMBX_REG));
+		return -1;
+	}
+
+	/* check completion queue entry status */
+	if (le32_to_cpu(((struct sli4_mcqe_s *)cqe)->dw3_flags) &
+	    SLI4_MCQE_VALID) {
+		return sli_cqe_mq(sli4, cqe);
+	}
+	efc_log_crit(sli4, "invalid or wrong type\n");
+	return -1;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a CONFIG_LINK command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_config_link(struct sli4_s *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_config_link_s *config_link = buf;
+
+	memset(buf, 0, size);
+
+	config_link->hdr.command = MBX_CMD_CONFIG_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a DOWN_LINK command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_down_link(struct sli4_s *sli4, void *buf, size_t size)
+{
+	struct sli4_mbox_command_header_s *hdr = buf;
+
+	memset(buf, 0, size);
+
+	hdr->command = MBX_CMD_DOWN_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a DUMP Type 4 command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param wki The well known item ID.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_dump_type4(struct sli4_s *sli4, void *buf,
+		   size_t size, u16 wki)
+{
+	struct sli4_cmd_dump4_s *cmd = buf;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_DUMP;
+	cmd->type_dword = cpu_to_le32(0x4);
+	cmd->wki_selection = cpu_to_le16(wki);
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_READ_TRANSCEIVER_DATA command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param page_num The page of SFP data to retrieve (0xa0 or 0xa2).
+ * @param dma DMA structure into which the transceiver data is written.
+ *
+ * @note This creates a Version 0 message.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_common_read_transceiver_data(struct sli4_s *sli4, void *buf,
+				     size_t size, u32 page_num,
+				     struct efc_dma_s *dma)
+{
+	struct sli4_rqst_cmn_read_transceiver_data_s *req = NULL;
+	u32 psize;
+
+	if (!dma)
+		psize = SLI_CONFIG_PYLD_LENGTH(cmn_read_transceiver_data);
+	else
+		psize = dma->size;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+					    psize, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	req->hdr.opcode = CMN_READ_TRANS_DATA;
+	req->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	req->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_read_transceiver_data);
+
+	req->page_number = cpu_to_le32(page_num);
+	req->port = cpu_to_le32(sli4->port_number);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a READ_LINK_STAT command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param req_ext_counters If TRUE,
+ * then the extended counters will be requested.
+ * @param clear_overflow_flags If TRUE, then overflow flags will be cleared.
+ * @param clear_all_counters If TRUE, the counters will be cleared.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_read_link_stats(struct sli4_s *sli4, void *buf, size_t size,
+			u8 req_ext_counters,
+			u8 clear_overflow_flags,
+			u8 clear_all_counters)
+{
+	struct sli4_cmd_read_link_stats_s *cmd = buf;
+	u32 flags;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_READ_LNK_STAT;
+
+	flags = 0;
+	if (req_ext_counters)
+		flags |= SLI4_READ_LNKSTAT_REC;
+	if (clear_all_counters)
+		flags |= SLI4_READ_LNKSTAT_CLRC;
+	if (clear_overflow_flags)
+		flags |= SLI4_READ_LNKSTAT_CLOF;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a READ_STATUS command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param clear_counters If TRUE, the counters will be cleared.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_read_status(struct sli4_s *sli4, void *buf, size_t size,
+		    u8 clear_counters)
+{
+	struct sli4_cmd_read_status_s *cmd = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_READ_STATUS;
+	if (clear_counters)
+		flags |= SLI4_READSTATUS_CLEAR_COUNTERS;
+	else
+		flags &= ~SLI4_READSTATUS_CLEAR_COUNTERS;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write an INIT_LINK command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param speed Link speed.
+ * @param reset_alpa For native FC, this is the selective reset AL_PA
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_init_link(struct sli4_s *sli4, void *buf, size_t size,
+		  u32 speed, u8 reset_alpa)
+{
+	struct sli4_cmd_init_link_s *init_link = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	init_link->hdr.command = MBX_CMD_INIT_LINK;
+
+	init_link->sel_reset_al_pa_dword =
+				cpu_to_le32(reset_alpa);
+	flags &= ~SLI4_INIT_LINK_FLAG_LOOPBACK;
+
+	init_link->link_speed_sel_code = cpu_to_le32(speed);
+	switch (speed) {
+	case FC_LINK_SPEED_1G:
+	case FC_LINK_SPEED_2G:
+	case FC_LINK_SPEED_4G:
+	case FC_LINK_SPEED_8G:
+	case FC_LINK_SPEED_16G:
+	case FC_LINK_SPEED_32G:
+		flags |= SLI4_INIT_LINK_FLAG_FIXED_SPEED;
+		break;
+	case FC_LINK_SPEED_10G:
+		efc_log_info(sli4, "unsupported FC speed %d\n", speed);
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+		/* Attempt P2P but failover to FC-AL */
+		flags |= SLI4_INIT_LINK_FLAG_EN_TOPO_FAILOVER;
+
+		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
+		flags |= (SLI4_INIT_LINK_F_P2P_FAIL_OVER << 1);
+		break;
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
+		flags |= (SLI4_INIT_LINK_F_FCAL_ONLY << 1);
+		if (speed == FC_LINK_SPEED_16G ||
+		    speed == FC_LINK_SPEED_32G) {
+			efc_log_info(sli4, "unsupported FC-AL speed %d\n",
+				speed);
+			init_link->flags0 = cpu_to_le32(flags);
+			return EFC_FAIL;
+		}
+		break;
+	case SLI4_READ_CFG_TOPO_FC_DA:
+		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
+		flags |= (FC_TOPOLOGY_P2P << 1);
+		break;
+	default:
+
+		efc_log_info(sli4, "unsupported topology %#x\n",
+			sli4->topology);
+
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	flags &= (~SLI4_INIT_LINK_FLAG_UNFAIR);
+	flags &= (~SLI4_INIT_LINK_FLAG_SKIP_LIRP_LILP);
+	flags &= (~SLI4_INIT_LINK_FLAG_LOOP_VALIDITY);
+	flags &= (~SLI4_INIT_LINK_FLAG_SKIP_LISA);
+	flags &= (~SLI4_INIT_LINK_FLAG_SEL_HIGHTEST_AL_PA);
+	init_link->flags0 = cpu_to_le32(flags);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write an INIT_VFI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param vfi VFI to initialize.
+ * @param fcfi FCFI associated with this VFI.
+ * @param vpi VPI to initialize along with the VFI (set to U16_MAX if unused).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_init_vfi(struct sli4_s *sli4, void *buf, size_t size,
+		 u16 vfi, u16 fcfi, u16 vpi)
+{
+	struct sli4_cmd_init_vfi_s *init_vfi = buf;
+	u16 flags = 0;
+
+	memset(buf, 0, size);
+
+	init_vfi->hdr.command = MBX_CMD_INIT_VFI;
+
+	init_vfi->vfi = cpu_to_le16(vfi);
+	init_vfi->fcfi = cpu_to_le16(fcfi);
+
+	/*
+	 * If the VPI is valid, initialize it at the same time as
+	 * the VFI
+	 */
+	if (vpi != U16_MAX) {
+		flags |= SLI4_INIT_VFI_FLAG_VP;
+		init_vfi->flags0_word = cpu_to_le16(flags);
+		init_vfi->vpi = cpu_to_le16(vpi);
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write an INIT_VPI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param vpi VPI allocated.
+ * @param vfi VFI associated with this VPI.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_init_vpi(struct sli4_s *sli4, void *buf, size_t size,
+		 u16 vpi, u16 vfi)
+{
+	struct sli4_cmd_init_vpi_s *init_vpi = buf;
+
+	memset(buf, 0, size);
+
+	init_vpi->hdr.command = MBX_CMD_INIT_VPI;
+	init_vpi->vpi = cpu_to_le16(vpi);
+	init_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a POST_XRI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param xri_base Starting XRI value for range of XRI given to SLI Port.
+ * @param xri_count Number of XRIs provided to the SLI Port.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_post_xri(struct sli4_s *sli4, void *buf, size_t size,
+		 u16 xri_base, u16 xri_count)
+{
+	struct sli4_cmd_post_xri_s *post_xri = buf;
+	u16 xri_count_flags = 0;
+
+	memset(buf, 0, size);
+
+	post_xri->hdr.command = MBX_CMD_POST_XRI;
+	post_xri->xri_base = cpu_to_le16(xri_base);
+	xri_count_flags = (xri_count & SLI4_POST_XRI_COUNT);
+	xri_count_flags |= SLI4_POST_XRI_FLAG_ENX;
+	xri_count_flags |= SLI4_POST_XRI_FLAG_VAL;
+	post_xri->xri_count_flags = cpu_to_le16(xri_count_flags);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a RELEASE_XRI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param num_xri The number of XRIs to be released.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_release_xri(struct sli4_s *sli4, void *buf, size_t size,
+		    u8 num_xri)
+{
+	struct sli4_cmd_release_xri_s *release_xri = buf;
+
+	memset(buf, 0, size);
+
+	release_xri->hdr.command = MBX_CMD_RELEASE_XRI;
+	release_xri->xri_count_word = cpu_to_le16(num_xri &
+					SLI4_RELEASE_XRI_COUNT);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a READ_CONFIG command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_cmd_read_config(struct sli4_s *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_read_config_s *read_config = buf;
+
+	memset(buf, 0, size);
+
+	read_config->hdr.command = MBX_CMD_READ_CONFIG;
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a READ_NVPARMS command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_read_nvparms(struct sli4_s *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_read_nvparms_s *read_nvparms = buf;
+
+	memset(buf, 0, size);
+
+	read_nvparms->hdr.command = MBX_CMD_READ_NVPARMS;
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a WRITE_NVPARMS command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param wwpn WWPN to write - pointer to array of 8 u8.
+ * @param wwnn WWNN to write - pointer to array of 8 u8.
+ * @param hard_alpa Hard ALPA to write.
+ * @param preferred_d_id  Preferred D_ID to write.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_write_nvparms(struct sli4_s *sli4, void *buf, size_t size,
+		      u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		u32 preferred_d_id)
+{
+	struct sli4_cmd_write_nvparms_s *write_nvparms = buf;
+
+	memset(buf, 0, size);
+
+	write_nvparms->hdr.command = MBX_CMD_WRITE_NVPARMS;
+	memcpy(write_nvparms->wwpn, wwpn, 8);
+	memcpy(write_nvparms->wwnn, wwnn, 8);
+
+	write_nvparms->hard_alpa_d_id =
+			cpu_to_le32((preferred_d_id << 8) | hard_alpa);
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a READ_REV command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param vpd Pointer to the buffer.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_cmd_read_rev(struct sli4_s *sli4, void *buf, size_t size,
+		 struct efc_dma_s *vpd)
+{
+	struct sli4_cmd_read_rev_s *read_rev = buf;
+
+	memset(buf, 0, size);
+
+	read_rev->hdr.command = MBX_CMD_READ_REV;
+
+	if (vpd && vpd->size) {
+		read_rev->flags0_word |= cpu_to_le16(SLI4_READ_REV_FLAG_VPD);
+
+		read_rev->available_length_dword =
+			cpu_to_le16(vpd->size &
+				    SLI4_READ_REV_AVAILABLE_LENGTH);
+
+		read_rev->hostbuf.low =
+				cpu_to_le32(lower_32_bits(vpd->phys));
+		read_rev->hostbuf.high =
+				cpu_to_le32(upper_32_bits(vpd->phys));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a READ_SPARM64 command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param dma DMA buffer for the service parameters.
+ * @param vpi VPI used to determine the WWN.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_read_sparm64(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *dma,
+		     u16 vpi)
+{
+	struct sli4_cmd_read_sparm64_s *read_sparm64 = buf;
+
+	memset(buf, 0, size);
+
+	if (vpi == SLI4_READ_SPARM64_VPI_SPECIAL) {
+		efc_log_info(sli4, "special VPI not supported!!!\n");
+		return -1;
+	}
+
+	if (!dma || !dma->phys) {
+		efc_log_info(sli4, "bad DMA buffer\n");
+		return -1;
+	}
+
+	read_sparm64->hdr.command = MBX_CMD_READ_SPARM64;
+
+	read_sparm64->bde_64.bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
+	read_sparm64->bde_64.u.data.low =
+			cpu_to_le32(lower_32_bits(dma->phys));
+	read_sparm64->bde_64.u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+
+	read_sparm64->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a READ_TOPOLOGY command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param dma DMA buffer for loop map (optional).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_read_topology(struct sli4_s *sli4, void *buf, size_t size,
+		      struct efc_dma_s *dma)
+{
+	struct sli4_cmd_read_topology_s *read_topo = buf;
+
+	memset(buf, 0, size);
+
+	read_topo->hdr.command = MBX_CMD_READ_TOPOLOGY;
+
+	if (dma && dma->size) {
+		if (dma->size < SLI4_MIN_LOOP_MAP_BYTES) {
+			efc_log_info(sli4, "loop map buffer too small %jd\n",
+				dma->size);
+			return EFC_FAIL;
+		}
+
+		memset(dma->virt, 0, dma->size);
+
+		read_topo->bde_loop_map.bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
+		read_topo->bde_loop_map.u.data.low  =
+			cpu_to_le32(lower_32_bits(dma->phys));
+		read_topo->bde_loop_map.u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a REG_FCFI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param index FCF index returned by READ_FCF_TABLE.
+ * @param rq_cfg RQ_ID/R_CTL/TYPE routing information
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_reg_fcfi(struct sli4_s *sli4, void *buf, size_t size,
+		 u16 index,
+		 struct sli4_cmd_rq_cfg_s rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
+{
+	struct sli4_cmd_reg_fcfi_s *reg_fcfi = buf;
+	u32 i;
+
+	memset(buf, 0, size);
+
+	reg_fcfi->hdr.command = MBX_CMD_REG_FCFI;
+
+	reg_fcfi->fcf_index = cpu_to_le16(index);
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		switch (i) {
+		case 0:
+			reg_fcfi->rqid0 = cpu_to_le16(rq_cfg[0].rq_id);
+			break;
+		case 1:
+			reg_fcfi->rqid1 = cpu_to_le16(rq_cfg[1].rq_id);
+			break;
+		case 2:
+			reg_fcfi->rqid2 = cpu_to_le16(rq_cfg[2].rq_id);
+			break;
+		case 3:
+			reg_fcfi->rqid3 = cpu_to_le16(rq_cfg[3].rq_id);
+			break;
+		}
+		reg_fcfi->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi->rq_cfg[i].type_match = rq_cfg[i].type_match;
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write REG_FCFI_MRQ to provided command buffer
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param mode Registration mode; SLI4_CMD_REG_FCFI_SET_FCFI_MODE registers
+ * only the FCFI.
+ * @param fcf_index FCF index returned by READ_FCF_TABLE.
+ * @param rq_selection_policy RQ selection policy.
+ * @param mrq_bit_mask Bit mask of the filters that use MRQ pairs.
+ * @param num_mrqs Number of MRQ pairs.
+ * @param rq_cfg RQ_ID/R_CTL/TYPE routing information.
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+int
+sli_cmd_reg_fcfi_mrq(struct sli4_s *sli4, void *buf, size_t size,
+		     u8 mode, u16 fcf_index,
+		     u8 rq_selection_policy, u8 mrq_bit_mask,
+		     u16 num_mrqs,
+		struct sli4_cmd_rq_cfg_s rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
+{
+	struct sli4_cmd_reg_fcfi_mrq_s *reg_fcfi_mrq = buf;
+	u32 i;
+	u32 mrq_flags = 0;
+
+	memset(buf, 0, size);
+
+	reg_fcfi_mrq->hdr.command = MBX_CMD_REG_FCFI_MRQ;
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE) {
+		reg_fcfi_mrq->fcf_index = cpu_to_le16(fcf_index);
+		goto done;
+	}
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi_mrq->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi_mrq->rq_cfg[i].type_match = rq_cfg[i].type_match;
+
+		switch (i) {
+		case 3:
+			reg_fcfi_mrq->rqid3 = cpu_to_le16(rq_cfg[i].rq_id);
+			break;
+		case 2:
+			reg_fcfi_mrq->rqid2 = cpu_to_le16(rq_cfg[i].rq_id);
+			break;
+		case 1:
+			reg_fcfi_mrq->rqid1 = cpu_to_le16(rq_cfg[i].rq_id);
+			break;
+		case 0:
+			reg_fcfi_mrq->rqid0 = cpu_to_le16(rq_cfg[i].rq_id);
+			break;
+		}
+	}
+
+	mrq_flags = num_mrqs & SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS;
+	mrq_flags |= (mrq_bit_mask << 8);
+	mrq_flags |= (rq_selection_policy << 12);
+	reg_fcfi_mrq->dw9_mrqflags = cpu_to_le32(mrq_flags);
+done:
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a REG_RPI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param nport_id Remote F/N_Port_ID.
+ * @param rpi Previously-allocated Remote Port Indicator.
+ * @param vpi Previously-allocated Virtual Port Indicator.
+ * @param dma DMA buffer that contains the remote port's service parameters.
+ * @param update Boolean indicating an update to an existing RPI (TRUE)
+ * or a new registration (FALSE).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_reg_rpi(struct sli4_s *sli4, void *buf, size_t size,
+		u32 nport_id, u16 rpi, u16 vpi,
+		struct efc_dma_s *dma, u8 update,
+		u8 enable_t10_pi)
+{
+	struct sli4_cmd_reg_rpi_s *reg_rpi = buf;
+	u32 rportid_flags = 0;
+
+	memset(buf, 0, size);
+
+	reg_rpi->hdr.command = MBX_CMD_REG_RPI;
+
+	reg_rpi->rpi = cpu_to_le16(rpi);
+
+	rportid_flags = nport_id & SLI4_REGRPI_REMOTE_N_PORTID;
+
+	if (update)
+		rportid_flags |= SLI4_REGRPI_UPD;
+	else
+		rportid_flags &= ~SLI4_REGRPI_UPD;
+
+	if (enable_t10_pi)
+		rportid_flags |= SLI4_REGRPI_ETOW;
+	else
+		rportid_flags &= ~SLI4_REGRPI_ETOW;
+
+	reg_rpi->dw2_rportid_flags = cpu_to_le32(rportid_flags);
+
+	reg_rpi->bde_64.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
+	reg_rpi->bde_64.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma->phys));
+	reg_rpi->bde_64.u.data.high =
+		cpu_to_le32(upper_32_bits(dma->phys));
+
+	reg_rpi->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a REG_VFI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param vfi VFI to register.
+ * @param fcfi FCFI associated with this VFI.
+ * @param dma DMA buffer containing the fabric service parameters.
+ * @param vpi VPI to register along with the VFI.
+ * @param sli_wwpn WWPN of the SLI port, in big-endian format.
+ * @param fc_id Local N_Port_ID.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_reg_vfi(struct sli4_s *sli4, void *buf, size_t size,
+		u16 vfi, u16 fcfi, struct efc_dma_s dma,
+		u16 vpi, __be64 sli_wwpn, u32 fc_id)
+{
+	struct sli4_cmd_reg_vfi_s *reg_vfi = buf;
+
+	if (!sli4 || !buf)
+		return 0;
+
+	memset(buf, 0, size);
+
+	reg_vfi->hdr.command = MBX_CMD_REG_VFI;
+
+	reg_vfi->vfi = cpu_to_le16(vfi);
+
+	reg_vfi->fcfi = cpu_to_le16(fcfi);
+
+	reg_vfi->sparm.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
+	reg_vfi->sparm.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma.phys));
+	reg_vfi->sparm.u.data.high =
+		cpu_to_le32(upper_32_bits(dma.phys));
+
+	reg_vfi->e_d_tov = cpu_to_le32(sli4->e_d_tov);
+	reg_vfi->r_a_tov = cpu_to_le32(sli4->r_a_tov);
+
+	reg_vfi->dw0w1_flags |= SLI4_REGVFI_VP;
+	reg_vfi->vpi = cpu_to_le16(vpi);
+	memcpy(reg_vfi->wwpn, &sli_wwpn, sizeof(reg_vfi->wwpn));
+	reg_vfi->dw10_lportid_flags = cpu_to_le32(fc_id);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a REG_VPI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param fc_id Local N_Port_ID to register.
+ * @param sli_wwpn WWPN of the SLI port, in big-endian format.
+ * @param vpi VPI to register.
+ * @param vfi VFI associated with this VPI.
+ * @param update Boolean indicating whether to update the existing VPI (true)
+ * or create a new VPI (false).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_reg_vpi(struct sli4_s *sli4, void *buf, size_t size,
+		u32 fc_id, __be64 sli_wwpn, u16 vpi, u16 vfi,
+		bool update)
+{
+	struct sli4_cmd_reg_vpi_s *reg_vpi = buf;
+	u32 flags = 0;
+
+	if (!sli4 || !buf)
+		return 0;
+
+	memset(buf, 0, size);
+
+	reg_vpi->hdr.command = MBX_CMD_REG_VPI;
+
+	flags = (fc_id & SLI4_REGVPI_LOCAL_N_PORTID);
+	if (update)
+		flags |= SLI4_REGVPI_UPD;
+	else
+		flags &= ~SLI4_REGVPI_UPD;
+
+	reg_vpi->dw2_lportid_flags = cpu_to_le32(flags);
+	memcpy(reg_vpi->wwpn, &sli_wwpn, sizeof(reg_vpi->wwpn));
+	reg_vpi->vpi = cpu_to_le16(vpi);
+	reg_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a REQUEST_FEATURES command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param features_mask Features to request.
+ * @param query Use feature query mode (does not change FW).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_cmd_request_features(struct sli4_s *sli4, void *buf, size_t size,
+			 u32 features_mask, bool query)
+{
+	struct sli4_cmd_request_features_s *req_features = buf;
+
+	memset(buf, 0, size);
+
+	req_features->hdr.command = MBX_CMD_RQST_FEATURES;
+
+	if (query)
+		req_features->dw1_qry = cpu_to_le32(SLI4_REQFEAT_QRY);
+
+	req_features->cmd = cpu_to_le32(features_mask);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a UNREG_FCFI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param indicator Indicator value.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_unreg_fcfi(struct sli4_s *sli4, void *buf, size_t size,
+		   u16 indicator)
+{
+	struct sli4_cmd_unreg_fcfi_s *unreg_fcfi = buf;
+
+	if (!sli4 || !buf)
+		return 0;
+
+	memset(buf, 0, size);
+
+	unreg_fcfi->hdr.command = MBX_CMD_UNREG_FCFI;
+
+	unreg_fcfi->fcfi = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write an UNREG_RPI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param indicator Indicator value.
+ * @param which Type of unregister, such as node, port, domain, or FCF.
+ * @param fc_id FC address.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_unreg_rpi(struct sli4_s *sli4, void *buf, size_t size,
+		  u16 indicator,
+		  enum sli4_resource_e which, u32 fc_id)
+{
+	struct sli4_cmd_unreg_rpi_s *unreg_rpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	unreg_rpi->hdr.command = MBX_CMD_UNREG_RPI;
+
+	switch (which) {
+	case SLI_RSRC_RPI:
+		flags |= UNREG_RPI_II_RPI;
+		if (fc_id == U32_MAX)
+			break;
+
+		flags |= UNREG_RPI_DP;
+		unreg_rpi->dw2_dest_n_portid =
+			cpu_to_le32(fc_id & UNREG_RPI_DEST_N_PORTID_MASK);
+		break;
+	case SLI_RSRC_VPI:
+		flags |= UNREG_RPI_II_VPI;
+		break;
+	case SLI_RSRC_VFI:
+		flags |= UNREG_RPI_II_VFI;
+		break;
+	case SLI_RSRC_FCFI:
+		flags |= UNREG_RPI_II_FCFI;
+		break;
+	default:
+		efc_log_info(sli4, "unknown type %#x\n", which);
+		return EFC_FAIL;
+	}
+
+	unreg_rpi->dw1w1_flags = cpu_to_le16(flags);
+	unreg_rpi->index = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write an UNREG_VFI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param index VFI or FCFI index to unregister.
+ * @param which Type of unregister, such as domain, FCFI, or everything.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_unreg_vfi(struct sli4_s *sli4, void *buf, size_t size,
+		  u16 index, u32 which)
+{
+	struct sli4_cmd_unreg_vfi_s *unreg_vfi = buf;
+
+	memset(buf, 0, size);
+
+	unreg_vfi->hdr.command = MBX_CMD_UNREG_VFI;
+	switch (which) {
+	case SLI4_UNREG_TYPE_DOMAIN:
+		unreg_vfi->index = cpu_to_le16(index);
+		break;
+	case SLI4_UNREG_TYPE_FCF:
+		unreg_vfi->index = cpu_to_le16(index);
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		unreg_vfi->index = cpu_to_le16(U16_MAX);
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	if (which != SLI4_UNREG_TYPE_DOMAIN)
+		unreg_vfi->dw2_flags =
+			cpu_to_le16(UNREG_VFI_II_FCFI);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write an UNREG_VPI command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to the destination buffer.
+ * @param size Buffer size, in bytes.
+ * @param indicator Indicator value.
+ * @param which Type of unregister: port, domain, FCFI, everything
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_unreg_vpi(struct sli4_s *sli4, void *buf, size_t size,
+		  u16 indicator, u32 which)
+{
+	struct sli4_cmd_unreg_vpi_s *unreg_vpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	unreg_vpi->hdr.command = MBX_CMD_UNREG_VPI;
+	unreg_vpi->index = cpu_to_le16(indicator);
+	switch (which) {
+	case SLI4_UNREG_TYPE_PORT:
+		flags |= UNREG_VPI_II_VPI;
+		break;
+	case SLI4_UNREG_TYPE_DOMAIN:
+		flags |= UNREG_VPI_II_VFI;
+		break;
+	case SLI4_UNREG_TYPE_FCF:
+		flags |= UNREG_VPI_II_FCFI;
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		/* override indicator */
+		unreg_vpi->index = cpu_to_le16(U16_MAX);
+		flags |= UNREG_VPI_II_FCFI;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	unreg_vpi->dw2w0_flags = cpu_to_le16(flags);
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_MODIFY_EQ_DELAY command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param q Queue object array.
+ * @param num_q Queue object array count.
+ * @param shift Phase shift for staggering interrupts.
+ * @param delay_mult Delay multiplier for limiting interrupt frequency.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_cmd_common_modify_eq_delay(struct sli4_s *sli4, void *buf, size_t size,
+			       struct sli4_queue_s *q, int num_q, u32 shift,
+			       u32 delay_mult)
+{
+	struct sli4_rqst_cmn_modify_eq_delay_s *modify_delay = NULL;
+	int i;
+
+	modify_delay = sli_config_cmd_init(sli4, buf, size,
+				SLI_CONFIG_PYLD_LENGTH(cmn_modify_eq_delay),
+				NULL);
+	if (!modify_delay)
+		return EFC_FAIL;
+
+	modify_delay->hdr.opcode = CMN_MODIFY_EQ_DELAY;
+	modify_delay->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	modify_delay->hdr.request_length =
+		CFG_RQST_PYLD_LEN(cmn_modify_eq_delay);
+	modify_delay->num_eq = cpu_to_le32(num_q);
+
+	for (i = 0; i < num_q; i++) {
+		modify_delay->eq_delay_record[i].eq_id = cpu_to_le32(q[i].id);
+		modify_delay->eq_delay_record[i].phase = cpu_to_le32(shift);
+		modify_delay->eq_delay_record[i].delay_multiplier =
+			cpu_to_le32(delay_mult);
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a LOWLEVEL_SET_WATCHDOG command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param timeout watchdog timer timeout in seconds
+ *
+ * @return void
+ */
+void
+sli4_cmd_lowlevel_set_watchdog(struct sli4_s *sli4, void *buf,
+			       size_t size, u16 timeout)
+{
+	struct sli4_rqst_lowlevel_set_watchdog_s *req = NULL;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+			SLI_CONFIG_PYLD_LENGTH(lowlevel_set_watchdog),
+			NULL);
+	if (!req)
+		return;
+
+	req->hdr.opcode = SLI4_OPC_LOWLEVEL_SET_WATCHDOG;
+	req->hdr.subsystem = SLI4_SUBSYSTEM_LOWLEVEL;
+	req->hdr.request_length = CFG_RQST_PYLD_LEN(lowlevel_set_watchdog);
+	req->watchdog_timeout = cpu_to_le16(timeout);
+}
+
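+/**
+ * @brief Write a COMMON_GET_CNTL_ATTRIBUTES command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param dma DMA structure into which the controller attributes are written.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */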
+static int
+sli_cmd_common_get_cntl_attributes(struct sli4_s *sli4, void *buf, size_t size,
+				   struct efc_dma_s *dma)
+{
+	struct sli4_rqst_hdr_s *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_CNTL_ATTRIBUTES;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_GET_CNTL_ADDL_ATTRIBUTES command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param dma DMA structure into which the additional attributes are written.
+ *
+ * @note This creates a Version 0 message.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_cmd_common_get_cntl_addl_attributes(struct sli4_s *sli4, void *buf,
+					size_t size, struct efc_dma_s *dma)
+{
+	struct sli4_rqst_hdr_s *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_CNTL_ADDL_ATTRS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_NOP command
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param context NOP context value (passed to response, except on FC/FCoE).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_common_nop(struct sli4_s *sli4, void *buf,
+		   size_t size, uint64_t context)
+{
+	struct sli4_rqst_cmn_nop_s *nop = NULL;
+
+	nop = sli_config_cmd_init(sli4, buf, size,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_nop), NULL);
+	if (!nop)
+		return EFC_FAIL;
+
+	nop->hdr.opcode = CMN_NOP;
+	nop->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	nop->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_nop);
+
+	memcpy(&nop->context, &context, sizeof(context));
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_GET_RESOURCE_EXTENT_INFO command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param rtype Resource type (for example, XRI, VFI, VPI, and RPI).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_common_get_resource_extent_info(struct sli4_s *sli4, void *buf,
+					size_t size, u16 rtype)
+{
+	struct sli4_rqst_cmn_get_resource_extent_info_s *extent = NULL;
+
+	extent = sli_config_cmd_init(sli4, buf, size,
+			CFG_RQST_CMDSZ(cmn_get_resource_extent_info),
+				     NULL);
+	if (!extent)
+		return EFC_FAIL;
+
+	extent->hdr.opcode = CMN_GET_RSC_EXTENT_INFO;
+	extent->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	extent->hdr.request_length =
+		CFG_RQST_PYLD_LEN(cmn_get_resource_extent_info);
+
+	extent->resource_type = cpu_to_le16(rtype);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_GET_SLI4_PARAMETERS command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_common_get_sli4_parameters(struct sli4_s *sli4, void *buf,
+				   size_t size)
+{
+	struct sli4_rqst_hdr_s *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_get_sli4_params),
+				  NULL);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_SLI4_PARAMS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = CFG_RQST_PYLD_LEN(cmn_get_sli4_params);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @brief Write a COMMON_GET_PORT_NAME command to the provided buffer.
+ *
+ * @param sli4 SLI context pointer.
+ * @param buf Virtual pointer to destination buffer.
+ * @param size Buffer size in bytes.
+ *
+ * @note Function supports both version 0 and 1 forms of this command via
+ * the IF_TYPE.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_cmd_common_get_port_name(struct sli4_s *sli4, void *buf, size_t size)
+{
+	struct sli4_rqst_cmn_get_port_name_s *pname;
+
+	pname = sli_config_cmd_init(sli4, buf, size,
+				    SLI_CONFIG_PYLD_LENGTH(cmn_get_port_name),
+				    NULL);
+	if (!pname)
+		return EFC_FAIL;
+
+	pname->hdr.opcode		= CMN_GET_PORT_NAME;
+	pname->hdr.subsystem	= SLI4_SUBSYSTEM_COMMON;
+	pname->hdr.request_length	= CFG_RQST_PYLD_LEN(cmn_get_port_name);
+	pname->hdr.dw3_version	= cpu_to_le32(CMD_V1);
+
+	/* Set the port type value (ethernet=0, FC=1) for V1 commands */
+	pname->port_type = PORT_TYPE_FC;
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_WRITE_OBJECT command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param noc True if the object should be written but not committed to flash.
+ * @param eof True if this is the last write for this object.
+ * @param desired_write_length Number of bytes of data to write to the object.
+ * @param offset Offset, in bytes, from the start of the object.
+ * @param object_name Name of the object to write.
+ * @param dma DMA structure from which the data will be copied.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_common_write_object(struct sli4_s *sli4, void *buf, size_t size,
+			    u16 noc,
+			    u16 eof, u32 desired_write_length,
+			    u32 offset, char *object_name,
+			    struct efc_dma_s *dma)
+{
+	struct sli4_rqst_cmn_write_object_s *wr_obj = NULL;
+	struct sli4_bde_s *host_buf;
+	u32 dwflags = 0;
+
+	wr_obj = sli_config_cmd_init(sli4, buf, size,
+				     CFG_RQST_CMDSZ(cmn_write_object) +
+				     sizeof(*host_buf), NULL);
+	if (!wr_obj)
+		return EFC_FAIL;
+
+	wr_obj->hdr.opcode = CMN_WRITE_OBJECT;
+	wr_obj->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	wr_obj->hdr.request_length = CFG_RQST_PYLD_LEN_VAR(cmn_write_object,
+							   sizeof(*host_buf));
+	wr_obj->hdr.timeout = 0;
+	wr_obj->hdr.dw3_version = CMD_V0;
+
+	if (noc)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_NOC;
+	if (eof)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_EOF;
+	dwflags |= (desired_write_length & SLI4_RQ_DES_WRITE_LEN);
+
+	wr_obj->desired_write_len_dword = cpu_to_le32(dwflags);
+
+	wr_obj->write_offset = cpu_to_le32(offset);
+	strncpy(wr_obj->object_name, object_name,
+		sizeof(wr_obj->object_name));
+	wr_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	host_buf = (struct sli4_bde_s *)wr_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	host_buf->bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (desired_write_length & SLI4_BDE_MASK_BUFFER_LEN));
+	host_buf->u.data.low =
+		cpu_to_le32(lower_32_bits(dma->phys));
+	host_buf->u.data.high =
+		cpu_to_le32(upper_32_bits(dma->phys));
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_DELETE_OBJECT command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param object_name Name of the object to delete.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_common_delete_object(struct sli4_s *sli4, void *buf, size_t size,
+			     char *object_name)
+{
+	struct sli4_rqst_cmn_delete_object_s *del_obj = NULL;
+
+	del_obj = sli_config_cmd_init(sli4, buf, size,
+				      CFG_RQST_CMDSZ(cmn_delete_object), NULL);
+	if (!del_obj)
+		return EFC_FAIL;
+
+	del_obj->hdr.opcode = CMN_DELETE_OBJECT;
+	del_obj->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	del_obj->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_delete_object);
+	del_obj->hdr.timeout = 0;
+	del_obj->hdr.dw3_version = CMD_V0;
+
+	strncpy(del_obj->object_name, object_name,
+		sizeof(del_obj->object_name));
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_READ_OBJECT command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param desired_read_length Number of bytes of data to read from the object.
+ * @param offset Offset, in bytes, from the start of the object.
+ * @param object_name Name of the object to read.
+ * @param dma DMA structure into which the object data is read.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_common_read_object(struct sli4_s *sli4, void *buf, size_t size,
+			   u32 desired_read_length, u32 offset,
+			   char *object_name, struct efc_dma_s *dma)
+{
+	struct sli4_rqst_cmn_read_object_s *rd_obj = NULL;
+	struct sli4_bde_s *host_buf;
+
+	rd_obj = sli_config_cmd_init(sli4, buf, size,
+				     CFG_RQST_CMDSZ(cmn_read_object) +
+				     sizeof(*host_buf), NULL);
+	if (!rd_obj)
+		return EFC_FAIL;
+
+	rd_obj->hdr.opcode = CMN_READ_OBJECT;
+	rd_obj->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	rd_obj->hdr.request_length = CFG_RQST_PYLD_LEN_VAR(cmn_read_object,
+							   sizeof(*host_buf));
+	rd_obj->hdr.timeout = 0;
+	rd_obj->hdr.dw3_version = CMD_V0;
+
+	rd_obj->desired_read_length_dword =
+		cpu_to_le32(desired_read_length & SLI4_REQ_DESIRE_READLEN);
+
+	rd_obj->read_offset = cpu_to_le32(offset);
+	strncpy(rd_obj->object_name, object_name,
+		sizeof(rd_obj->object_name));
+	rd_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	host_buf = (struct sli4_bde_s *)rd_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	host_buf->bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (desired_read_length & SLI4_BDE_MASK_BUFFER_LEN));
+	if (dma) {
+		host_buf->u.data.low =
+			cpu_to_le32(lower_32_bits(dma->phys));
+		host_buf->u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+	} else {
+		host_buf->u.data.low = 0;
+		host_buf->u.data.high = 0;
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a DMTF_EXEC_CLP_CMD command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param cmd DMA structure that describes the buffer for the command.
+ * @param resp DMA structure that describes the buffer for the response.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4_s *sli4, void *buf, size_t size,
+			  struct efc_dma_s *cmd,
+			  struct efc_dma_s *resp)
+{
+	struct sli4_rqst_dmtf_exec_clp_cmd_s *clp_cmd = NULL;
+
+	clp_cmd = sli_config_cmd_init(sli4, buf, size,
+				      CFG_RQST_CMDSZ(dmtf_exec_clp_cmd), NULL);
+	if (!clp_cmd)
+		return EFC_FAIL;
+
+	clp_cmd->hdr.opcode = DMTF_EXEC_CLP_CMD;
+	clp_cmd->hdr.subsystem = SLI4_SUBSYSTEM_DMTF;
+	clp_cmd->hdr.request_length = CFG_RQST_PYLD_LEN(dmtf_exec_clp_cmd);
+	clp_cmd->hdr.timeout = 0;
+	clp_cmd->hdr.dw3_version = CMD_V0;
+	clp_cmd->cmd_buf_length = cpu_to_le32(cmd->size);
+	clp_cmd->cmd_buf_addr_low =  cpu_to_le32(lower_32_bits(cmd->phys));
+	clp_cmd->cmd_buf_addr_high =  cpu_to_le32(upper_32_bits(cmd->phys));
+	clp_cmd->resp_buf_length = cpu_to_le32(resp->size);
+	clp_cmd->resp_buf_addr_low =  cpu_to_le32(lower_32_bits(resp->phys));
+	clp_cmd->resp_buf_addr_high =  cpu_to_le32(upper_32_bits(resp->phys));
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_SET_DUMP_LOCATION command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param query Zero to set the dump location, non-zero to query the dump size.
+ * @param is_buffer_list Set to one if the buffer is a set of buffer
+ * descriptors, or to zero if the buffer is a contiguous dump area.
+ * @param buffer DMA structure to which the dump will be copied.
+ * @param fdb Set to one to set the FDB flag in the request.
+ *
+ * @return Returns EFC_SUCCESS on success, or EFC_FAIL on failure.
+ */
+int
+sli_cmd_common_set_dump_location(struct sli4_s *sli4, void *buf,
+				 size_t size, bool query,
+				 bool is_buffer_list,
+				 struct efc_dma_s *buffer, u8 fdb)
+{
+	struct sli4_rqst_cmn_set_dump_location_s *set_dump_loc = NULL;
+	u32 buffer_length_flag = 0;
+
+	set_dump_loc = sli_config_cmd_init(sli4, buf, size,
+					CFG_RQST_CMDSZ(cmn_set_dump_location),
+					NULL);
+	if (!set_dump_loc)
+		return EFC_FAIL;
+
+	set_dump_loc->hdr.opcode = CMN_SET_DUMP_LOCATION;
+	set_dump_loc->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	set_dump_loc->hdr.request_length =
+		CFG_RQST_PYLD_LEN(cmn_set_dump_location);
+	set_dump_loc->hdr.timeout = 0;
+	set_dump_loc->hdr.dw3_version = CMD_V0;
+
+	if (is_buffer_list)
+		buffer_length_flag |= SLI4_RQ_COM_SET_DUMP_BLP;
+
+	if (query)
+		buffer_length_flag |= SLI4_RQ_COM_SET_DUMP_QRY;
+
+	if (fdb)
+		buffer_length_flag |= SLI4_RQ_COM_SET_DUMP_FDB;
+
+	if (buffer) {
+		set_dump_loc->buf_addr_low =
+			cpu_to_le32(lower_32_bits(buffer->phys));
+		set_dump_loc->buf_addr_high =
+			cpu_to_le32(upper_32_bits(buffer->phys));
+
+		buffer_length_flag |= (buffer->len &
+				       SLI4_RQ_COM_SET_DUMP_BUFFER_LEN);
+	} else {
+		set_dump_loc->buf_addr_low = 0;
+		set_dump_loc->buf_addr_high = 0;
+		set_dump_loc->buffer_length_dword = 0;
+	}
+	set_dump_loc->buffer_length_dword = cpu_to_le32(buffer_length_flag);
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Write a COMMON_SET_FEATURES command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param feature Feature to set.
+ * @param param_len Length of the parameter (must be a multiple of 4 bytes).
+ * @param parameter Pointer to the parameter value.
+ *
+ * @return Returns EFC_SUCCESS on success, or EFC_FAIL on failure.
+ */
+int
+sli_cmd_common_set_features(struct sli4_s *sli4, void *buf, size_t size,
+			    u32 feature,
+			    u32 param_len,
+			    void *parameter)
+{
+	struct sli4_rqst_cmn_set_features_s *cmd = NULL;
+
+	cmd = sli_config_cmd_init(sli4, buf, size,
+				  CFG_RQST_CMDSZ(cmn_set_features), NULL);
+	if (!cmd)
+		return EFC_FAIL;
+
+	cmd->hdr.opcode = CMN_SET_FEATURES;
+	cmd->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
+	cmd->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_set_features);
+	cmd->hdr.timeout = 0;
+	cmd->hdr.dw3_version = CMD_V0;
+
+	cmd->feature = cpu_to_le32(feature);
+	cmd->param_len = cpu_to_le32(param_len);
+	memcpy(cmd->params, parameter, param_len);
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli
+ * @brief Check the mailbox/queue completion entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Pointer to the MCQE.
+ *
+ * @return Returns the MCQE completion status (0 on success), or -2 if the
+ * entry has only been consumed and is not yet complete.
+ */
+int
+sli_cqe_mq(struct sli4_s *sli4, void *buf)
+{
+	struct sli4_mcqe_s *mcqe = buf;
+	u32 dwflags = le32_to_cpu(mcqe->dw3_flags);
+	/*
+	 * Firmware can split mbx completions into two MCQEs: first with only
+	 * the "consumed" bit set and a second with the "complete" bit set.
+	 * Thus, ignore MCQE unless "complete" is set.
+	 */
+	if (!(dwflags & SLI4_MCQE_COMPLETED))
+		return -2;
+
+	if (le16_to_cpu(mcqe->completion_status)) {
+		efc_log_info(sli4, "status(st=%#x ext=%#x con=%d cmp=%d ae=%d val=%d)\n",
+			le16_to_cpu(mcqe->completion_status),
+			      le16_to_cpu(mcqe->extended_status),
+			      (dwflags & SLI4_MCQE_CONSUMED),
+			      (dwflags & SLI4_MCQE_COMPLETED),
+			      (dwflags & SLI4_MCQE_AE),
+			      (dwflags & SLI4_MCQE_VALID));
+	}
+
+	return le16_to_cpu(mcqe->completion_status);
+}
+
+/**
+ * @ingroup sli
+ * @brief Check the asynchronous event completion entry.
+ *
+ * @param sli4 SLI context.
+ * @param buf Pointer to the ACQE.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_cqe_async(struct sli4_s *sli4, void *buf)
+{
+	struct sli4_acqe_s *acqe = buf;
+	int rc = -1;
+
+	if (!buf) {
+		efc_log_err(sli4, "bad parameter sli4=%p buf=%p\n", sli4, buf);
+		return -1;
+	}
+
+	switch (acqe->event_code) {
+	case SLI4_ACQE_EVENT_CODE_LINK_STATE:
+		rc = sli_fc_process_link_state(sli4, buf);
+		break;
+	case SLI4_ACQE_EVENT_CODE_GRP_5:
+		efc_log_info(sli4, "ACQE GRP5\n");
+		break;
+	case SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT:
+		efc_log_info(sli4, "ACQE SLI Port, type=0x%x, data1,2=0x%08x,0x%08x\n",
+			acqe->event_type,
+			le32_to_cpu(acqe->event_data[0]),
+			le32_to_cpu(acqe->event_data[1]));
+		break;
+	case SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT:
+		rc = sli_fc_process_link_attention(sli4, buf);
+		break;
+	default:
+		efc_log_info(sli4, "ACQE unknown=%#x\n",
+			acqe->event_code);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 20ab558db2d2..24ae702f9427 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -12,6 +12,8 @@
 #ifndef _SLI4_H
 #define _SLI4_H
 
+#include <linux/pci.h>
+#include <linux/delay.h>
 #include "scsi/fc/fc_els.h"
 #include "scsi/fc/fc_fs.h"
 #include "../include/efc_common.h"
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 07/32] elx: libefc_sli: APIs to setup SLI library
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (5 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 06/32] elx: libefc_sli: bmbx routines and SLI config commands James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 08/32] elx: libefc: Generic state machine framework James Smart
                   ` (25 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds APIs to initialize the library, initialize
the SLI Port, reset firmware, terminate the SLI Port, and
terminate the library.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
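Note for reviewers: a rough sketch of the call sequence these APIs are
intended for (illustrative only, not part of the patch; "efct" stands in
for the caller's driver context and "regs" for the mapped BAR addresses):

	struct sli4_s sli;
	int rc;

	/* Discover the chip, reset the SLI Port, and read its config */
	rc = sli_setup(&sli, efct, pdev, regs);
	if (!rc)
		/* Commit the requested feature set to the firmware */
		rc = sli_init(&sli);

	if (!rc) {
		/* ... create queues, register resources, run I/O ... */
	}

	/* On remove, or after sli_fw_reset() to activate new firmware */
	sli_teardown(&sli);
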
 drivers/scsi/elx/libefc_sli/sli4.c | 1472 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h |  578 ++++++++++++++
 2 files changed, 2050 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 1306d0a335c6..e7e7ce6cbd90 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -6048,3 +6048,1475 @@ sli_cqe_async(struct sli4_s *sli4, void *buf)
 
 	return rc;
 }
+
+/**
+ * @ingroup sli
+ * @brief Determine if the chip FW is in a ready state
+ *
+ * @param sli4 SLI context.
+ *
+ * @return
+ * - 0 if the FW is not ready.
+ * - 1 if the FW is ready.
+ */
+int
+sli_fw_ready(struct sli4_s *sli4)
+{
+	u32 val;
+	/*
+	 * Is firmware ready for operation? Check needed depends on IF_TYPE
+	 */
+	val = sli_reg_read_status(sli4);
+	return (val & SLI4_PORT_STATUS_RDY) ? 1 : 0;
+}
+
+/**
+ * @brief Reset the SLI Port.
+ *
+ * @par Description
+ * Writes the initialize-port bit to SLIPORT_CONTROL and waits for the
+ * firmware to report ready.
+ *
+ * @param sli4 SLI context pointer.
+ *
+ * @return Returns 0 on success, or -1 if the port does not become ready.
+ */
+
+static int
+sli_sliport_reset(struct sli4_s *sli4)
+{
+	u32 iter, val;
+	int rc = -1;
+
+	val = SLI4_PORT_CTRL_IP;
+	/* Initialize port, endian */
+	writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+
+	for (iter = 0; iter < 3000; iter++) {
+		mdelay(10);	/* 10 ms */
+		if (sli_fw_ready(sli4) == 1) {
+			rc = 0;
+			break;
+		}
+	}
+
+	if (rc != 0)
+		efc_log_crit(sli4, "port failed to become ready after initialization\n");
+
+	return rc;
+}
+
+/**
+ * @brief check to see if the FW is ready.
+ *
+ * @par Description
+ * Based on <i>SLI-4 Architecture Specification, Revision 4.x0-13 (2012).</i>.
+ *
+ * @param sli4 SLI context.
+ * @param timeout_ms Time, in milliseconds, to wait for the port to be ready
+ * before failing.
+ *
+ * @return Returns true if ready, or false otherwise.
+ */
+static bool
+sli_wait_for_fw_ready(struct sli4_s *sli4, u32 timeout_ms)
+{
+	u32 iter = timeout_ms / (SLI4_INIT_PORT_DELAY_US / 1000);
+	bool ready = false;
+
+	do {
+		iter--;
+		mdelay(10);	/* 10 ms */
+		if (sli_fw_ready(sli4) == 1)
+			ready = true;
+	} while (!ready && (iter > 0));
+
+	return ready;
+}
+
+/**
+ * @brief Initialize the firmware.
+ *
+ * @par Description
+ * Based on <i>SLI-4 Architecture Specification, Revision 4.x0-13 (2012).</i>.
+ *
+ * @param sli4 SLI context.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_fw_init(struct sli4_s *sli4)
+{
+	bool ready;
+
+	/*
+	 * Is firmware ready for operation?
+	 */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return -1;
+	}
+
+	/*
+	 * Reset port to a known state
+	 */
+	if (sli_sliport_reset(sli4))
+		return -1;
+
+	return 0;
+}
+
+/**
+ * @brief Terminate the firmware.
+ *
+ * @param sli4 SLI context.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+sli_fw_term(struct sli4_s *sli4)
+{
+	/* type 2 etc. use SLIPORT_CONTROL to initialize port */
+	sli_sliport_reset(sli4);
+	return 0;
+}
+
+static int
+sli_request_features(struct sli4_s *sli4, u32 *features, bool query)
+{
+	if (!sli_cmd_request_features(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				     *features, query)) {
+		struct sli4_cmd_request_features_s *req_features =
+							sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		}
+		if (le16_to_cpu(req_features->hdr.status)) {
+			efc_log_err(sli4, "REQUEST_FEATURES bad status %#x\n",
+			       le16_to_cpu(req_features->hdr.status));
+			return -1;
+		}
+		*features = le32_to_cpu(req_features->resp);
+	} else {
+		efc_log_err(sli4, "bad REQUEST_FEATURES write\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/**
+ * @brief Calculate max queue entries.
+ *
+ * @param sli4 SLI context.
+ */
+void
+sli_calc_max_qentries(struct sli4_s *sli4)
+{
+	enum sli4_qtype_e q;
+	u32 alloc_size, qentries, qentry_size;
+
+	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
+		sli4->qinfo.max_qentries[q] =
+			sli_convert_mask_to_count(sli4->qinfo.count_method[q],
+						  sli4->qinfo.count_mask[q]);
+	}
+
+	/* A single, contiguous DMA allocation will be made for each queue
+	 * of size (max_qentries * queue entry size); since these can be large,
+	 * check against the OS max DMA allocation size
+	 */
+	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
+		qentries = sli4->qinfo.max_qentries[q];
+		qentry_size = sli_get_queue_entry_size(sli4, q);
+		alloc_size = qentries * qentry_size;
+
+		efc_log_info(sli4, "[%s]: max_qentries from %d to %d\n",
+			     SLI_QNAME[q],
+			     sli4->qinfo.max_qentries[q], qentries);
+		sli4->qinfo.max_qentries[q] = qentries;
+	}
+}
+
+static int
+sli_get_config(struct sli4_s *sli4)
+{
+	struct efc_dma_s data;
+	u32 psize;
+
+	/*
+	 * Read the device configuration
+	 */
+	if (!sli_cmd_read_config(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_read_config_s	*read_config = sli4->bmbx.virt;
+		u32 i;
+		u32 total, total_size;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox fail (READ_CONFIG)\n");
+			return -1;
+		}
+		if (le16_to_cpu(read_config->hdr.status)) {
+			efc_log_err(sli4, "READ_CONFIG bad status %#x\n",
+			       le16_to_cpu(read_config->hdr.status));
+			return -1;
+		}
+
+		sli4->has_extents =
+			le32_to_cpu(read_config->ext_dword) &
+				    SLI4_READ_CFG_RESP_RESOURCE_EXT;
+		if (!sli4->has_extents) {
+			u32	i = 0, size = 0;
+			u32	*base = sli4->extent[0].base;
+
+			if (!base) {
+				size = SLI_RSRC_MAX * sizeof(u32);
+				base = kzalloc(size, GFP_ATOMIC);
+				if (!base)
+					return -1;
+			}
+
+			for (i = 0; i < SLI_RSRC_MAX; i++) {
+				sli4->extent[i].number = 1;
+				sli4->extent[i].n_alloc = 0;
+				sli4->extent[i].base = &base[i];
+			}
+
+			sli4->extent[SLI_RSRC_VFI].base[0] =
+				le16_to_cpu(read_config->vfi_base);
+			sli4->extent[SLI_RSRC_VFI].size =
+				le16_to_cpu(read_config->vfi_count);
+
+			sli4->extent[SLI_RSRC_VPI].base[0] =
+				le16_to_cpu(read_config->vpi_base);
+			sli4->extent[SLI_RSRC_VPI].size =
+				le16_to_cpu(read_config->vpi_count);
+
+			sli4->extent[SLI_RSRC_RPI].base[0] =
+				le16_to_cpu(read_config->rpi_base);
+			sli4->extent[SLI_RSRC_RPI].size =
+				le16_to_cpu(read_config->rpi_count);
+
+			sli4->extent[SLI_RSRC_XRI].base[0] =
+				le16_to_cpu(read_config->xri_base);
+			sli4->extent[SLI_RSRC_XRI].size =
+				le16_to_cpu(read_config->xri_count);
+
+			sli4->extent[SLI_RSRC_FCFI].base[0] = 0;
+			sli4->extent[SLI_RSRC_FCFI].size =
+				le16_to_cpu(read_config->fcfi_count);
+		}
+
+		for (i = 0; i < SLI_RSRC_MAX; i++) {
+			total = sli4->extent[i].number *
+				sli4->extent[i].size;
+			total_size = BITS_TO_LONGS(total) * sizeof(long);
+			sli4->extent[i].use_map =
+				kzalloc(total_size, GFP_ATOMIC);
+			if (!sli4->extent[i].use_map) {
+				efc_log_err(sli4, "bitmap memory allocation failed %d\n",
+				       i);
+				return -1;
+			}
+			sli4->extent[i].map_size = total;
+		}
+
+		sli4->topology =
+				(le32_to_cpu(read_config->topology_dword) &
+				 SLI4_READ_CFG_RESP_TOPOLOGY) >> 24;
+		switch (sli4->topology) {
+		case SLI4_READ_CFG_TOPO_FC:
+			efc_log_info(sli4, "FC (unknown)\n");
+			break;
+		case SLI4_READ_CFG_TOPO_FC_DA:
+			efc_log_info(sli4, "FC (direct attach)\n");
+			break;
+		case SLI4_READ_CFG_TOPO_FC_AL:
+			efc_log_info(sli4, "FC (arbitrated loop)\n");
+			break;
+		default:
+			efc_log_info(sli4, "bad topology %#x\n",
+				sli4->topology);
+		}
+
+		sli4->e_d_tov = le16_to_cpu(read_config->e_d_tov);
+		sli4->r_a_tov = le16_to_cpu(read_config->r_a_tov);
+
+		sli4->link_module_type = le16_to_cpu(read_config->lmt);
+
+		sli4->qinfo.max_qcount[SLI_QTYPE_EQ] =
+				le16_to_cpu(read_config->eq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_CQ] =
+				le16_to_cpu(read_config->cq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_WQ] =
+				le16_to_cpu(read_config->wq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_RQ] =
+				le16_to_cpu(read_config->rq_count);
+
+		/*
+		 * READ_CONFIG doesn't give the max number of MQ. Applications
+		 * will typically want 1, but we may need another at some future
+		 * date. Dummy up a "max" MQ count here.
+		 */
+		sli4->qinfo.max_qcount[SLI_QTYPE_MQ] = SLI_USER_MQ_COUNT;
+	} else {
+		efc_log_err(sli4, "bad READ_CONFIG write\n");
+		return -1;
+	}
+
+	if (!sli_cmd_common_get_sli4_parameters(sli4, sli4->bmbx.virt,
+					       SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_cmn_get_sli4_params_s	*parms =
+			(struct sli4_rsp_cmn_get_sli4_params_s *)
+			(((u8 *)sli4->bmbx.virt) +
+			offsetof(struct sli4_cmd_sli_config_s, payload.embed));
+		u32 dwflags_loopback;
+		u32 dwflags_eq_page_cnt;
+		u32 dwflags_cq_page_cnt;
+		u32 dwflags_mq_page_cnt;
+		u32 dwflags_wq_page_cnt;
+		u32 dwflags_rq_page_cnt;
+		u32 dwflags_sgl_page_cnt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		} else if (parms->hdr.status) {
+			efc_log_err(sli4, "COMMON_GET_SLI4_PARAMETERS bad status %#x",
+			       parms->hdr.status);
+			efc_log_err(sli4, "additional status %#x\n",
+			       parms->hdr.additional_status);
+			return -1;
+		}
+
+		dwflags_loopback = le32_to_cpu(parms->dw16_loopback_scope);
+		dwflags_eq_page_cnt = le32_to_cpu(parms->dw6_eq_page_cnt);
+		dwflags_cq_page_cnt = le32_to_cpu(parms->dw8_cq_page_cnt);
+		dwflags_mq_page_cnt = le32_to_cpu(parms->dw10_mq_page_cnt);
+		dwflags_wq_page_cnt = le32_to_cpu(parms->dw12_wq_page_cnt);
+		dwflags_rq_page_cnt = le32_to_cpu(parms->dw14_rq_page_cnt);
+
+		sli4->auto_reg =
+			(dwflags_loopback & RSP_GET_PARAM_AREG);
+		sli4->auto_xfer_rdy =
+			(dwflags_loopback & RSP_GET_PARAM_AGXF);
+		sli4->hdr_template_req =
+			(dwflags_loopback & RSP_GET_PARAM_HDRR);
+		sli4->t10_dif_inline_capable =
+			(dwflags_loopback & RSP_GET_PARAM_TIMM);
+		sli4->t10_dif_separate_capable =
+			(dwflags_loopback & RSP_GET_PARAM_TSMM);
+
+		sli4->mq_create_version =
+				GET_Q_CREATE_VERSION(dwflags_mq_page_cnt);
+		sli4->cq_create_version =
+				GET_Q_CREATE_VERSION(dwflags_cq_page_cnt);
+
+		sli4->rq_min_buf_size =
+			le16_to_cpu(parms->min_rq_buffer_size);
+		sli4->rq_max_buf_size =
+			le32_to_cpu(parms->max_rq_buffer_size);
+
+		sli4->qinfo.qpage_count[SLI_QTYPE_EQ] =
+			(dwflags_eq_page_cnt & RSP_GET_PARAM_EQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_CQ] =
+			(dwflags_cq_page_cnt & RSP_GET_PARAM_CQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_MQ] =
+			(dwflags_mq_page_cnt & RSP_GET_PARAM_MQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_WQ] =
+			(dwflags_wq_page_cnt & RSP_GET_PARAM_WQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_RQ] =
+			(dwflags_rq_page_cnt & RSP_GET_PARAM_RQ_PAGE_CNT_MASK);
+
+		/* save count methods and masks for each queue type */
+
+		sli4->qinfo.count_mask[SLI_QTYPE_EQ] =
+				le16_to_cpu(parms->eqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_EQ] =
+				GET_Q_CNT_METHOD(dwflags_eq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_CQ] =
+				le16_to_cpu(parms->cqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_CQ] =
+				GET_Q_CNT_METHOD(dwflags_cq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_MQ] =
+				le16_to_cpu(parms->mqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_MQ] =
+				GET_Q_CNT_METHOD(dwflags_mq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_WQ] =
+				le16_to_cpu(parms->wqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_WQ] =
+				GET_Q_CNT_METHOD(dwflags_wq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_RQ] =
+				le16_to_cpu(parms->rqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_RQ] =
+				GET_Q_CNT_METHOD(dwflags_rq_page_cnt);
+
+		/* now calculate max queue entries */
+		sli_calc_max_qentries(sli4);
+
+		dwflags_sgl_page_cnt = le32_to_cpu(parms->dw18_sgl_page_cnt);
+
+		/* max # of pages */
+		sli4->max_sgl_pages =
+				(dwflags_sgl_page_cnt &
+				 RSP_GET_PARAM_SGL_PAGE_CNT_MASK);
+
+		/* bit map of available sizes */
+		sli4->sgl_page_sizes =
+				(dwflags_sgl_page_cnt &
+				 RSP_GET_PARAM_SGL_PAGE_SZS_MASK) >> 8;
+		/* ignore HLM here. Use value from REQUEST_FEATURES */
+		sli4->sge_supported_length =
+				le32_to_cpu(parms->sge_supported_length);
+		sli4->sgl_pre_registration_required =
+			(dwflags_loopback & RSP_GET_PARAM_SGLR);
+		/* default to using pre-registered SGL's */
+		sli4->sgl_pre_registered = true;
+
+		sli4->perf_hint =
+			(dwflags_loopback & RSP_GET_PARAM_PHON);
+		sli4->perf_wq_id_association =
+			(dwflags_loopback & RSP_GET_PARAM_PHWQ);
+
+		sli4->rq_batch =
+			(le16_to_cpu(parms->dw15w1_rq_db_window) &
+			 RSP_GET_PARAM_RQ_DB_WINDOW_MASK) >> 12;
+
+		/* Use the highest available WQE size. */
+		if (((dwflags_wq_page_cnt &
+		    RSP_GET_PARAM_WQE_SZS_MASK) >> 8) &
+		    SLI4_128BYTE_WQE_SUPPORT)
+			sli4->wqe_size = SLI4_WQE_EXT_BYTES;
+		else
+			sli4->wqe_size = SLI4_WQE_BYTES;
+	}
+
+	sli4->port_number = 0;
+
+	/*
+	 * Issue COMMON_GET_CNTL_ATTRIBUTES to get port_number. Temporarily
+	 * uses VPD DMA buffer as the response won't fit in the embedded
+	 * buffer.
+	 */
+	if (!sli_cmd_common_get_cntl_attributes(sli4, sli4->bmbx.virt,
+					       SLI4_BMBX_SIZE,
+					       &sli4->vpd_data)) {
+		struct sli4_rsp_cmn_get_cntl_attributes_s *attr =
+			sli4->vpd_data.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		} else if (attr->hdr.status) {
+			efc_log_err(sli4, "COMMON_GET_CNTL_ATTRIBUTES bad status %#x",
+			       attr->hdr.status);
+			efc_log_err(sli4, "additional status %#x\n",
+			       attr->hdr.additional_status);
+			return -1;
+		}
+
+		sli4->port_number = (attr->port_num_type_flags &
+					    SLI4_CNTL_ATTR_PORTNUM);
+
+		memcpy(sli4->bios_version_string,
+		       attr->bios_version_str,
+		       sizeof(sli4->bios_version_string));
+	} else {
+		efc_log_err(sli4, "bad COMMON_GET_CNTL_ATTRIBUTES write\n");
+		return -1;
+	}
+
+	psize = sizeof(struct sli4_rsp_cmn_get_cntl_addl_attributes_s);
+	data.size = psize;
+	data.virt = dma_alloc_coherent(&sli4->pcidev->dev, data.size,
+				       &data.phys, GFP_DMA);
+	if (!data.virt) {
+		memset(&data, 0, sizeof(struct efc_dma_s));
+		efc_log_err(sli4, "Failed to allocate memory for GET_CNTL_ADDL_ATTR\n");
+	} else {
+		if (!sli_cmd_common_get_cntl_addl_attributes(sli4,
+							    sli4->bmbx.virt,
+							    SLI4_BMBX_SIZE,
+							    &data)) {
+			struct sli4_rsp_cmn_get_cntl_addl_attributes_s *attr;
+
+			attr = data.virt;
+			if (sli_bmbx_command(sli4)) {
+				efc_log_crit(sli4, "mailbox fail (GET_CNTL_ADDL_ATTR)\n");
+				dma_free_coherent(&sli4->pcidev->dev, data.size,
+						  data.virt, data.phys);
+				return -1;
+			}
+			if (attr->hdr.status) {
+				efc_log_err(sli4, "GET_CNTL_ADDL_ATTR bad status %#x\n",
+				       attr->hdr.status);
+				dma_free_coherent(&sli4->pcidev->dev, data.size,
+						  data.virt, data.phys);
+				return -1;
+			}
+
+			memcpy(sli4->ipl_name, attr->ipl_file_name,
+			       sizeof(sli4->ipl_name));
+
+			efc_log_info(sli4, "IPL:%s\n",
+				(char *)sli4->ipl_name);
+		} else {
+			efc_log_err(sli4, "bad GET_CNTL_ADDL_ATTR write\n");
+			dma_free_coherent(&sli4->pcidev->dev, data.size,
+					  data.virt, data.phys);
+			return -1;
+		}
+
+		dma_free_coherent(&sli4->pcidev->dev, data.size, data.virt,
+				  data.phys);
+		memset(&data, 0, sizeof(struct efc_dma_s));
+	}
+
+	if (!sli_cmd_common_get_port_name(sli4, sli4->bmbx.virt,
+					 SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_cmn_get_port_name_s	*port_name =
+			(struct sli4_rsp_cmn_get_port_name_s *)
+			(((u8 *)sli4->bmbx.virt) +
+			offsetof(struct sli4_cmd_sli_config_s, payload.embed));
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		}
+
+		sli4->port_name[0] =
+			port_name->port_name[sli4->port_number];
+	}
+	sli4->port_name[1] = '\0';
+
+	if (!sli_cmd_read_rev(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+			     &sli4->vpd_data)) {
+		struct sli4_cmd_read_rev_s	*read_rev = sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox write fail (READ_REV)\n");
+			return -1;
+		}
+		if (le16_to_cpu(read_rev->hdr.status)) {
+			efc_log_err(sli4, "READ_REV bad status %#x\n",
+			       le16_to_cpu(read_rev->hdr.status));
+			return -1;
+		}
+
+		sli4->fw_rev[0] =
+				le32_to_cpu(read_rev->first_fw_id);
+		memcpy(sli4->fw_name[0], read_rev->first_fw_name,
+		       sizeof(sli4->fw_name[0]));
+
+		sli4->fw_rev[1] =
+				le32_to_cpu(read_rev->second_fw_id);
+		memcpy(sli4->fw_name[1], read_rev->second_fw_name,
+		       sizeof(sli4->fw_name[1]));
+
+		sli4->hw_rev[0] = le32_to_cpu(read_rev->first_hw_rev);
+		sli4->hw_rev[1] = le32_to_cpu(read_rev->second_hw_rev);
+		sli4->hw_rev[2] = le32_to_cpu(read_rev->third_hw_rev);
+
+		efc_log_info(sli4, "FW1:%s (%08x) / FW2:%s (%08x)\n",
+			read_rev->first_fw_name,
+			      le32_to_cpu(read_rev->first_fw_id),
+			      read_rev->second_fw_name,
+			      le32_to_cpu(read_rev->second_fw_id));
+
+		efc_log_info(sli4, "HW1: %08x / HW2: %08x\n",
+			le32_to_cpu(read_rev->first_hw_rev),
+			      le32_to_cpu(read_rev->second_hw_rev));
+
+		/* Check that all VPD data was returned */
+		if (le32_to_cpu(read_rev->returned_vpd_length) !=
+		    le32_to_cpu(read_rev->actual_vpd_length)) {
+			efc_log_info(sli4, "VPD length: avail=%d returned=%d actual=%d\n",
+				le32_to_cpu(read_rev->available_length_dword) &
+					    SLI4_READ_REV_AVAILABLE_LENGTH,
+				le32_to_cpu(read_rev->returned_vpd_length),
+				le32_to_cpu(read_rev->actual_vpd_length));
+		}
+		sli4->vpd_length = le32_to_cpu(read_rev->returned_vpd_length);
+	} else {
+		efc_log_err(sli4, "bad READ_REV write\n");
+		return -1;
+	}
+
+	if (!sli_cmd_read_nvparms(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {
+		struct sli4_cmd_read_nvparms_s *read_nvparms = sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox fail (READ_NVPARMS)\n");
+			return -1;
+		}
+		if (le16_to_cpu(read_nvparms->hdr.status)) {
+			efc_log_err(sli4, "READ_NVPARMS bad status %#x\n",
+			       le16_to_cpu(read_nvparms->hdr.status));
+			return -1;
+		}
+
+		memcpy(sli4->wwpn, read_nvparms->wwpn,
+		       sizeof(sli4->wwpn));
+		memcpy(sli4->wwnn, read_nvparms->wwnn,
+		       sizeof(sli4->wwnn));
+
+		efc_log_info(sli4, "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+			sli4->wwpn[0],
+			      sli4->wwpn[1],
+			      sli4->wwpn[2],
+			      sli4->wwpn[3],
+			      sli4->wwpn[4],
+			      sli4->wwpn[5],
+			      sli4->wwpn[6],
+			      sli4->wwpn[7]);
+		efc_log_info(sli4, "WWNN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+			sli4->wwnn[0],
+			      sli4->wwnn[1],
+			      sli4->wwnn[2],
+			      sli4->wwnn[3],
+			      sli4->wwnn[4],
+			      sli4->wwnn[5],
+			      sli4->wwnn[6],
+			      sli4->wwnn[7]);
+	} else {
+		efc_log_err(sli4, "bad READ_NVPARMS write\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/*
+ * Public functions
+ */
+
+/**
+ * @ingroup sli
+ * @brief Set up the SLI context.
+ *
+ * @param sli4 SLI context.
+ * @param os Device abstraction.
+ * @param pdev PCI device.
+ * @param reg Array of mapped BAR register base addresses.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_setup(struct sli4_s *sli4, void *os, struct pci_dev  *pdev,
+	  void __iomem *reg[])
+{
+	u32 intf = U32_MAX;
+	u32 pci_class_rev = 0;
+	u32 rev_id = 0;
+	u32 family = 0;
+	u32 asic_id = 0;
+	u32 i;
+	struct sli4_asic_entry_t *asic;
+
+	memset(sli4, 0, sizeof(struct sli4_s));
+
+	sli4->os = os;
+	sli4->pcidev = pdev;
+
+	for (i = 0; i < 6; i++)
+		sli4->reg[i] = reg[i];
+	/*
+	 * Read the SLI_INTF register to discover the register layout
+	 * and other capability information
+	 */
+	pci_read_config_dword(pdev, SLI4_INTF_REG, &intf);
+
+	if ((intf & SLI4_INTF_VALID_MASK) != (u32)SLI4_INTF_VALID_VALUE) {
+		efc_log_err(sli4, "SLI_INTF is not valid\n");
+		return -1;
+	}
+
+	/* driver only supports SLI-4 */
+	if ((intf & SLI4_INTF_REV_MASK) != SLI4_INTF_REV_S4) {
+		efc_log_err(sli4, "Unsupported SLI revision (intf=%#x)\n",
+		       intf);
+		return -1;
+	}
+
+	sli4->sli_family = intf & SLI4_INTF_FAMILY_MASK;
+
+	sli4->if_type = intf & SLI4_INTF_IF_TYPE_MASK;
+	efc_log_info(sli4, "status=%#x error1=%#x error2=%#x\n",
+		sli_reg_read_status(sli4),
+			sli_reg_read_err1(sli4),
+			sli_reg_read_err2(sli4));
+
+	/*
+	 * set the ASIC type and revision
+	 */
+	pci_read_config_dword(pdev, PCI_CLASS_REVISION, &pci_class_rev);
+	rev_id = pci_class_rev & 0xff;
+	family = sli4->sli_family;
+	if (family == SLI4_FAMILY_CHECK_ASIC_TYPE) {
+		pci_read_config_dword(pdev, SLI4_ASIC_ID_REG, &asic_id);
+
+		family = asic_id & SLI4_ASIC_GEN_MASK;
+	}
+
+	for (i = 0, asic = sli4_asic_table; i < ARRAY_SIZE(sli4_asic_table);
+	     i++, asic++) {
+		if (rev_id == asic->rev_id && family == asic->family) {
+			sli4->asic_type = family;
+			sli4->asic_rev = rev_id;
+			break;
+		}
+	}
+	/* Fail if no matching asic type/rev was found */
+	if (!sli4->asic_type || !sli4->asic_rev) {
+		efc_log_err(sli4, "no matching asic family/rev found: %02x/%02x\n",
+		       family, rev_id);
+		return -1;
+	}
+
+	/*
+	 * The bootstrap mailbox is equivalent to a MQ with a single 256 byte
+	 * entry, a CQ with a single 16 byte entry, and no event queue.
+	 * Alignment must be 16 bytes as the low order address bits in the
+	 * address register are also control / status.
+	 */
+	sli4->bmbx.size = SLI4_BMBX_SIZE + sizeof(struct sli4_mcqe_s);
+	sli4->bmbx.virt = dma_alloc_coherent(&pdev->dev, sli4->bmbx.size,
+					     &sli4->bmbx.phys, GFP_DMA);
+	if (!sli4->bmbx.virt) {
+		memset(&sli4->bmbx, 0, sizeof(struct efc_dma_s));
+		efc_log_err(sli4, "bootstrap mailbox allocation failed\n");
+		return -1;
+	}
+
+	if (sli4->bmbx.phys & SLI4_BMBX_MASK_LO) {
+		efc_log_err(sli4, "bad alignment for bootstrap mailbox\n");
+		return -1;
+	}
+
+	efc_log_info(sli4, "bmbx v=%p p=0x%x %08x s=%zd\n", sli4->bmbx.virt,
+		upper_32_bits(sli4->bmbx.phys),
+		      lower_32_bits(sli4->bmbx.phys), sli4->bmbx.size);
+
+	/* 4096 is arbitrary. What should this value actually be? */
+	sli4->vpd_data.size = 4096;
+	sli4->vpd_data.virt = dma_alloc_coherent(&pdev->dev,
+						 sli4->vpd_data.size,
+						 &sli4->vpd_data.phys,
+						 GFP_DMA);
+	if (!sli4->vpd_data.virt) {
+		memset(&sli4->vpd_data, 0, sizeof(struct efc_dma_s));
+		/* Note that failure isn't fatal in this specific case */
+		efc_log_info(sli4, "VPD buffer allocation failed\n");
+	}
+
+	if (sli_fw_init(sli4)) {
+		efc_log_err(sli4, "FW initialization failed\n");
+		return -1;
+	}
+
+	/*
+	 * Set one of fcpi(initiator), fcpt(target), fcpc(combined) to true
+	 * in addition to any other desired features
+	 */
+	sli4->features = (SLI4_REQFEAT_IAAB | SLI4_REQFEAT_NPIV |
+				 SLI4_REQFEAT_DIF | SLI4_REQFEAT_VF |
+				 SLI4_REQFEAT_FCPC | SLI4_REQFEAT_IAAR |
+				 SLI4_REQFEAT_HLM | SLI4_REQFEAT_PERFH |
+				 SLI4_REQFEAT_RXSEQ | SLI4_REQFEAT_RXRI |
+				 SLI4_REQFEAT_MRQP);
+
+	/* use performance hints if available */
+	if (sli4->perf_hint)
+		sli4->features |= SLI4_REQFEAT_PERFH;
+
+	if (sli_request_features(sli4, &sli4->features, true))
+		return -1;
+
+	if (sli_get_config(sli4))
+		return -1;
+
+	return 0;
+}
+
+int
+sli_init(struct sli4_s *sli4)
+{
+	if (sli4->has_extents) {
+		efc_log_info(sli4, "XXX need to implement extent allocation\n");
+		return -1;
+	}
+
+	if (sli4->high_login_mode)
+		sli4->features |= SLI4_REQFEAT_HLM;
+	else
+		sli4->features &= (~SLI4_REQFEAT_HLM);
+	sli4->features &= (~SLI4_REQFEAT_RXSEQ);
+	sli4->features &= (~SLI4_REQFEAT_RXRI);
+
+	if (sli_request_features(sli4, &sli4->features, false))
+		return -1;
+
+	return 0;
+}
+
+int
+sli_reset(struct sli4_s *sli4)
+{
+	u32	i;
+
+	if (sli_fw_init(sli4)) {
+		efc_log_crit(sli4, "FW initialization failed\n");
+		return -1;
+	}
+
+	kfree(sli4->extent[0].base);
+	sli4->extent[0].base = NULL;
+
+	for (i = 0; i < SLI_RSRC_MAX; i++) {
+		kfree(sli4->extent[i].use_map);
+		sli4->extent[i].use_map = NULL;
+		sli4->extent[i].base = NULL;
+	}
+
+	if (sli_get_config(sli4))
+		return -1;
+
+	return 0;
+}
+
+/**
+ * @ingroup sli
+ * @brief Issue a Firmware Reset.
+ *
+ * @par Description
+ * Issues a Firmware Reset to the chip.  This reset affects the entire chip,
+ * so all PCI functions on the same PCI bus and device are affected.
+ * @n @n This type of reset can be used to activate newly downloaded firmware.
+ * @n @n The driver should be considered to be in an unknown state after this
+ * reset and should be reloaded.
+ *
+ * @param sli4 SLI context.
+ *
+ * @return Returns 0 on success, or -1 otherwise.
+ */
+
+int
+sli_fw_reset(struct sli4_s *sli4)
+{
+	u32 val;
+	bool ready;
+
+	/*
+	 * Firmware must be ready before issuing the reset.
+	 */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return -1;
+	}
+	/* Lancer uses PHYDEV_CONTROL */
+
+	val = SLI4_PHYDEV_CTRL_FRST;
+	writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+
+	/* wait for the FW to become ready after the reset */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "Failed to become ready after firmware reset\n");
+		return -1;
+	}
+	return 0;
+}
+
+/**
+ * @ingroup sli
+ * @brief Tear down a SLI context.
+ *
+ * @param sli4 SLI context.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_teardown(struct sli4_s *sli4)
+{
+	u32 i;
+
+	kfree(sli4->extent[0].base);
+	sli4->extent[0].base = NULL;
+
+	for (i = 0; i < SLI_RSRC_MAX; i++) {
+		sli4->extent[i].base = NULL;
+
+		kfree(sli4->extent[i].use_map);
+		sli4->extent[i].use_map = NULL;
+	}
+
+	if (sli_fw_term(sli4))
+		efc_log_err(sli4, "FW deinitialization failed\n");
+
+	dma_free_coherent(&sli4->pcidev->dev, sli4->vpd_data.size,
+			  sli4->vpd_data.virt, sli4->vpd_data.phys);
+	dma_free_coherent(&sli4->pcidev->dev, sli4->bmbx.size,
+			  sli4->bmbx.virt, sli4->bmbx.phys);
+
+	return 0;
+}
+
+/**
+ * @ingroup sli
+ * @brief Register a callback for the given event.
+ *
+ * @param sli4 SLI context.
+ * @param which Event of interest.
+ * @param func Function to call when the event occurs.
+ * @param arg Argument passed to the callback function.
+ *
+ * @return Returns 0 on success, or non-zero otherwise.
+ */
+int
+sli_callback(struct sli4_s *sli4, enum sli4_callback_e which,
+	     void *func, void *arg)
+{
+	if (!func) {
+		efc_log_err(sli4, "bad parameter sli4=%p which=%#x func=%p\n",
+		       sli4, which, func);
+		return -1;
+	}
+
+	switch (which) {
+	case SLI4_CB_LINK:
+		sli4->link = func;
+		sli4->link_arg = arg;
+		break;
+	default:
+		efc_log_info(sli4, "unknown callback %#x\n", which);
+		return -1;
+	}
+
+	return 0;
+}
+
+/**
+ * @ingroup sli
+ * @brief Modify the delay timer for all the EQs
+ *
+ * @param sli4 SLI context.
+ * @param eq Array of EQs.
+ * @param num_eq Count of EQs.
+ * @param shift Phase shift for staggering interrupts.
+ * @param delay_mult Delay multiplier for limiting interrupt frequency.
+ *
+ * @return Returns 0 on success, or -1 otherwise.
+ */
+int
+sli_eq_modify_delay(struct sli4_s *sli4, struct sli4_queue_s *eq,
+		    u32 num_eq, u32 shift, u32 delay_mult)
+{
+	sli_cmd_common_modify_eq_delay(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				       eq, num_eq, shift, delay_mult);
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail (MODIFY EQ DELAY)\n");
+		return -1;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status MODIFY EQ DELAY\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/**
+ * @ingroup sli
+ * @brief Allocate SLI Port resources.
+ *
+ * @par Description
+ * Allocate port-related resources, such as VFI, RPI, XRI, and so on.
+ * Resources are modeled using extents, regardless of whether the underlying
+ * device implements resource extents. If the device does not implement
+ * extents, the SLI layer models this as a single (albeit large) extent.
+ *
+ * @param sli4 SLI context.
+ * @param rtype Resource type (for example, RPI or XRI)
+ * @param rid Allocated resource ID.
+ * @param index Index into the bitmap.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_resource_alloc(struct sli4_s *sli4, enum sli4_resource_e rtype,
+		   u32 *rid, u32 *index)
+{
+	int rc = 0;
+	u32 size;
+	u32 extent_idx;
+	u32 item_idx;
+	u32 position;
+
+	*rid = U32_MAX;
+	*index = U32_MAX;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		position =
+		find_first_zero_bit(sli4->extent[rtype].use_map,
+				    sli4->extent[rtype].map_size);
+		if (position >= sli4->extent[rtype].map_size) {
+			efc_log_err(sli4, "out of resource %d (alloc=%d)\n",
+				    rtype, sli4->extent[rtype].n_alloc);
+			rc = -1;
+			break;
+		}
+		set_bit(position, sli4->extent[rtype].use_map);
+		*index = position;
+
+		size = sli4->extent[rtype].size;
+
+		extent_idx = *index / size;
+		item_idx   = *index % size;
+
+		*rid = sli4->extent[rtype].base[extent_idx] + item_idx;
+
+		sli4->extent[rtype].n_alloc++;
+		break;
+	default:
+		rc = -1;
+	}
+
+	return rc;
+}
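+
+/*
+ * Illustrative pairing of sli_resource_alloc() with sli_resource_free()
+ * below (a sketch only, not a requirement of the API): allocate an RPI,
+ * use it, then release it.
+ *
+ *	u32 rpi, index;
+ *
+ *	if (!sli_resource_alloc(sli4, SLI_RSRC_RPI, &rpi, &index)) {
+ *		... program the RPI into the adapter ...
+ *		sli_resource_free(sli4, SLI_RSRC_RPI, rpi);
+ *	}
+ */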
+
+/**
+ * @ingroup sli
+ * @brief Free the SLI Port resources.
+ *
+ * @par Description
+ * Free port-related resources, such as VFI, RPI, XRI, and so on.
+ * See discussion of "extent" usage in sli_resource_alloc.
+ *
+ * @param sli4 SLI context.
+ * @param rtype Resource type (for example, RPI or XRI).
+ * @param rid Allocated resource ID.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+sli_resource_free(struct sli4_s *sli4,
+		  enum sli4_resource_e rtype, u32 rid)
+{
+	int rc = -1;
+	u32 x;
+	u32 size, *base;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		/*
+		 * Figure out which extent contains the resource ID. I.e. find
+		 * the extent such that
+		 *   extent->base <= resource ID < extent->base + extent->size
+		 */
+		base = sli4->extent[rtype].base;
+		size = sli4->extent[rtype].size;
+
+		/*
+		 * In the case of FW reset, this may be cleared
+		 * but the force_free path will still attempt to
+		 * free the resource. Prevent a NULL pointer access.
+		 */
+		if (base) {
+			for (x = 0; x < sli4->extent[rtype].number;
+			     x++) {
+				if (rid >= base[x] &&
+				    (rid < (base[x] + size))) {
+					rid -= base[x];
+					clear_bit((x * size) + rid,
+						  sli4->extent[rtype].use_map);
+					rc = 0;
+					break;
+				}
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int
+sli_resource_reset(struct sli4_s *sli4, enum sli4_resource_e rtype)
+{
+	int rc = -1;
+	u32 i;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		for (i = 0; i < sli4->extent[rtype].map_size; i++)
+			clear_bit(i, sli4->extent[rtype].use_map);
+		rc = 0;
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup sli
+ * @brief Cause chip to enter an unrecoverable error state.
+ *
+ * @par Description
+ * Cause the chip to enter an unrecoverable error state. This is used when
+ * unexpected FW behavior is detected, so the FW can be halted from the
+ * driver as soon as the error is detected.
+ *
+ * @param sli4 SLI context.
+ * @param dump Generate dump as part of reset.
+ *
+ * @return Returns 0 if call completed correctly,
+ * or -1 if call failed (unsupported chip).
+ */
+int sli_raise_ue(struct sli4_s *sli4, u8 dump)
+{
+	u32 val = 0;
+#define FDD 2
+	if (dump == FDD) {
+		val = SLI4_PORT_CTRL_FDD | SLI4_PORT_CTRL_IP;
+		writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+	} else {
+		val = SLI4_PHYDEV_CTRL_FRST;
+
+		if (dump == 1)
+			val |= SLI4_PHYDEV_CTRL_DD;
+		writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+	}
+
+	return 0;
+}
+
+/**
+ * @ingroup sli
+ * @brief Read the SLIPORT_STATUS register to check whether a firmware dump
+ * is ready.
+ *
+ * @param sli4 SLI context.
+ *
+ * @return Returns 1 if a dump is present (DIP set), 2 if the FDP bit is set,
+ * or 0 otherwise (including when the port or bootstrap mailbox is not ready).
+ */
+int sli_dump_is_ready(struct sli4_s *sli4)
+{
+	int rc = 0;
+	u32 port_val;
+	u32 bmbx_val;
+
+	/*
+	 * Ensure that the port is ready AND the mailbox is
+	 * ready before signaling that the dump is ready to go.
+	 */
+	port_val = sli_reg_read_status(sli4);
+	bmbx_val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+
+	if ((bmbx_val & SLI4_BMBX_RDY) &&
+	    (port_val & SLI4_PORT_STATUS_RDY)) {
+		if (port_val & SLI4_PORT_STATUS_DIP)
+			rc = 1;
+		else if (port_val & SLI4_PORT_STATUS_FDP)
+			rc = 2;
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup sli
+ * @brief Read the SLIPORT_STATUS register to check if a dump is present.
+ *
+ * @param sli4 SLI context.
+ *
+ * @return
+ * - 0 if call completed correctly and no dump is present.
+ * - 1 if call completed and dump is present.
+ * - -1 if call failed (unsupported chip).
+ */
+int sli_dump_is_present(struct sli4_s *sli4)
+{
+	u32 val;
+	bool ready;
+
+	/* If the chip is not ready, then there cannot be a dump */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_INIT_PORT_DELAY_US);
+	if (!ready)
+		return 0;
+
+	val = sli_reg_read_status(sli4);
+	if (val == U32_MAX) {
+		efc_log_err(sli4, "error reading SLIPORT_STATUS\n");
+		return -1;
+	} else {
+		return (val & SLI4_PORT_STATUS_DIP) ? 1 : 0;
+	}
+}
+
+/**
+ * @ingroup sli
+ * @brief Read the SLIPORT_STATUS register to check if
+ * the reset required is set.
+ *
+ * @param sli4 SLI context.
+ *
+ * @return
+ * - 0 if call completed correctly and reset is not required.
+ * - 1 if call completed and reset is required.
+ * - -1 if call failed.
+ */
+int sli_reset_required(struct sli4_s *sli4)
+{
+	u32 val;
+
+	val = sli_reg_read_status(sli4);
+	if (val == U32_MAX) {
+		efc_log_err(sli4, "error reading SLIPORT_STATUS\n");
+		return -1;
+	} else {
+		return (val & SLI4_PORT_STATUS_RN) ? 1 : 0;
+	}
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write a POST_SGL_PAGES command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param xri Starting XRI.
+ * @param xri_count Number of XRIs to post.
+ * @param page0 First SGL memory page.
+ * @param page1 Second SGL memory page (optional).
+ * @param dma DMA buffer for a non-embedded mailbox command (optional).
+ *
+ * If a non-embedded mbx command is used, the dma buffer must be at least
+ * (32 + xri_count * 16) bytes in length.
+ *
+ * @return Returns EFC_SUCCESS on success, or EFC_FAIL on failure.
+ */
+int
+sli_cmd_post_sgl_pages(struct sli4_s *sli4, void *buf, size_t size,
+		       u16 xri,
+		       u32 xri_count, struct efc_dma_s *page0[],
+		       struct efc_dma_s *page1[], struct efc_dma_s *dma)
+{
+	struct sli4_rqst_post_sgl_pages_s *post = NULL;
+	u32 i;
+
+	post = sli_config_cmd_init(sli4, buf, size,
+				   SLI_CONFIG_PYLD_LENGTH(post_sgl_pages),
+				   dma);
+	if (!post)
+		return EFC_FAIL;
+
+	post->hdr.opcode = SLI4_OPC_POST_SGL_PAGES;
+	post->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	/* payload size calculation */
+	/* 4 = xri_start + xri_count */
+	/* xri_count = # of XRI's registered */
+	/* sizeof(uint64_t) = physical address size */
+	/* 2 = # of physical addresses per page set */
+	post->hdr.request_length =
+		cpu_to_le32(4 + (xri_count * (sizeof(uint64_t) * 2)));
+
+	post->xri_start = cpu_to_le16(xri);
+	post->xri_count = cpu_to_le16(xri_count);
+
+	for (i = 0; i < xri_count; i++) {
+		post->page_set[i].page0_low  =
+				cpu_to_le32(lower_32_bits(page0[i]->phys));
+		post->page_set[i].page0_high =
+				cpu_to_le32(upper_32_bits(page0[i]->phys));
+	}
+
+	if (page1) {
+		for (i = 0; i < xri_count; i++) {
+			post->page_set[i].page1_low =
+				cpu_to_le32(lower_32_bits(page1[i]->phys));
+			post->page_set[i].page1_high =
+				cpu_to_le32(upper_32_bits(page1[i]->phys));
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Write a POST_HDR_TEMPLATES command.
+ *
+ * @param sli4 SLI context.
+ * @param buf Destination buffer for the command.
+ * @param size Buffer size, in bytes.
+ * @param dma Pointer to DMA memory structure. This is allocated by the caller.
+ * @param rpi Starting RPI index for the header templates.
+ * @param payload_dma Pointer to DMA memory used to hold larger descriptor
+ * counts.
+ *
+ * @return Returns EFC_SUCCESS on success, or EFC_FAIL on failure.
+ */
+int
+sli_cmd_post_hdr_templates(struct sli4_s *sli4, void *buf,
+			   size_t size, struct efc_dma_s *dma,
+			   u16 rpi,
+			   struct efc_dma_s *payload_dma)
+{
+	struct sli4_rqst_post_hdr_templates_s *template = NULL;
+	uintptr_t phys = 0;
+	u32 i = 0;
+	u32 page_count;
+	u32 payload_size;
+
+	page_count = sli_page_count(dma->size, SLI_PAGE_SIZE);
+
+	payload_size = CFG_RQST_PYLD_LEN_VAR(post_hdr_templates,
+					     page_count * SZ_DMAADDR);
+
+	if (page_count > 16) {
+		/*
+		 * We can't fit more than 16 descriptors into an embedded mbox
+		 * command, it has to be non-embedded
+		 */
+		payload_dma->size = payload_size;
+		payload_dma->virt = dma_alloc_coherent(&sli4->pcidev->dev,
+						       payload_dma->size,
+					     &payload_dma->phys, GFP_DMA);
+		if (!payload_dma->virt) {
+			memset(payload_dma, 0, sizeof(struct efc_dma_s));
+			efc_log_err(sli4, "mbox payload memory allocation fail\n");
+			return EFC_FAIL;
+		}
+		template = sli_config_cmd_init(sli4, buf, size,
+					       payload_size, payload_dma);
+	} else {
+		template = sli_config_cmd_init(sli4, buf, size,
+					       payload_size, NULL);
+	}
+
+	if (!template)
+		return EFC_FAIL;
+
+	if (rpi == U16_MAX)
+		rpi = sli4->extent[SLI_RSRC_RPI].base[0];
+
+	template->hdr.opcode = SLI4_OPC_POST_HDR_TEMPLATES;
+	template->hdr.subsystem = SLI4_SUBSYSTEM_FC;
+	template->hdr.request_length = CFG_RQST_PYLD_LEN(post_hdr_templates);
+
+	template->rpi_offset = cpu_to_le16(rpi);
+	template->page_count = cpu_to_le16(page_count);
+	phys = dma->phys;
+	for (i = 0; i < page_count; i++) {
+		template->page_descriptor[i].low  =
+				cpu_to_le32(lower_32_bits(phys));
+		template->page_descriptor[i].high =
+				cpu_to_le32(upper_32_bits(phys));
+
+		phys += SLI_PAGE_SIZE;
+	}
+
+	return EFC_SUCCESS;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Get the RPI resource requirements.
+ *
+ * @param sli4 SLI context.
+ * @param n_rpi Number of RPIs desired.
+ *
+ * @return Returns the number of bytes needed. This value may be zero.
+ */
+u32
+sli_fc_get_rpi_requirements(struct sli4_s *sli4, u32 n_rpi)
+{
+	u32 bytes = 0;
+
+	/* Check if header templates needed */
+	if (sli4->hdr_template_req)
+		/* round up to a page */
+		bytes = SLI_ROUND_PAGE(n_rpi * SLI4_HDR_TEMPLATE_SIZE);
+
+	return bytes;
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Return a text string corresponding to a CQE status value
+ *
+ * @param status Status value
+ *
+ * @return Returns corresponding string, otherwise "unknown"
+ */
+const char *
+sli_fc_get_status_string(u32 status)
+{
+	static struct {
+		u32 code;
+		const char *label;
+	} lookup[] = {
+		{SLI4_FC_WCQE_STATUS_SUCCESS,		"SUCCESS"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,	"FCP_RSP_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_REMOTE_STOP,	"REMOTE_STOP"},
+		{SLI4_FC_WCQE_STATUS_LOCAL_REJECT,	"LOCAL_REJECT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_RJT,		"NPORT_RJT"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_RJT,	"FABRIC_RJT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_BSY,		"NPORT_BSY"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_BSY,	"FABRIC_BSY"},
+		{SLI4_FC_WCQE_STATUS_LS_RJT,		"LS_RJT"},
+		{SLI4_FC_WCQE_STATUS_CMD_REJECT,	"CMD_REJECT"},
+		{SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,	"FCP_TGT_LENCHECK"},
+		{SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED, "BUF_LEN_EXCEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
+				"RQ_INSUFF_BUF_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC, "RQ_INSUFF_FRM_DESC"},
+		{SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,	"RQ_DMA_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,	"FCP_RSP_TRUNCATE"},
+		{SLI4_FC_WCQE_STATUS_DI_ERROR,		"DI_ERROR"},
+		{SLI4_FC_WCQE_STATUS_BA_RJT,		"BA_RJT"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
+				"RQ_INSUFF_XRI_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC, "INSUFF_XRI_DISC"},
+		{SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,	"RX_ERROR_DETECT"},
+		{SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,	"RX_ABORT_REQUEST"},
+		};
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(lookup); i++) {
+		if (status == lookup[i].code)
+			return lookup[i].label;
+	}
+	return "unknown";
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 24ae702f9427..6fff0aaa2463 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -4264,4 +4264,582 @@ struct sli4_s {
 	u32	vpd_length;
 };
 
+/**
+ * Get / set parameter functions
+ */
+
+static inline int
+sli_set_hlm(struct sli4_s *sli4, u32 value)
+{
+	if (value && !(sli4->features & SLI4_REQFEAT_HLM)) {
+		efc_log_err(sli4, "HLM not supported\n");
+		return -1;
+	}
+
+	sli4->high_login_mode = value != 0;
+
+	return 0;
+}
+
+static inline int
+sli_set_sgl_preregister(struct sli4_s *sli4, u32 value)
+{
+	if (value == 0 && sli4->sgl_pre_registration_required) {
+		efc_log_err(sli4, "SGL pre-registration required\n");
+		return -1;
+	}
+
+	sli4->sgl_pre_registered = value != 0;
+
+	return 0;
+}
+
+static inline u32
+sli_get_max_sgl(struct sli4_s *sli4)
+{
+	if (sli4->sgl_page_sizes != 1) {
+		efc_log_err(sli4, "unsupported SGL page sizes %#x\n",
+			sli4->sgl_page_sizes);
+		return 0;
+	}
+
+	return ((sli4->max_sgl_pages * SLI_PAGE_SIZE)
+		/ sizeof(struct sli4_sge_s));
+}
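+
+/*
+ * Example (illustrative, assuming a 4 KB SLI_PAGE_SIZE and a 16-byte
+ * struct sli4_sge_s): max_sgl_pages = 2 yields (2 * 4096) / 16 = 512
+ * SGEs per SGL.
+ */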
+
+static inline enum sli4_link_medium_e
+sli_get_medium(struct sli4_s *sli4)
+{
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_DA:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		return SLI_LINK_MEDIUM_FC;
+	default:
+		return SLI_LINK_MEDIUM_MAX;
+	}
+}
+
+static inline int
+sli_set_topology(struct sli4_s *sli4, u32 value)
+{
+	int	rc = 0;
+
+	switch (value) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_DA:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		sli4->topology = value;
+		break;
+	default:
+		efc_log_err(sli4, "unsupported topology %#x\n", value);
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static inline u32
+sli_convert_mask_to_count(u32 method, u32 mask)
+{
+	u32 count = 0;
+
+	if (method) {
+		count = 1 << (31 - __builtin_clz(mask));
+		count *= 16;
+	} else {
+		count = mask;
+	}
+
+	return count;
+}
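+
+/*
+ * Worked example (illustrative): with the shifted method (method != 0) a
+ * count mask of 0x0400 gives 1 << (31 - clz(0x0400)) = 1 << 10 = 1024,
+ * scaled by 16 to 16384 entries; with the direct method (method == 0) the
+ * mask itself is the entry count.
+ */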
+
+static inline u32
+sli_reg_read_status(struct sli4_s *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_STATUS_REGOFF);
+}
+
+static inline int
+sli_fw_error_status(struct sli4_s *sli4)
+{
+	return ((sli_reg_read_status(sli4) & SLI4_PORT_STATUS_ERR) ? 1 : 0);
+}
+
+static inline u32
+sli_reg_read_err1(struct sli4_s *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR1);
+}
+
+static inline u32
+sli_reg_read_err2(struct sli4_s *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR2);
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Retrieve the received header and payload length.
+ *
+ * @param sli4 SLI context.
+ * @param cqe Pointer to the CQ entry.
+ * @param len_hdr Pointer where the header length is written.
+ * @param len_data Pointer where the payload length is written.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static inline int
+sli_fc_rqe_length(struct sli4_s *sli4, void *cqe, u32 *len_hdr,
+		  u32 *len_data)
+{
+	struct sli4_fc_async_rcqe_s	*rcqe = cqe;
+
+	*len_hdr = *len_data = 0;
+
+	if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+		*len_hdr  = rcqe->hdpl_byte & SLI4_RACQE_HDPL;
+		*len_data = le16_to_cpu(rcqe->data_placement_length);
+		return 0;
+	} else {
+		return -1;
+	}
+}
+
+/**
+ * @ingroup sli_fc
+ * @brief Retrieve the received FCFI.
+ *
+ * @param sli4 SLI context.
+ * @param cqe Pointer to the CQ entry.
+ *
+ * @return Returns the FCFI in the CQE, or U8_MAX if the CQE code is invalid.
+ */
+static inline u8
+sli_fc_rqe_fcfi(struct sli4_s *sli4, void *cqe)
+{
+	u8 code = ((u8 *)cqe)[SLI4_CQE_CODE_OFFSET];
+	u8 fcfi = U8_MAX;
+
+	switch (code) {
+	case SLI4_CQE_CODE_RQ_ASYNC: {
+		struct sli4_fc_async_rcqe_s *rcqe = cqe;
+
+		fcfi = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1: {
+		struct sli4_fc_async_rcqe_v1_s *rcqev1 = cqe;
+
+		fcfi = rcqev1->fcfi_byte & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD: {
+		struct sli4_fc_optimized_write_cmd_cqe_s *opt_wr = cqe;
+
+		fcfi = opt_wr->flags0 & SLI4_OCQE_FCFI;
+		break;
+	}
+	}
+
+	return fcfi;
+}
+
+/****************************************************************************
+ * Function prototypes
+ */
+extern int
+sli_cmd_config_link(struct sli4_s *sli4, void *buf, size_t size);
+extern int
+sli_cmd_down_link(struct sli4_s *sli4, void *buf, size_t size);
+extern int
+sli_cmd_dump_type4(struct sli4_s *sli4, void *buf,
+		   size_t size, u16 wki);
+extern int
+sli_cmd_common_read_transceiver_data(struct sli4_s *sli4, void *buf,
+				     size_t size, u32 page_num,
+				     struct efc_dma_s *dma);
+extern int
+sli_cmd_read_link_stats(struct sli4_s *sli4, void *buf, size_t size,
+			u8 req_ext_counters, u8 clear_overflow_flags,
+			u8 clear_all_counters);
+extern int
+sli_cmd_read_status(struct sli4_s *sli4, void *buf, size_t size,
+		    u8 clear_counters);
+extern int
+sli_cmd_init_link(struct sli4_s *sli4, void *buf, size_t size,
+		  u32 speed, u8 reset_alpa);
+extern int
+sli_cmd_init_vfi(struct sli4_s *sli4, void *buf, size_t size, u16 vfi,
+		 u16 fcfi, u16 vpi);
+extern int
+sli_cmd_init_vpi(struct sli4_s *sli4, void *buf, size_t size, u16 vpi,
+		 u16 vfi);
+extern int
+sli_cmd_post_xri(struct sli4_s *sli4, void *buf, size_t size,
+		 u16 xri_base, u16 xri_count);
+extern int
+sli_cmd_release_xri(struct sli4_s *sli4, void *buf, size_t size,
+		    u8 num_xri);
+extern int
+sli_cmd_read_sparm64(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *dma, u16 vpi);
+extern int
+sli_cmd_read_topology(struct sli4_s *sli4, void *buf, size_t size,
+		      struct efc_dma_s *dma);
+extern int
+sli_cmd_read_nvparms(struct sli4_s *sli4, void *buf, size_t size);
+extern int
+sli_cmd_write_nvparms(struct sli4_s *sli4, void *buf, size_t size,
+		      u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		      u32 preferred_d_id);
+struct sli4_cmd_rq_cfg_s {
+	__le16	rq_id;
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+extern int
+sli_cmd_reg_fcfi(struct sli4_s *sli4, void *buf, size_t size,
+		 u16 index,
+		struct sli4_cmd_rq_cfg_s rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
+extern int
+sli_cmd_reg_fcfi_mrq(struct sli4_s *sli4, void *buf, size_t size,
+		     u8 mode, u16 fcf_index,
+	    u8 rq_selection_policy, u8 mrq_bit_mask,
+	    u16 num_mrqs,
+	    struct sli4_cmd_rq_cfg_s rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
+
+extern int
+sli_cmd_reg_rpi(struct sli4_s *sli4, void *buf, size_t size,
+		u32 nport_id, u16 rpi, u16 vpi,
+		     struct efc_dma_s *dma, u8 update,
+		     u8 enable_t10_pi);
+extern int
+sli_cmd_sli_config(struct sli4_s *sli4, void *buf, size_t size,
+		   u32 length, struct efc_dma_s *dma);
+extern int
+sli_cmd_unreg_fcfi(struct sli4_s *sli4, void *buf, size_t size,
+		   u16 indicator);
+extern int
+sli_cmd_unreg_rpi(struct sli4_s *sli4, void *buf, size_t size,
+		  u16 indicator,
+		  enum sli4_resource_e which, u32 fc_id);
+extern int
+sli_cmd_reg_vpi(struct sli4_s *sli4, void *buf, size_t size,
+		u32 fc_id, __be64 sli_wwpn, u16 vpi, u16 vfi,
+		bool update);
+extern int
+sli_cmd_reg_vfi(struct sli4_s *sli4, void *buf, size_t size,
+		u16 vfi, u16 fcfi, struct efc_dma_s dma,
+		u16 vpi, __be64 sli_wwpn, u32 fc_id);
+extern int
+sli_cmd_unreg_vpi(struct sli4_s *sli4, void *buf, size_t size,
+		  u16 indicator, u32 which);
+extern int
+sli_cmd_unreg_vfi(struct sli4_s *sli4, void *buf, size_t size,
+		  u16 index, u32 which);
+extern int
+sli_cmd_common_nop(struct sli4_s *sli4, void *buf, size_t size,
+		   uint64_t context);
+extern int
+sli_cmd_common_get_resource_extent_info(struct sli4_s *sli4, void *buf,
+					size_t size, u16 rtype);
+extern int
+sli_cmd_common_get_sli4_parameters(struct sli4_s *sli4,
+				   void *buf, size_t size);
+extern int
+sli_cmd_common_write_object(struct sli4_s *sli4, void *buf, size_t size,
+			    u16 noc, u16 eof, u32 desired_write_length,
+		u32 offset, char *object_name, struct efc_dma_s *dma);
+extern int
+sli_cmd_common_delete_object(struct sli4_s *sli4, void *buf, size_t size,
+			     char *object_name);
+extern int
+sli_cmd_common_read_object(struct sli4_s *sli4, void *buf, size_t size,
+			   u32 desired_read_length, u32 offset,
+			   char *object_name, struct efc_dma_s *dma);
+extern int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4_s *sli4, void *buf, size_t size,
+			  struct efc_dma_s *cmd, struct efc_dma_s *resp);
+extern int
+sli_cmd_common_set_dump_location(struct sli4_s *sli4,
+				 void *buf, size_t size, bool query,
+				 bool is_buffer_list,
+				 struct efc_dma_s *buffer, u8 fdb);
+extern int
+sli_cmd_common_set_features(struct sli4_s *sli4, void *buf, size_t size,
+			    u32 feature, u32 param_len,
+			    void *parameter);
+
+int sli_cqe_mq(struct sli4_s *sli4, void *buf);
+int sli_cqe_async(struct sli4_s *sli4, void *buf);
+
+extern int
+sli_setup(struct sli4_s *sli4, void *os, struct pci_dev  *pdev,
+	  void __iomem *reg[]);
+void sli_calc_max_qentries(struct sli4_s *sli4);
+int sli_init(struct sli4_s *sli4);
+int sli_reset(struct sli4_s *sli4);
+int sli_fw_reset(struct sli4_s *sli4);
+int sli_teardown(struct sli4_s *sli4);
+extern int
+sli_callback(struct sli4_s *sli4, enum sli4_callback_e which,
+	     void *func, void *arg);
+extern int
+sli_bmbx_command(struct sli4_s *sli4);
+extern int
+__sli_queue_init(struct sli4_s *sli4, struct sli4_queue_s *q,
+		 u32 qtype, size_t size, u32 n_entries,
+		      u32 align);
+extern int
+__sli_create_queue(struct sli4_s *sli4, struct sli4_queue_s *q);
+extern int
+sli_eq_modify_delay(struct sli4_s *sli4, struct sli4_queue_s *eq,
+		    u32 num_eq, u32 shift, u32 delay_mult);
+extern int
+sli_queue_alloc(struct sli4_s *sli4, u32 qtype,
+		struct sli4_queue_s *q, u32 n_entries,
+		     struct sli4_queue_s *assoc);
+extern int
+sli_cq_alloc_set(struct sli4_s *sli4, struct sli4_queue_s *qs[],
+		 u32 num_cqs, u32 n_entries, struct sli4_queue_s *eqs[]);
+extern int
+sli_get_queue_entry_size(struct sli4_s *sli4, u32 qtype);
+extern int
+sli_queue_free(struct sli4_s *sli4, struct sli4_queue_s *q,
+	       u32 destroy_queues, u32 free_memory);
+extern int
+sli_queue_eq_arm(struct sli4_s *sli4, struct sli4_queue_s *q, bool arm);
+extern int
+sli_queue_arm(struct sli4_s *sli4, struct sli4_queue_s *q, bool arm);
+
+extern int
+sli_wq_write(struct sli4_s *sli4, struct sli4_queue_s *q,
+	     u8 *entry);
+extern int
+sli_mq_write(struct sli4_s *sli4, struct sli4_queue_s *q,
+	     u8 *entry);
+extern int
+sli_rq_write(struct sli4_s *sli4, struct sli4_queue_s *q,
+	     u8 *entry);
+extern int
+sli_eq_read(struct sli4_s *sli4, struct sli4_queue_s *q,
+	    u8 *entry);
+extern int
+sli_cq_read(struct sli4_s *sli4, struct sli4_queue_s *q,
+	    u8 *entry);
+extern int
+sli_mq_read(struct sli4_s *sli4, struct sli4_queue_s *q,
+	    u8 *entry);
+extern int
+sli_queue_index(struct sli4_s *sli4, struct sli4_queue_s *q);
+extern int
+_sli_queue_poke(struct sli4_s *sli4, struct sli4_queue_s *q,
+		u32 index, u8 *entry);
+extern int
+sli_queue_poke(struct sli4_s *sli4, struct sli4_queue_s *q, u32 index,
+	       u8 *entry);
+extern int
+sli_resource_alloc(struct sli4_s *sli4, enum sli4_resource_e rtype,
+		   u32 *rid, u32 *index);
+extern int
+sli_resource_free(struct sli4_s *sli4, enum sli4_resource_e rtype,
+		  u32 rid);
+extern int
+sli_resource_reset(struct sli4_s *sli4, enum sli4_resource_e rtype);
+extern int
+sli_eq_parse(struct sli4_s *sli4, u8 *buf, u16 *cq_id);
+extern int
+sli_cq_parse(struct sli4_s *sli4, struct sli4_queue_s *cq, u8 *cqe,
+	     enum sli4_qentry_e *etype, u16 *q_id);
+
+int sli_raise_ue(struct sli4_s *sli4, u8 dump);
+int sli_dump_is_ready(struct sli4_s *sli4);
+int sli_dump_is_present(struct sli4_s *sli4);
+int sli_reset_required(struct sli4_s *sli4);
+int sli_fw_ready(struct sli4_s *sli4);
+
+extern int
+sli_fc_process_link_state(struct sli4_s *sli4, void *acqe);
+extern int
+sli_fc_process_link_attention(struct sli4_s *sli4, void *acqe);
+extern int
+sli_fc_cqe_parse(struct sli4_s *sli4, struct sli4_queue_s *cq,
+		 u8 *cqe, enum sli4_qentry_e *etype,
+		 u16 *rid);
+u32 sli_fc_response_length(struct sli4_s *sli4, u8 *cqe);
+u32 sli_fc_io_length(struct sli4_s *sli4, u8 *cqe);
+int sli_fc_els_did(struct sli4_s *sli4, u8 *cqe,
+		   u32 *d_id);
+u32 sli_fc_ext_status(struct sli4_s *sli4, u8 *cqe);
+extern int
+sli_fc_rqe_rqid_and_index(struct sli4_s *sli4, u8 *cqe,
+			  u16 *rq_id, u32 *index);
+extern int
+sli_cmd_wq_create(struct sli4_s *sli4, void *buf, size_t size,
+		  struct efc_dma_s *qmem, u16 cq_id);
+extern int
+sli_cmd_wq_create_v1(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *qmem,
+			  u16 cq_id);
+int sli_cmd_wq_destroy(struct sli4_s *sli4, void *buf,
+		       size_t size, u16 wq_id);
+int sli_cmd_post_sgl_pages(struct sli4_s *sli4, void *buf,
+			   size_t size, u16 xri, u32 xri_count,
+			   struct efc_dma_s *page0[],
+			   struct efc_dma_s *page1[], struct efc_dma_s *dma);
+extern int
+sli_cmd_rq_create(struct sli4_s *sli4, void *buf, size_t size,
+		  struct efc_dma_s *qmem,
+		       u16 cq_id, u16 buffer_size);
+extern int
+sli_cmd_rq_create_v1(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *qmem, u16 cq_id,
+			  u16 buffer_size);
+int sli_cmd_rq_destroy(struct sli4_s *sli4, void *buf,
+		       size_t size, u16 rq_id);
+extern int
+sli_cmd_read_fcf_table(struct sli4_s *sli4, void *buf, size_t size,
+		       struct efc_dma_s *dma, u16 index);
+extern int
+sli_cmd_post_hdr_templates(struct sli4_s *sli4, void *buf,
+			   size_t size, struct efc_dma_s *dma,
+				     u16 rpi,
+				     struct efc_dma_s *payload_dma);
+extern int
+sli_cmd_rediscover_fcf(struct sli4_s *sli4, void *buf, size_t size,
+		       u16 index);
+extern int
+sli_fc_rq_alloc(struct sli4_s *sli4, struct sli4_queue_s *q,
+		u32 n_entries, u32 buffer_size,
+		struct sli4_queue_s *cq, bool is_hdr);
+extern int
+sli_fc_rq_set_alloc(struct sli4_s *sli4, u32 num_rq_pairs,
+		    struct sli4_queue_s *qs[], u32 base_cq_id,
+		    u32 n_entries, u32 header_buffer_size,
+		    u32 payload_buffer_size);
+u32 sli_fc_get_rpi_requirements(struct sli4_s *sli4,
+				u32 n_rpi);
+extern int
+sli_abort_wqe(struct sli4_s *sli4, void *buf, size_t size,
+	      enum sli4_abort_type_e type, bool send_abts,
+	u32 ids, u32 mask, u16 tag, u16 cq_id);
+
+extern int
+sli_send_frame_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		   u8 sof, u8 eof, u32 *hdr,
+			struct efc_dma_s *payload, u32 req_len,
+			u8 timeout, u16 xri, u16 req_tag);
+
+extern int
+sli_xmit_els_rsp64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		       struct efc_dma_s *rsp, u32 rsp_len,
+		u16 xri, u16 tag, u16 cq_id,
+		u16 ox_id, u16 rnodeindicator,
+		u16 sportindicator, bool hlm, bool rnodeattached,
+		u32 rnode_fcid, u32 flags, u32 s_id);
+
+extern int
+sli_els_request64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		      struct efc_dma_s *sgl,
+		u8 req_type, u32 req_len, u32 max_rsp_len,
+		u8 timeout, u16 xri, u16 tag,
+		u16 cq_id, u16 rnodeindicator,
+		u16 sportindicator, bool hlm, bool rnodeattached,
+		u32 rnode_fcid, u32 sport_fcid);
+
+extern int
+sli_fcp_icmnd64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    struct efc_dma_s *sgl, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi, bool hlm,
+		u32 rnode_fcid, u8 timeout);
+
+extern int
+sli_fcp_iread64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    struct efc_dma_s *sgl, u32 first_data_sge,
+		u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi, bool hlm, u32 rnode_fcid,
+		u8 dif, u8 bs, u8 timeout);
+
+extern int
+sli_fcp_iwrite64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		     struct efc_dma_s *sgl,
+		u32 first_data_sge, u32 xfer_len,
+		u32 first_burst, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi,
+		bool hlm, u32 rnode_fcid,
+		u8 dif, u8 bs, u8 timeout);
+
+extern int
+sli_fcp_treceive64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		       struct efc_dma_s *sgl,
+		u32 first_data_sge, u32 relative_off,
+		u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u16 xid, u32 rpi, bool hlm,
+		u32 rnode_fcid, u32 flags, u8 dif,
+		u8 bs, u8 csctl, u32 app_id);
+
+extern int
+sli_fcp_cont_treceive64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+			    struct efc_dma_s *sgl, u32 first_data_sge,
+		u32 relative_off, u32 xfer_len,
+		u16 xri, u16 sec_xri, u16 tag,
+		u16 cq_id, u16 xid, u32 rpi,
+		bool hlm, u32 rnode_fcid, u32 flags,
+		u8 dif, u8 bs, u8 csctl,
+		u32 app_id);
+
+extern int
+sli_fcp_trsp64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		   struct efc_dma_s *sgl,
+		u32 rsp_len, u16 xri, u16 tag, u16 cq_id,
+		u16 xid, u32 rpi, bool hlm, u32 rnode_fcid,
+		u32 flags, u8 csctl, u8 port_owned,
+		u32 app_id);
+
+extern int
+sli_fcp_tsend64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    struct efc_dma_s *sgl,
+		u32 first_data_sge, u32 relative_off,
+		u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u16 xid, u32 rpi,
+		bool hlm, u32 rnode_fcid, u32 flags, u8 dif,
+		u8 bs, u8 csctl, u32 app_id);
+
+extern int
+sli_gen_request64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		      struct efc_dma_s *sgl, u32 req_len,
+		u32 max_rsp_len, u8 timeout, u16 xri,
+		u16 tag, u16 cq_id, bool hlm, u32 rnode_fcid,
+		u16 rnodeindicator, u8 r_ctl, u8 type,
+		u8 df_ctl);
+
+extern int
+sli_xmit_bls_rsp64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		       struct sli_bls_payload_s *payload, u16 xri,
+		u16 tag, u16 cq_id,
+		bool rnodeattached, bool hlm, u16 rnodeindicator,
+		u16 sportindicator, u32 rnode_fcid,
+		u32 sport_fcid, u32 s_id);
+
+extern int
+sli_xmit_sequence64_wqe(struct sli4_s *sli4, void *buf, size_t size,
+			struct efc_dma_s *payload, u32 payload_len,
+		u8 timeout, u16 ox_id, u16 xri,
+		u16 tag, bool hlm, u32 rnode_fcid,
+		u16 rnodeindicator, u8 r_ctl,
+		u8 type, u8 df_ctl);
+
+extern int
+sli_requeue_xri_wqe(struct sli4_s *sli4, void *buf, size_t size,
+		    u16 xri, u16 tag, u16 cq_id);
+extern void
+sli4_cmd_lowlevel_set_watchdog(struct sli4_s *sli4, void *buf,
+			       size_t size, u16 timeout);
+
+const char *sli_fc_get_status_string(u32 status);
+
 #endif /* !_SLI4_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 08/32] elx: libefc: Generic state machine framework
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (6 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 07/32] elx: libefc_sli: APIs to setup SLI library James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 09/32] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
                   ` (24 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch starts the population of the libefc library.
The library will contain common routines usable by a target or
initiator driver, as well as a FC discovery state machine interface.

This patch creates the library directory and adds definitions
for the discovery state machine interface.
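
As an illustration only (the "app_state_*" handlers and the object that
embeds the context are hypothetical and not part of this patch), a user
of the framework supplies handler functions of the form below and moves
between them with efc_sm_transition():

  static void *app_state_next(struct efc_sm_ctx_s *ctx,
                              enum efc_sm_event_e evt, void *arg);

  static void *app_state_init(struct efc_sm_ctx_s *ctx,
                              enum efc_sm_event_e evt, void *arg)
  {
          switch (evt) {
          case EFC_EVT_ENTER:
                  /* one-time setup when this state is entered */
                  break;
          case EFC_EVT_RESPONSE:
                  /* work for this state is done; move to the next one */
                  efc_sm_transition(ctx, app_state_next, arg);
                  break;
          default:
                  break;
          }
          return NULL;
  }

The owner of the context drives the machine by posting events, e.g.
efc_sm_post_event(ctx, EFC_EVT_RESPONSE, NULL); state entry, re-entry
and exit are reported to the handlers as EFC_EVT_ENTER, EFC_EVT_REENTER
and EFC_EVT_EXIT.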

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_sm.c | 275 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_sm.h | 171 ++++++++++++++++++++++++
 2 files changed, 446 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h

diff --git a/drivers/scsi/elx/libefc/efc_sm.c b/drivers/scsi/elx/libefc/efc_sm.c
new file mode 100644
index 000000000000..4c2b844a23df
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.c
@@ -0,0 +1,275 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Generic state machine framework.
+ */
+#include "efc.h"
+#include "efc_sm.h"
+
+const char *efc_sm_id[] = {
+	"common",
+	"domain",
+	"login"
+};
+
+/**
+ * @brief Post an event to a context.
+ *
+ * @param ctx State machine context
+ * @param evt Event to post
+ * @param data Event-specific data (if any)
+ *
+ * @return 0 if successfully posted event; -1 if state machine
+ *         is disabled
+ */
+int
+efc_sm_post_event(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *data)
+{
+	if (ctx->current_state) {
+		ctx->current_state(ctx, evt, data);
+		return 0;
+	} else {
+		return -1;
+	}
+}
+
+/**
+ * @brief Transition to a new state.
+ */
+void
+efc_sm_transition(struct efc_sm_ctx_s *ctx,
+		  void *(*state)(struct efc_sm_ctx_s *,
+				 enum efc_sm_event_e, void *), void *data)
+
+{
+	if (ctx->current_state == state) {
+		efc_sm_post_event(ctx, EFC_EVT_REENTER, data);
+	} else {
+		efc_sm_post_event(ctx, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_sm_post_event(ctx, EFC_EVT_ENTER, data);
+	}
+}
+
+/**
+ * @brief Disable further state machine processing.
+ */
+void
+efc_sm_disable(struct efc_sm_ctx_s *ctx)
+{
+	ctx->current_state = NULL;
+}
+
+const char *efc_sm_event_name(enum efc_sm_event_e evt)
+{
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		return "EFC_EVT_ENTER";
+	case EFC_EVT_REENTER:
+		return "EFC_EVT_REENTER";
+	case EFC_EVT_EXIT:
+		return "EFC_EVT_EXIT";
+	case EFC_EVT_SHUTDOWN:
+		return "EFC_EVT_SHUTDOWN";
+	case EFC_EVT_RESPONSE:
+		return "EFC_EVT_RESPONSE";
+	case EFC_EVT_RESUME:
+		return "EFC_EVT_RESUME";
+	case EFC_EVT_TIMER_EXPIRED:
+		return "EFC_EVT_TIMER_EXPIRED";
+	case EFC_EVT_ERROR:
+		return "EFC_EVT_ERROR";
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		return "EFC_EVT_SRRS_ELS_REQ_OK";
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		return "EFC_EVT_SRRS_ELS_CMPL_OK";
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		return "EFC_EVT_SRRS_ELS_REQ_FAIL";
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		return "EFC_EVT_SRRS_ELS_CMPL_FAIL";
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		return "EFC_EVT_SRRS_ELS_REQ_RJT";
+	case EFC_EVT_NODE_ATTACH_OK:
+		return "EFC_EVT_NODE_ATTACH_OK";
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		return "EFC_EVT_NODE_ATTACH_FAIL";
+	case EFC_EVT_NODE_FREE_OK:
+		return "EFC_EVT_NODE_FREE_OK";
+	case EFC_EVT_ELS_REQ_TIMEOUT:
+		return "EFC_EVT_ELS_REQ_TIMEOUT";
+	case EFC_EVT_ELS_REQ_ABORTED:
+		return "EFC_EVT_ELS_REQ_ABORTED";
+	case EFC_EVT_ABORT_ELS:
+		return "EFC_EVT_ABORT_ELS";
+	case EFC_EVT_ELS_ABORT_CMPL:
+		return "EFC_EVT_ELS_ABORT_CMPL";
+
+	case EFC_EVT_DOMAIN_FOUND:
+		return "EFC_EVT_DOMAIN_FOUND";
+	case EFC_EVT_DOMAIN_ALLOC_OK:
+		return "EFC_EVT_DOMAIN_ALLOC_OK";
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+		return "EFC_EVT_DOMAIN_ALLOC_FAIL";
+	case EFC_EVT_DOMAIN_REQ_ATTACH:
+		return "EFC_EVT_DOMAIN_REQ_ATTACH";
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		return "EFC_EVT_DOMAIN_ATTACH_OK";
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		return "EFC_EVT_DOMAIN_ATTACH_FAIL";
+	case EFC_EVT_DOMAIN_LOST:
+		return "EFC_EVT_DOMAIN_LOST";
+	case EFC_EVT_DOMAIN_FREE_OK:
+		return "EFC_EVT_DOMAIN_FREE_OK";
+	case EFC_EVT_DOMAIN_FREE_FAIL:
+		return "EFC_EVT_DOMAIN_FREE_FAIL";
+	case EFC_EVT_HW_DOMAIN_REQ_ATTACH:
+		return "EFC_EVT_HW_DOMAIN_REQ_ATTACH";
+	case EFC_EVT_HW_DOMAIN_REQ_FREE:
+		return "EFC_EVT_HW_DOMAIN_REQ_FREE";
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		return "EFC_EVT_ALL_CHILD_NODES_FREE";
+
+	case EFC_EVT_SPORT_ALLOC_OK:
+		return "EFC_EVT_SPORT_ALLOC_OK";
+	case EFC_EVT_SPORT_ALLOC_FAIL:
+		return "EFC_EVT_SPORT_ALLOC_FAIL";
+	case EFC_EVT_SPORT_ATTACH_OK:
+		return "EFC_EVT_SPORT_ATTACH_OK";
+	case EFC_EVT_SPORT_ATTACH_FAIL:
+		return "EFC_EVT_SPORT_ATTACH_FAIL";
+	case EFC_EVT_SPORT_FREE_OK:
+		return "EFC_EVT_SPORT_FREE_OK";
+	case EFC_EVT_SPORT_FREE_FAIL:
+		return "EFC_EVT_SPORT_FREE_FAIL";
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
+		return "EFC_EVT_SPORT_TOPOLOGY_NOTIFY";
+	case EFC_EVT_HW_PORT_ALLOC_OK:
+		return "EFC_EVT_HW_PORT_ALLOC_OK";
+	case EFC_EVT_HW_PORT_ALLOC_FAIL:
+		return "EFC_EVT_HW_PORT_ALLOC_FAIL";
+	case EFC_EVT_HW_PORT_ATTACH_OK:
+		return "EFC_EVT_HW_PORT_ATTACH_OK";
+	case EFC_EVT_HW_PORT_REQ_ATTACH:
+		return "EFC_EVT_HW_PORT_REQ_ATTACH";
+	case EFC_EVT_HW_PORT_REQ_FREE:
+		return "EFC_EVT_HW_PORT_REQ_FREE";
+	case EFC_EVT_HW_PORT_FREE_OK:
+		return "EFC_EVT_HW_PORT_FREE_OK";
+
+	case EFC_EVT_NODE_FREE_FAIL:
+		return "EFC_EVT_NODE_FREE_FAIL";
+
+	case EFC_EVT_ABTS_RCVD:
+		return "EFC_EVT_ABTS_RCVD";
+
+	case EFC_EVT_NODE_MISSING:
+		return "EFC_EVT_NODE_MISSING";
+	case EFC_EVT_NODE_REFOUND:
+		return "EFC_EVT_NODE_REFOUND";
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		return "EFC_EVT_SHUTDOWN_IMPLICIT_LOGO";
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		return "EFC_EVT_SHUTDOWN_EXPLICIT_LOGO";
+
+	case EFC_EVT_ELS_FRAME:
+		return "EFC_EVT_ELS_FRAME";
+	case EFC_EVT_PLOGI_RCVD:
+		return "EFC_EVT_PLOGI_RCVD";
+	case EFC_EVT_FLOGI_RCVD:
+		return "EFC_EVT_FLOGI_RCVD";
+	case EFC_EVT_LOGO_RCVD:
+		return "EFC_EVT_LOGO_RCVD";
+	case EFC_EVT_PRLI_RCVD:
+		return "EFC_EVT_PRLI_RCVD";
+	case EFC_EVT_PRLO_RCVD:
+		return "EFC_EVT_PRLO_RCVD";
+	case EFC_EVT_PDISC_RCVD:
+		return "EFC_EVT_PDISC_RCVD";
+	case EFC_EVT_FDISC_RCVD:
+		return "EFC_EVT_FDISC_RCVD";
+	case EFC_EVT_ADISC_RCVD:
+		return "EFC_EVT_ADISC_RCVD";
+	case EFC_EVT_RSCN_RCVD:
+		return "EFC_EVT_RSCN_RCVD";
+	case EFC_EVT_SCR_RCVD:
+		return "EFC_EVT_SCR_RCVD";
+	case EFC_EVT_ELS_RCVD:
+		return "EFC_EVT_ELS_RCVD";
+	case EFC_EVT_LAST:
+		return "EFC_EVT_LAST";
+	case EFC_EVT_FCP_CMD_RCVD:
+		return "EFC_EVT_FCP_CMD_RCVD";
+
+	case EFC_EVT_RFT_ID_RCVD:
+		return "EFC_EVT_RFT_ID_RCVD";
+	case EFC_EVT_RFF_ID_RCVD:
+		return "EFC_EVT_RFF_ID_RCVD";
+	case EFC_EVT_GNN_ID_RCVD:
+		return "EFC_EVT_GNN_ID_RCVD";
+	case EFC_EVT_GPN_ID_RCVD:
+		return "EFC_EVT_GPN_ID_RCVD";
+	case EFC_EVT_GFPN_ID_RCVD:
+		return "EFC_EVT_GFPN_ID_RCVD";
+	case EFC_EVT_GFF_ID_RCVD:
+		return "EFC_EVT_GFF_ID_RCVD";
+	case EFC_EVT_GID_FT_RCVD:
+		return "EFC_EVT_GID_FT_RCVD";
+	case EFC_EVT_GID_PT_RCVD:
+		return "EFC_EVT_GID_PT_RCVD";
+	case EFC_EVT_RPN_ID_RCVD:
+		return "EFC_EVT_RPN_ID_RCVD";
+	case EFC_EVT_RNN_ID_RCVD:
+		return "EFC_EVT_RNN_ID_RCVD";
+	case EFC_EVT_RCS_ID_RCVD:
+		return "EFC_EVT_RCS_ID_RCVD";
+	case EFC_EVT_RSNN_NN_RCVD:
+		return "EFC_EVT_RSNN_NN_RCVD";
+	case EFC_EVT_RSPN_ID_RCVD:
+		return "EFC_EVT_RSPN_ID_RCVD";
+	case EFC_EVT_RHBA_RCVD:
+		return "EFC_EVT_RHBA_RCVD";
+	case EFC_EVT_RPA_RCVD:
+		return "EFC_EVT_RPA_RCVD";
+
+	case EFC_EVT_GIDPT_DELAY_EXPIRED:
+		return "EFC_EVT_GIDPT_DELAY_EXPIRED";
+
+	case EFC_EVT_ABORT_IO:
+		return "EFC_EVT_ABORT_IO";
+	case EFC_EVT_ABORT_IO_NO_RESP:
+		return "EFC_EVT_ABORT_IO_NO_RESP";
+	case EFC_EVT_IO_CMPL:
+		return "EFC_EVT_IO_CMPL";
+	case EFC_EVT_IO_CMPL_ERRORS:
+		return "EFC_EVT_IO_CMPL_ERRORS";
+	case EFC_EVT_RESP_CMPL:
+		return "EFC_EVT_RESP_CMPL";
+	case EFC_EVT_ABORT_CMPL:
+		return "EFC_EVT_ABORT_CMPL";
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		return "EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY";
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+		return "EFC_EVT_NODE_DEL_INI_COMPLETE";
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		return "EFC_EVT_NODE_DEL_TGT_COMPLETE";
+	case EFC_EVT_IO_ABORTED_BY_TMF:
+		return "EFC_EVT_IO_ABORTED_BY_TMF";
+	case EFC_EVT_IO_ABORT_IGNORED:
+		return "EFC_EVT_IO_ABORT_IGNORED";
+	case EFC_EVT_IO_FIRST_BURST:
+		return "EFC_EVT_IO_FIRST_BURST";
+	case EFC_EVT_IO_FIRST_BURST_ERR:
+		return "EFC_EVT_IO_FIRST_BURST_ERR";
+	case EFC_EVT_IO_FIRST_BURST_ABORTED:
+		return "EFC_EVT_IO_FIRST_BURST_ABORTED";
+
+	default:
+		break;
+	}
+	return "unknown";
+}
diff --git a/drivers/scsi/elx/libefc/efc_sm.h b/drivers/scsi/elx/libefc/efc_sm.h
new file mode 100644
index 000000000000..4e9370a8e362
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.h
@@ -0,0 +1,171 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/**
+ * Generic state machine framework declarations.
+ */
+
+#ifndef _EFC_SM_H
+#define _EFC_SM_H
+
+/**
+ * State Machine (SM) IDs.
+ */
+enum {
+	EFC_SM_COMMON = 0,
+	EFC_SM_DOMAIN,
+	EFC_SM_PORT,
+	EFC_SM_LOGIN,
+	EFC_SM_LAST
+};
+
+#define EFC_SM_EVENT_SHIFT		24
+#define EFC_SM_EVENT_START(id)		((id) << EFC_SM_EVENT_SHIFT)
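+
+/*
+ * Each state machine ID above gets its own event number space: domain
+ * events, for example, start at EFC_SM_EVENT_START(EFC_SM_DOMAIN),
+ * i.e. (1 << 24), so the owning state machine of an event value can be
+ * recovered by shifting it right by EFC_SM_EVENT_SHIFT.
+ */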
+
+/* String format of the above enums. */
+extern const char *efc_sm_id[];
+
+struct efc_sm_ctx_s;
+
+/*
+ * State Machine events.
+ */
+enum efc_sm_event_e {
+	/* Common Events */
+	EFC_EVT_ENTER = EFC_SM_EVENT_START(EFC_SM_COMMON),
+	EFC_EVT_REENTER,
+	EFC_EVT_EXIT,
+	EFC_EVT_SHUTDOWN,
+	EFC_EVT_ALL_CHILD_NODES_FREE,
+	EFC_EVT_RESUME,
+	EFC_EVT_TIMER_EXPIRED,
+
+	/* Domain Events */
+	EFC_EVT_RESPONSE = EFC_SM_EVENT_START(EFC_SM_DOMAIN),
+	EFC_EVT_ERROR,
+
+	EFC_EVT_DOMAIN_FOUND,
+	EFC_EVT_DOMAIN_ALLOC_OK,
+	EFC_EVT_DOMAIN_ALLOC_FAIL,
+	EFC_EVT_DOMAIN_REQ_ATTACH,
+	EFC_EVT_DOMAIN_ATTACH_OK,
+	EFC_EVT_DOMAIN_ATTACH_FAIL,
+	EFC_EVT_DOMAIN_LOST,
+	EFC_EVT_DOMAIN_FREE_OK,
+	EFC_EVT_DOMAIN_FREE_FAIL,
+	EFC_EVT_HW_DOMAIN_REQ_ATTACH,
+	EFC_EVT_HW_DOMAIN_REQ_FREE,
+
+	/* Sport Events */
+	EFC_EVT_SPORT_ALLOC_OK = EFC_SM_EVENT_START(EFC_SM_PORT),
+	EFC_EVT_SPORT_ALLOC_FAIL,
+	EFC_EVT_SPORT_ATTACH_OK,
+	EFC_EVT_SPORT_ATTACH_FAIL,
+	EFC_EVT_SPORT_FREE_OK,
+	EFC_EVT_SPORT_FREE_FAIL,
+	EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
+	EFC_EVT_HW_PORT_ALLOC_OK,
+	EFC_EVT_HW_PORT_ALLOC_FAIL,
+	EFC_EVT_HW_PORT_ATTACH_OK,
+	EFC_EVT_HW_PORT_REQ_ATTACH,
+	EFC_EVT_HW_PORT_REQ_FREE,
+	EFC_EVT_HW_PORT_FREE_OK,
+
+	/* Login Events */
+	EFC_EVT_SRRS_ELS_REQ_OK = EFC_SM_EVENT_START(EFC_SM_LOGIN),
+	EFC_EVT_SRRS_ELS_CMPL_OK,
+	EFC_EVT_SRRS_ELS_REQ_FAIL,
+	EFC_EVT_SRRS_ELS_CMPL_FAIL,
+	EFC_EVT_SRRS_ELS_REQ_RJT,
+	EFC_EVT_NODE_ATTACH_OK,
+	EFC_EVT_NODE_ATTACH_FAIL,
+	EFC_EVT_NODE_FREE_OK,
+	EFC_EVT_NODE_FREE_FAIL,
+	EFC_EVT_ELS_FRAME,
+	EFC_EVT_ELS_REQ_TIMEOUT,
+	EFC_EVT_ELS_REQ_ABORTED,
+	/* request an ELS IO be aborted */
+	EFC_EVT_ABORT_ELS,
+	/* ELS abort process complete */
+	EFC_EVT_ELS_ABORT_CMPL,
+
+	EFC_EVT_ABTS_RCVD,
+
+	/* node is not in the GID_PT payload */
+	EFC_EVT_NODE_MISSING,
+	/* node is allocated and in the GID_PT payload */
+	EFC_EVT_NODE_REFOUND,
+	/* node shutting down due to PLOGI recvd (implicit logo) */
+	EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+	/* node shutting down due to LOGO recvd/sent (explicit logo) */
+	EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+
+	EFC_EVT_PLOGI_RCVD,
+	EFC_EVT_FLOGI_RCVD,
+	EFC_EVT_LOGO_RCVD,
+	EFC_EVT_PRLI_RCVD,
+	EFC_EVT_PRLO_RCVD,
+	EFC_EVT_PDISC_RCVD,
+	EFC_EVT_FDISC_RCVD,
+	EFC_EVT_ADISC_RCVD,
+	EFC_EVT_RSCN_RCVD,
+	EFC_EVT_SCR_RCVD,
+	EFC_EVT_ELS_RCVD,
+
+	EFC_EVT_FCP_CMD_RCVD,
+
+	/* Used by fabric emulation */
+	EFC_EVT_RFT_ID_RCVD,
+	EFC_EVT_RFF_ID_RCVD,
+	EFC_EVT_GNN_ID_RCVD,
+	EFC_EVT_GPN_ID_RCVD,
+	EFC_EVT_GFPN_ID_RCVD,
+	EFC_EVT_GFF_ID_RCVD,
+	EFC_EVT_GID_FT_RCVD,
+	EFC_EVT_GID_PT_RCVD,
+	EFC_EVT_RPN_ID_RCVD,
+	EFC_EVT_RNN_ID_RCVD,
+	EFC_EVT_RCS_ID_RCVD,
+	EFC_EVT_RSNN_NN_RCVD,
+	EFC_EVT_RSPN_ID_RCVD,
+	EFC_EVT_RHBA_RCVD,
+	EFC_EVT_RPA_RCVD,
+
+	EFC_EVT_GIDPT_DELAY_EXPIRED,
+
+	/* SCSI Target Server events */
+	EFC_EVT_ABORT_IO,
+	EFC_EVT_ABORT_IO_NO_RESP,
+	EFC_EVT_IO_CMPL,
+	EFC_EVT_IO_CMPL_ERRORS,
+	EFC_EVT_RESP_CMPL,
+	EFC_EVT_ABORT_CMPL,
+	EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY,
+	EFC_EVT_NODE_DEL_INI_COMPLETE,
+	EFC_EVT_NODE_DEL_TGT_COMPLETE,
+	EFC_EVT_IO_ABORTED_BY_TMF,
+	EFC_EVT_IO_ABORT_IGNORED,
+	EFC_EVT_IO_FIRST_BURST,
+	EFC_EVT_IO_FIRST_BURST_ERR,
+	EFC_EVT_IO_FIRST_BURST_ABORTED,
+
+	/* Must be last */
+	EFC_EVT_LAST
+};
+
+int
+efc_sm_post_event(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *data);
+void
+efc_sm_transition(struct efc_sm_ctx_s *ctx,
+		  void *(*state)(struct efc_sm_ctx_s *ctx,
+				 enum efc_sm_event_e evt, void *arg),
+		  void *data);
+void efc_sm_disable(struct efc_sm_ctx_s *ctx);
+const char *efc_sm_event_name(enum efc_sm_event_e evt);
+
+#endif /* ! _EFC_SM_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 09/32] elx: libefc: Emulex FC discovery library APIs and definitions
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (7 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 08/32] elx: libefc: Generic state machine framework James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 10/32] elx: libefc: FC Domain state machine interfaces James Smart
                   ` (23 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- efc_sli_port_s: SLI/local FC port objects
- efc_domain_s: FC domain (aka fabric) objects
- efc_node_s: FC node (aka remote port) objects
- A sparse vector interface that manages the FC_ID-to-object lookup
  tables for these objects (a brief usage sketch follows below).
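
As an illustration only (the "efc" handle, "node" pointer and FC_ID
value below are made up for the example; only the efc_spv_*() calls are
interfaces added by this patch), a lookup table is used roughly as
follows:

  struct sparse_vector_s *lookup;
  struct efc_node_s *node;

  /* one table per port, keyed by 24-bit FC_ID (error handling omitted) */
  lookup = efc_spv_new(efc);

  /* remember a previously allocated remote node under its FC_ID ... */
  efc_spv_set(lookup, 0xfffffc, node);

  /* ... and find it again in constant time when a frame arrives */
  node = efc_spv_get(lookup, 0xfffffc);

  /* free the table and any rows it allocated */
  efc_spv_del(lookup);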

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc.h     | 188 +++++++++
 drivers/scsi/elx/libefc/efc_lib.c | 263 +++++++++++++
 drivers/scsi/elx/libefc/efclib.h  | 796 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 1247 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
 create mode 100644 drivers/scsi/elx/libefc/efclib.h

diff --git a/drivers/scsi/elx/libefc/efc.h b/drivers/scsi/elx/libefc/efc.h
new file mode 100644
index 000000000000..f24ddeef99b8
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * EFC linux driver common include file
+ */
+
+#if !defined(__EFC_H__)
+#define __EFC_H__
+
+/***************************************************************************
+ * OS specific includes
+ */
+#include <stdarg.h>
+#include <linux/version.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <asm-generic/ioctl.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <asm/byteorder.h>
+#include <linux/timer.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/sched.h>
+#include <asm/current.h>
+#include <asm/cacheflush.h>
+#include <linux/pagemap.h>
+#include <linux/kthread.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/random.h>
+#include <linux/sched.h>
+#include <linux/jiffies.h>
+#include <linux/ctype.h>
+#include <linux/debugfs.h>
+#include <linux/firmware.h>
+
+#include "../include/efc_common.h"
+#include "efclib.h"
+
+/* Linux driver specific definitions */
+
+#define EFC_MIN_DMA_ALIGNMENT		16
+/* maximum DMA allocation that is expected to reliably succeed  */
+#define EFC_MAX_DMA_ALLOC		(64 * 1024)
+
+#define EFC_MAX_LUN			256
+#define EFC_NUM_UNSOLICITED_FRAMES	1024
+
+#define EFC_MAX_NUMA_NODES		8
+
+/* Per driver instance (efc_t) definitions */
+#define EFC_MAX_DOMAINS		1
+#define EFC_MAX_REMOTE_NODES		2048
+
+/**
+ * @brief Sparse vector structure.
+ */
+struct sparse_vector_s {
+	void *os;
+	u32 max_idx;		/**< maximum index value */
+	void **array;			/**< pointer to 3D array */
+};
+
+/**
+ * @brief Sparse Vector API
+ *
+ * This is a trimmed down sparse vector implementation tuned to the problem of
+ * 24-bit FC_IDs. In this case, the 24-bit index value is broken down in three
+ * 8-bit values. These values are used to index up to three 256 element arrays.
+ * Arrays are allocated, only when needed. @n @n
+ * The lookup can complete in constant time (3 indexed array references). @n @n
+ * A typical use case would be that the fabric/directory FC_IDs would cause two
+ * rows to be allocated, and the fabric assigned remote nodes would cause two
+ * rows to be allocated, with the root row always allocated. This gives five
+ * rows of 256 x sizeof(void*), resulting in 10k.
+ */
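+/*
+ * For example (illustrative value), FC_ID 0x010203 is stored at
+ * array[0x01][0x02][0x03]; efc_spv_set() allocates the intermediate
+ * rows on first use, while efc_spv_get() never allocates.
+ */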
+/*!
+ * @defgroup spv Sparse Vector
+ */
+
+#define SPV_ROWLEN	256
+#define SPV_DIM		3
+
+void efc_spv_del(struct sparse_vector_s *spv);
+struct sparse_vector_s *efc_spv_new(void *os);
+void efc_spv_set(struct sparse_vector_s *sv, u32 idx, void *value);
+void *efc_spv_get(struct sparse_vector_s *sv, u32 idx);
+
+#define efc_assert(cond, ...)	\
+	do {			\
+		if (!(cond)) {	\
+			pr_err("%s(%d) assertion (%s) failed\n",\
+				__FILE__, __LINE__, #cond);\
+			dump_stack();\
+		} \
+	} while (0)
+
+int efc_dma_copy_in(struct efc_dma_s *dma, void *buffer,
+		    u32 buffer_length);
+
+#include "efc_sm.h"
+
+struct efc_drv_s {
+	bool attached;
+};
+
+#define efc_is_fc_initiator_enabled()	(efc->enable_ini)
+#define efc_is_fc_target_enabled()	(efc->enable_tgt)
+
+#define domain_sm_trace(domain)						\
+	efc_log_debug(domain->efc, "[domain:%s] %-20s %-20s\n",\
+		      domain->display_name, __func__, efc_sm_event_name(evt))\
+
+#define domain_trace(domain, fmt, ...) \
+	efc_log_debug(domain->efc,\
+		      "[%s]" fmt, domain->display_name, ##__VA_ARGS__)\
+
+#define node_sm_trace()				\
+	efc_log_debug(node->efc,\
+		"[%s] %-20s\n", node->display_name, efc_sm_event_name(evt))\
+
+#define sport_sm_trace(sport)\
+	efc_log_debug(sport->efc,\
+		"[%s] %-20s\n", sport->display_name, efc_sm_event_name(evt))\
+
+enum efc_hw_rtn_e {
+	EFC_HW_RTN_SUCCESS = 0,
+	EFC_HW_RTN_SUCCESS_SYNC = 1,
+	EFC_HW_RTN_ERROR = -1,
+	EFC_HW_RTN_NO_RESOURCES = -2,
+	EFC_HW_RTN_NO_MEMORY = -3,
+	EFC_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFC_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFC_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFC_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFC_HW_RTN_IS_ERROR(e) ((e) < 0)
+
+enum efc_scsi_del_initiator_reason_e {
+	EFC_SCSI_INITIATOR_DELETED,
+	EFC_SCSI_INITIATOR_MISSING,
+};
+
+enum efc_scsi_del_target_reason_e {
+	EFC_SCSI_TARGET_DELETED,
+	EFC_SCSI_TARGET_MISSING,
+};
+
+#define EFC_SCSI_CALL_COMPLETE	0 /* All work is done */
+#define EFC_SCSI_CALL_ASYNC	1 /* Work will be completed asynchronously */
+
+#include "efc_domain.h"
+#include "efc_sport.h"
+#include "efc_node.h"
+
+/* Timeouts */
+#ifndef EFC_FC_ELS_SEND_DEFAULT_TIMEOUT
+#define EFC_FC_ELS_SEND_DEFAULT_TIMEOUT		0
+#endif
+
+#ifndef EFC_FC_ELS_DEFAULT_RETRIES
+#define EFC_FC_ELS_DEFAULT_RETRIES		3
+#endif
+
+#ifndef EFC_FC_FLOGI_TIMEOUT_SEC
+#define EFC_FC_FLOGI_TIMEOUT_SEC		5 /* shorter than default */
+#endif
+
+#ifndef EFC_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC
+#define EFC_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC	30000000 /* 30 seconds */
+#endif
+
+#endif /* __EFC_H__ */
diff --git a/drivers/scsi/elx/libefc/efc_lib.c b/drivers/scsi/elx/libefc/efc_lib.c
new file mode 100644
index 000000000000..c2696193b6da
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_lib.c
@@ -0,0 +1,263 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include "efc.h"
+
+int efcport_init(struct efc_lport *efc)
+{
+	u32 rc = 0;
+
+	spin_lock_init(&efc->lock);
+	INIT_LIST_HEAD(&efc->vport_list);
+
+	/* Create Node pool */
+	rc = efc_node_create_pool(efc, EFC_MAX_REMOTE_NODES);
+	if (rc)
+		efc_log_err(efc, "Can't allocate node pool\n");
+
+	return rc;
+}
+
+void efcport_destroy(struct efc_lport *efc)
+{
+	efc_node_free_pool(efc);
+}
+
+/**
+ * @brief Sparse Vector API.
+ *
+ * This is a trimmed down sparse vector implementation tuned to the problem of
+ * 24-bit FC_IDs. In this case, the 24-bit index value is broken down in three
+ * 8-bit values. These values are used to index up to three 256 element arrays.
+ * Arrays are allocated, only when needed. @n @n
+ * The lookup can complete in constant time (3 indexed array references). @n @n
+ * A typical use case would be that the fabric/directory FC_IDs would cause two
+ * rows to be allocated, and the fabric assigned remote nodes would cause two
+ * rows to be allocated, with the root row always allocated. This gives five
+ * rows of 256 x sizeof(void*), resulting in 10k.
+ */
+
+/**
+ * @ingroup spv
+ * @brief Allocate a new sparse vector row.
+ *
+ * @par Description
+ * A new sparse vector row is allocated.
+ *
+ * @param rowcount Number of elements in a row.
+ *
+ * @return Returns the pointer to a row, or NULL on allocation failure.
+ */
+static void **
+efc_spv_new_row(u32 rowcount)
+{
+	return kzalloc(sizeof(void *) * rowcount, GFP_ATOMIC);
+}
+
+/**
+ * @ingroup spv
+ * @brief Delete row recursively.
+ *
+ * @par Description
+ * This function recursively deletes the rows in this sparse vector
+ *
+ * @param os OS handle
+ * @param a Pointer to the row.
+ * @param n Number of elements in the row.
+ * @param depth Depth of deleting.
+ *
+ * @return None.
+ */
+static void
+_efc_spv_del(void *os, void **a, u32 n, u32 depth)
+{
+	if (a) {
+		if (depth) {
+			u32 i;
+
+			for (i = 0; i < n; i++)
+				_efc_spv_del(os, a[i], n, depth - 1);
+
+			kfree(a);
+		}
+	}
+}
+
+/**
+ * @ingroup spv
+ * @brief Delete a sparse vector.
+ *
+ * @par Description
+ * The sparse vector is freed.
+ *
+ * @param spv Pointer to the sparse vector object.
+ */
+void
+efc_spv_del(struct sparse_vector_s *spv)
+{
+	if (spv) {
+		_efc_spv_del(spv->os, spv->array, SPV_ROWLEN, SPV_DIM);
+		kfree(spv);
+	}
+}
+
+/**
+ * @ingroup spv
+ * @brief Instantiate a new sparse vector object.
+ *
+ * @par Description
+ * A new sparse vector is allocated.
+ *
+ * @param os OS handle
+ *
+ * @return Returns the pointer to the sparse vector, or NULL.
+ */
+struct sparse_vector_s *
+efc_spv_new(void *os)
+{
+	struct sparse_vector_s *spv;
+	u32 i;
+
+	spv = kzalloc(sizeof(*spv), GFP_ATOMIC);
+	if (!spv)
+		return NULL;
+
+	spv->os = os;
+	spv->max_idx = 1;
+	for (i = 0; i < SPV_DIM; i++)
+		spv->max_idx *= SPV_ROWLEN;
+
+	return spv;
+}
+
+/**
+ * @ingroup spv
+ * @brief Return the address of a cell.
+ *
+ * @par Description
+ * Returns the address of a cell, allocates sparse rows as needed if the
+ *         alloc_new_rows parameter is set.
+ *
+ * @param sv Pointer to the sparse vector.
+ * @param idx Index of which to return the address.
+ * @param alloc_new_rows If TRUE, then new rows may be allocated to set values,
+ *                       Set to FALSE for retrieving values.
+ *
+ * @return Returns the pointer to the cell, or NULL.
+ */
+static void *
+efc_spv_new_cell(struct sparse_vector_s *sv, u32 idx,
+		 bool alloc_new_rows)
+{
+	u32 a = (idx >> 16) & 0xff;
+	u32 b = (idx >>  8) & 0xff;
+	u32 c = (idx >>  0) & 0xff;
+	void **p;
+
+	if (idx >= sv->max_idx)
+		return NULL;
+
+	if (!sv->array) {
+		sv->array = (alloc_new_rows ?
+			     efc_spv_new_row(SPV_ROWLEN) : NULL);
+		if (!sv->array)
+			return NULL;
+	}
+	p = sv->array;
+	if (!p[a]) {
+		p[a] = (alloc_new_rows ? efc_spv_new_row(SPV_ROWLEN) : NULL);
+		if (!p[a])
+			return NULL;
+	}
+	p = p[a];
+	if (!p[b]) {
+		p[b] = (alloc_new_rows ? efc_spv_new_row(SPV_ROWLEN) : NULL);
+		if (!p[b])
+			return NULL;
+	}
+	p = p[b];
+
+	return &p[c];
+}
+
+/**
+ * @ingroup spv
+ * @brief Set the sparse vector cell value.
+ *
+ * @par Description
+ * Sets the sparse vector at @c idx to @c value.
+ *
+ * @param sv Pointer to the sparse vector.
+ * @param idx Index of which to store.
+ * @param value Value to store.
+ *
+ * @return None.
+ */
+void
+efc_spv_set(struct sparse_vector_s *sv, u32 idx, void *value)
+{
+	void **ref = efc_spv_new_cell(sv, idx, true);
+
+	if (ref)
+		*ref = value;
+}
+
+/**
+ * @ingroup spv
+ * @brief Return the sparse vector cell value.
+ *
+ * @par Description
+ * Returns the value at @c idx.
+ *
+ * @param sv Pointer to the sparse vector.
+ * @param idx Index of which to return the value.
+ *
+ * @return Returns the cell value, or NULL.
+ */
+void *
+efc_spv_get(struct sparse_vector_s *sv, u32 idx)
+{
+	void **ref = efc_spv_new_cell(sv, idx, false);
+
+	if (ref)
+		return *ref;
+
+	return NULL;
+}
+
+/*
+ * @brief copy into dma buffer
+ *
+ * Copies into a dma buffer, updates the len element
+ *
+ * @param dma DMA descriptor
+ * @param buffer address of buffer to copy from
+ * @param buffer_length buffer length in bytes
+ *
+ * @return returns bytes copied for success,
+ * a negative error code value for failure.
+ */
+
+int
+efc_dma_copy_in(struct efc_dma_s *dma, void *buffer, u32 buffer_length)
+{
+	if (!dma)
+		return -1;
+	if (!buffer)
+		return -1;
+	if (buffer_length == 0)
+		return 0;
+	if (buffer_length > dma->size)
+		buffer_length = dma->size;
+	memcpy(dma->virt, buffer, buffer_length);
+	dma->len = buffer_length;
+	return buffer_length;
+}
diff --git a/drivers/scsi/elx/libefc/efclib.h b/drivers/scsi/elx/libefc/efclib.h
new file mode 100644
index 000000000000..bbb80bbd2ab1
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efclib.h
@@ -0,0 +1,796 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCLIB_H__)
+#define __EFCLIB_H__
+
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "scsi/fc/fc_ns.h"
+#include "scsi/fc/fc_gs.h"
+#include "scsi/fc_frame.h"
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport_fc.h>
+#include <linux/completion.h>
+#include "../include/efc_common.h"
+
+#define EFC_SERVICE_PARMS_LENGTH	0x74
+#define EFC_DISPLAY_NAME_LENGTH		32
+#define EFC_DISPLAY_BUS_INFO_LENGTH	16
+
+#define EFC_WWN_LENGTH			32
+
+/**
+ * Local port topology.
+ */
+
+enum efc_sport_topology_e {
+	EFC_SPORT_TOPOLOGY_UNKNOWN = 0,
+	EFC_SPORT_TOPOLOGY_FABRIC,
+	EFC_SPORT_TOPOLOGY_P2P,
+	EFC_SPORT_TOPOLOGY_LOOP,
+};
+
+/**
+ * Common (transport agnostic) shared declarations
+ */
+
+#define enable_target_rscn(efc)	1
+
+enum efc_node_shutd_rsn_e {
+	EFC_NODE_SHUTDOWN_DEFAULT = 0,
+	EFC_NODE_SHUTDOWN_EXPLICIT_LOGO,
+	EFC_NODE_SHUTDOWN_IMPLICIT_LOGO,
+};
+
+enum efc_node_send_ls_acc_e {
+	EFC_NODE_SEND_LS_ACC_NONE = 0,
+	EFC_NODE_SEND_LS_ACC_PLOGI,
+	EFC_NODE_SEND_LS_ACC_PRLI,
+};
+
+#define EFC_LINK_STATUS_UP   0
+#define EFC_LINK_STATUS_DOWN 1
+
+/* State machine context header  */
+struct efc_sm_ctx_s {
+	void *(*current_state)(struct efc_sm_ctx_s *ctx,
+			       u32 evt, void *arg);
+
+	const char *description;
+	void	*app;			/** Application-specific handle. */
+};
+
+/**
+ * @brief Description of discovered Fabric Domain
+ * struct efc_domain_record_s - libefc discovered Fabric Domain
+ * @index:	FCF table index (used in REG_FCFI)
+ * @priority:	FCF reported priority
+ * @address:	Switch WWN
+ * @vlan:	bitmap of valid VLAN IDs
+ * @loop:	FC-AL position map
+ * @speed:	link speed
+ * @fc_id:	our port's fc_id
+ */
+struct efc_domain_record_s {
+	u32	index;
+	u32	priority;
+	u8		address[6];
+	u8		wwn[8];
+	union {
+		u8	vlan[512];
+		u8	loop[128];
+	} map;
+	u32	speed;
+	u32	fc_id;
+	bool		is_loop;
+	bool		is_nport;
+};
+
+/*
+ * @brief Fabric/Domain events
+ */
+enum efc_hw_domain_event_e {
+	EFC_HW_DOMAIN_ALLOC_OK,		/**< domain successfully allocated */
+	EFC_HW_DOMAIN_ALLOC_FAIL,	/**< domain allocation failed */
+	EFC_HW_DOMAIN_ATTACH_OK,	/**< successfully attached to domain */
+	EFC_HW_DOMAIN_ATTACH_FAIL,	/**< domain attach failed */
+	EFC_HW_DOMAIN_FREE_OK,		/**< successfully freed domain */
+	EFC_HW_DOMAIN_FREE_FAIL,	/**< domain free failed */
+	EFC_HW_DOMAIN_LOST,
+	/**< prev discovered domain no longer available */
+	EFC_HW_DOMAIN_FOUND,		/**< new domain discovered */
+	/**< prev discovered domain props have changed */
+	EFC_HW_DOMAIN_CHANGED,
+};
+
+enum efc_hw_port_event_e {
+	EFC_HW_PORT_ALLOC_OK,		/**< port successfully allocated */
+	EFC_HW_PORT_ALLOC_FAIL,		/**< port allocation failed */
+	EFC_HW_PORT_ATTACH_OK,		/**< successfully attached to port */
+	EFC_HW_PORT_ATTACH_FAIL,	/**< port attach failed */
+	EFC_HW_PORT_FREE_OK,		/**< successfully freed port */
+	EFC_HW_PORT_FREE_FAIL,		/**< port free failed */
+};
+
+enum efc_hw_remote_node_event_e {
+	EFC_HW_NODE_ATTACH_OK,
+	EFC_HW_NODE_ATTACH_FAIL,
+	EFC_HW_NODE_FREE_OK,
+	EFC_HW_NODE_FREE_FAIL,
+	EFC_HW_NODE_FREE_ALL_OK,
+	EFC_HW_NODE_FREE_ALL_FAIL,
+};
+
+enum efc_hw_node_els_event_e {
+	EFC_HW_SRRS_ELS_REQ_OK,
+	EFC_HW_SRRS_ELS_CMPL_OK,
+	EFC_HW_SRRS_ELS_REQ_FAIL,
+	EFC_HW_SRRS_ELS_CMPL_FAIL,
+	EFC_HW_SRRS_ELS_REQ_RJT,
+	EFC_HW_ELS_REQ_ABORTED,
+};
+
+/**
+ * @brief SLI Port object
+ *
+ * The SLI Port object represents the connection between the driver and the
+ * FC/FCoE domain. In some topologies / hardware, it is possible to have
+ * multiple connections to the domain via different WWN. Each would require
+ * a separate SLI port object.
+ *
+ * @efc:		pointer to efc
+ * @tgt_id:		target id
+ * @display_name:	sport display name
+ * @domain:		current fabric domain
+ * @is_vport:		this SPORT is a virtual port
+ * @wwpn:		WWPN from HW (host endian)
+ * @wwnn:		WWNN from HW (host endian)
+ * @node_list:		list of nodes
+ * @ini_sport:		initiator backend private sport data
+ * @tgt_sport:		target backend private sport data
+ * @tgt_data:		target backend private pointer
+ * @ini_data:		initiator backend private pointer
+ * @ctx:		state machine context
+ * @hw:			pointer to HW
+ * @indicator:		VPI
+ * @fc_id:		FC address
+ * @dma:		memory for Service Parameters
+ * @wwnn_str:		WWN (ASCII)
+ * @sli_wwpn:		WWPN (wire endian)
+ * @sli_wwnn:		WWNN (wire endian)
+ * @free_req_pending:	Free request received while waiting for attach response
+ * @sm:			sport context state machine
+ * @lookup:		fc_id to node lookup object
+ * @enable_ini:		SCSI initiator enabled for this node
+ * @enable_tgt:		SCSI target enabled for this node
+ * @enable_rscn:	This SPORT will be expecting RSCN
+ * @shutting_down:	sport in process of shutting down
+ * @p2p_winner:		TRUE if we're the point-to-point winner
+ * @topology:		topology: fabric/p2p/unknown
+ * @service_params:	Login parameters
+ * @p2p_remote_port_id:	Remote node's port id for p2p
+ * @p2p_port_id:	our port's id
+ */
+struct efc_sli_port_s {
+	struct list_head list_entry;
+	struct efc_lport *efc;
+	u32 tgt_id;
+	u32 index;
+	u32 instance_index;
+	char display_name[EFC_DISPLAY_NAME_LENGTH];
+	struct efc_domain_s *domain;
+	bool is_vport;
+	u64	wwpn;
+	u64	wwnn;
+	struct list_head node_list;
+	void	*ini_sport;
+	void	*tgt_sport;
+	void	*tgt_data;
+	void	*ini_data;
+
+	/*
+	 * Members private to HW/SLI
+	 */
+	void	*hw;
+	u32	indicator;
+	u32	fc_id;
+	struct efc_dma_s	dma;
+
+	u8		wwnn_str[EFC_WWN_LENGTH];
+	__be64		sli_wwpn;
+	__be64		sli_wwnn;
+	bool		free_req_pending;
+	bool		attached;
+
+	/*
+	 * Implementation specific fields allowed here
+	 */
+	struct efc_sm_ctx_s	sm;
+	struct sparse_vector_s *lookup;
+	bool		enable_ini;
+	bool		enable_tgt;
+	bool		enable_rscn;
+	bool		shutting_down;
+	bool		p2p_winner;
+	enum efc_sport_topology_e topology;
+	u8		service_params[EFC_SERVICE_PARMS_LENGTH];
+	u32	p2p_remote_port_id;
+	u32	p2p_port_id;
+};
+
+/**
+ * @brief Fibre Channel domain object
+ *
+ * This object is a container for the various SLI components needed
+ * to connect to the domain of a FC or FCoE switch
+ * @efc:		pointer back to efc
+ * @instance_index:	unique instance index value
+ * @display_name:	Node display name
+ * @sport_list:		linked list of SLI ports
+ * @ini_domain:		initiator backend private domain data
+ * @tgt_domain:		target backend private domain data
+ * @hw:			pointer to HW
+ * @sm:			state machine context
+ * @fcf:		FC Forwarder table index
+ * @fcf_indicator:	FCFI
+ * @vlan_id:		VLAN tag for this domain
+ * @indicator:		VFI
+ * @dma:		memory for Service Parameters
+ * @req_rediscover_fcf:	TRUE if fcf rediscover is needed
+ *			(in response to Vlink Clear async event)
+ * @fcf_wwn:		WWN for FCF/switch
+ * @drvsm:		driver domain sm context
+ * @drvsm_lock:		driver domain sm lock
+ * @attached:		set true after attach completes
+ * @is_fc:		is FC
+ * @is_loop:		is loop topology
+ * @is_nlport:		is public loop
+ * @domain_found_pending:A domain-found event is pending; drec is updated
+ * @req_domain_free:	True if domain object should be free'd
+ * @req_accept_frames:	set in domain state machine to enable frames
+ * @domain_notify_pend:	Set in domain SM to avoid duplicate node event post
+ * @pending_drec:	Pending drec if a domain found is pending
+ * @service_params:	any sports service parameters
+ * @flogi_service_params:Fabric/P2p service parameters from FLOGI
+ * @lookup:		d_id to node lookup object
+ * @sport:		Pointer to first (physical) SLI port
+ */
+struct efc_domain_s {
+	struct list_head	list_entry;
+	struct efc_lport *efc;
+	u32 instance_index;
+	char display_name[EFC_DISPLAY_NAME_LENGTH];
+	struct list_head sport_list;
+	void *ini_domain;
+	void *tgt_domain;
+
+	/* Declarations private to HW/SLI */
+	void *hw;
+	u32 fcf;
+	u32 fcf_indicator;
+	u32 indicator;
+	struct efc_dma_s dma;
+	bool req_rediscover_fcf;
+
+	/* Declarations private to FC transport */
+	u64 fcf_wwn;
+	struct efc_sm_ctx_s drvsm;
+	bool attached;
+	bool is_fc;
+	bool is_loop;
+	bool is_nlport;
+	bool domain_found_pending;
+	bool req_domain_free;
+	bool req_accept_frames;
+	bool domain_notify_pend;
+
+	struct efc_domain_record_s pending_drec;
+	u8 service_params[EFC_SERVICE_PARMS_LENGTH];
+	u8 flogi_service_params[EFC_SERVICE_PARMS_LENGTH];
+
+	struct sparse_vector_s *lookup;
+
+	struct efc_sli_port_s *sport;
+	u32 sport_instance_count;
+};
+
+/**
+ * @brief Remote Node object
+ *
+ * This object represents a connection between the SLI port and another
+ * Nx_Port on the fabric. Note this can be either a well known port such
+ * as a F_Port (i.e. ff:ff:fe) or another N_Port.
+ * @indicator:		RPI
+ * @fc_id:		FC address
+ * @attached:		true if attached
+ * @node_group:		true if in node group
+ * @free_group:		true if the node group should be free'd
+ * @sport:		associated SLI port
+ * @node:		associated node
+ */
+struct efc_remote_node_s {
+	/*
+	 * Members private to HW/SLI
+	 */
+	u32	indicator;
+	u32	index;
+	u32	fc_id;
+
+	bool attached;
+	bool node_group;
+	bool free_group;
+
+	struct efc_sli_port_s	*sport;
+	void *node;
+};
+
+/**
+ * @brief FC Node object
+ * @efc:		pointer back to efc structure
+ * @instance_index:	unique instance index value
+ * @display_name:	Node display name
+ * @hold_frames:	hold incoming frames if true
+ * @lock:		node wide lock
+ * @active_ios:		active I/O's for this node
+ * @max_wr_xfer_size:	Max write IO size per phase for the transport
+ * @ini_node:		backend initiator private node data
+ * @tgt_node:		backend target private node data
+ * @rnode:		Remote node
+ * @sm:			state machine context
+ * @evtdepth:		current event posting nesting depth
+ * @req_free:		this node is to be free'd
+ * @attached:		node is attached (REGLOGIN complete)
+ * @fcp_enabled:	node is enabled to handle FCP
+ * @rscn_pending:	for name server node RSCN is pending
+ * @send_plogi:		send PLOGI accept, upon completion of node attach
+ * @send_plogi_acc:	TRUE if io_alloc() is enabled.
+ * @send_ls_acc:	type of LS acc to send
+ * @ls_acc_io:		SCSI IO for LS acc
+ * @ls_acc_oxid:	OX_ID for pending accept
+ * @ls_acc_did:		D_ID for pending accept
+ * @shutdown_reason:	reason for node shutdown
+ * @sparm_dma_buf:	service parameters buffer
+ * @service_params:	plogi/acc frame from remote device
+ * @pend_frames_lock:	lock for inbound pending frames list
+ * @pend_frames:	inbound pending frames list
+ * @pend_frames_processed:count of frames processed in hold frames interval
+ * @ox_id_in_use:	used to verify one-at-a-time use of ox_id
+ * @els_retries_remaining:for ELS, number of retries remaining
+ * @els_req_cnt:	number of outstanding ELS requests
+ * @els_cmpl_cnt:	number of outstanding ELS completions
+ * @abort_cnt:		Abort counter for debugging purposes
+ * @current_state_name:	current node state
+ * @prev_state_name:	previous node state
+ * @current_evt:	current event
+ * @prev_evt:		previous event
+ * @targ:		node is target capable
+ * @init:		node is init capable
+ * @refound:		Handle node refound case when node is being deleted
+ * @els_io_pend_list:	list of pending (not yet processed) ELS IOs
+ * @els_io_active_list:	list of active (processed) ELS IOs
+ * @nodedb_state:	Node debugging, saved state
+ * @gidpt_delay_timer:	GIDPT delay timer
+ * @time_last_gidpt_msec:Start time of last target RSCN GIDPT
+ * @wwnn:		remote port WWNN
+ * @wwpn:		remote port WWPN
+ * @chained_io_count:	Statistics : count of IOs with chained SGL's
+ */
+struct efc_node_s {
+	struct list_head list_entry;
+	struct efc_lport *efc;
+	u32 instance_index;
+	char display_name[EFC_DISPLAY_NAME_LENGTH];
+	struct efc_sli_port_s *sport;
+	bool hold_frames;
+	spinlock_t active_ios_lock;
+	struct list_head active_ios;
+	u64 max_wr_xfer_size;
+	void *ini_node;
+	void *tgt_node;
+
+	struct efc_remote_node_s	rnode;
+	/* Declarations private to FC transport */
+	struct efc_sm_ctx_s		sm;
+	u32		evtdepth;
+
+	bool req_free;
+	bool attached;
+	bool fcp_enabled;
+	bool rscn_pending;
+	bool send_plogi;
+	bool send_plogi_acc;
+	bool io_alloc_enabled;
+
+	enum efc_node_send_ls_acc_e	send_ls_acc;
+	void			*ls_acc_io;
+	u32		ls_acc_oxid;
+	u32		ls_acc_did;
+	enum efc_node_shutd_rsn_e	shutdown_reason;
+	struct efc_dma_s		sparm_dma_buf;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	spinlock_t		pend_frames_lock;
+	struct list_head	pend_frames;
+	u32		pend_frames_processed;
+	u32		ox_id_in_use;
+	u32		els_retries_remaining;
+	u32		els_req_cnt;
+	u32		els_cmpl_cnt;
+	u32		abort_cnt;
+
+	char current_state_name[EFC_DISPLAY_NAME_LENGTH];
+	char prev_state_name[EFC_DISPLAY_NAME_LENGTH];
+	int		current_evt;
+	int		prev_evt;
+	bool targ;
+	bool init;
+	bool refound;
+	struct list_head	els_io_pend_list;
+	struct list_head	els_io_active_list;
+
+	void *(*nodedb_state)(struct efc_sm_ctx_s *ctx,
+			      u32 evt, void *arg);
+	struct timer_list		gidpt_delay_timer;
+	time_t			time_last_gidpt_msec;
+
+	char wwnn[EFC_WWN_LENGTH];
+	char wwpn[EFC_WWN_LENGTH];
+
+	u32		chained_io_count;
+};
+
+/**
+ * @brief Virtual port specification
+ *
+ * Collection of the information required to restore a virtual port across
+ * link events
+ * @domain_instance:	instance index of this domain for the sport
+ * @wwnn:		node name
+ * @wwpn:		port name
+ * @fc_id:		port id
+ * @tgt_data:		target backend pointer
+ * @ini_data:		initiator backend pointer
+ * @sport:		Used to match record after attaching for update
+ *
+ */
+
+struct efc_vport_spec_s {
+	struct list_head list_entry;
+	u32 domain_instance;
+	u64 wwnn;
+	u64 wwpn;
+	u32 fc_id;
+	bool enable_tgt;
+	bool enable_ini;
+	void	*tgt_data;
+	void	*ini_data;
+	struct efc_sli_port_s *sport;
+};
+
+#define node_printf(node, fmt, args...) \
+	pr_info("[%s] " fmt, node->display_name, ##args)
+
+/**
+ * @brief Node SM IO Context Callback structure
+ *
+ * Structure used as callback argument
+ * @status:	completion status
+ * @ext_status:	extended completion status
+ * @header:	completion header buffer
+ * @payload:	completion payload buffers
+ * @els_rsp:	ELS response buffer
+ */
+
+struct efc_node_cb_s {
+	int status;
+	int ext_status;
+	struct efc_hw_rq_buffer_s *header;
+	struct efc_hw_rq_buffer_s *payload;
+	struct efc_dma_s els_rsp;
+};
+
+/*
+ * @brief HW unsolicited callback status
+ */
+enum efc_hw_unsol_status_e {
+	EFC_HW_UNSOL_SUCCESS,
+	EFC_HW_UNSOL_ERROR,
+	EFC_HW_UNSOL_ABTS_RCVD,
+	EFC_HW_UNSOL_MAX,	/**< must be last */
+};
+
+/*
+ * @brief Defines the type of RQ buffer
+ */
+enum efc_hw_rq_buffer_type_e {
+	EFC_HW_RQ_BUFFER_TYPE_HDR,
+	EFC_HW_RQ_BUFFER_TYPE_PAYLOAD,
+	EFC_HW_RQ_BUFFER_TYPE_MAX,
+};
+
+/*
+ * @brief Defines a wrapper for the RQ payload buffers so that we can place it
+ * back on the proper queue.
+ */
+struct efc_hw_rq_buffer_s {
+	u16 rqindex;
+	struct efc_dma_s dma;
+};
+
+/*
+ * @brief Defines a general FC sequence object,
+ * consisting of a header, payload buffers
+ * and a HW IO in the case of port owned XRI
+ */
+struct efc_hw_sequence_s {
+	struct list_head list_entry;
+	void *hw;	/* HW that owns this sequence */
+	/* sequence information */
+	u8 fcfi;		/* FCFI associated with sequence */
+	u8 auto_xrdy;	/* If auto XFER_RDY was generated */
+	u8 out_of_xris;	/* If IO would have been
+			 * assisted if XRIs were available
+			 */
+	struct efc_hw_rq_buffer_s *header;
+	struct efc_hw_rq_buffer_s *payload; /* rcvd frame payload buff */
+
+	/* other "state" information from the SRB (sequence coalescing) */
+	enum efc_hw_unsol_status_e status;
+	u32 xri;		/* XRI assoc with seq; seq coalescing only */
+	struct efct_hw_io_s *hio;/* HW IO */
+
+	void *hw_priv;		/* HW private context */
+};
+
+struct libefc_function_template {
+	/*Domain*/
+	int (*hw_domain_alloc)(struct efc_lport *efc,
+			       struct efc_domain_s *domain, u32 fcf);
+
+	int (*hw_domain_attach)(struct efc_lport *efc,
+				struct efc_domain_s *domain, u32 fc_id);
+
+	int (*hw_domain_free)(struct efc_lport *hw, struct efc_domain_s *d);
+
+	int (*hw_domain_force_free)(struct efc_lport *efc,
+				    struct efc_domain_s *domain);
+	int (*new_domain)(struct efc_lport *efc, struct efc_domain_s *d);
+	void (*del_domain)(struct efc_lport *efc, struct efc_domain_s *d);
+
+	void (*domain_hold_frames)(struct efc_lport *efc,
+				   struct efc_domain_s *domain);
+	void (*domain_accept_frames)(struct efc_lport *efc,
+				     struct efc_domain_s *domain);
+
+	/*Sport*/
+	int (*hw_port_alloc)(struct efc_lport *hw, struct efc_sli_port_s *sp,
+			     struct efc_domain_s *d, u8 *val);
+
+	int (*hw_port_attach)(struct efc_lport *hw, struct efc_sli_port_s *sp,
+			      u32 fc_id);
+
+	int (*hw_port_free)(struct efc_lport *hw, struct efc_sli_port_s *sp);
+
+	int (*new_sport)(struct efc_lport *efc, struct efc_sli_port_s *sp);
+	void (*del_sport)(struct efc_lport *efc, struct efc_sli_port_s *sp);
+
+	/*Node*/
+	int (*hw_node_alloc)(struct efc_lport *hw, struct efc_remote_node_s *n,
+			     u32 fc_addr, struct efc_sli_port_s *sport);
+
+	int (*hw_node_attach)(struct efc_lport *hw, struct efc_remote_node_s *n,
+			      struct efc_dma_s *sparams);
+
+	int (*hw_node_detach)(struct efc_lport *hw,
+			      struct efc_remote_node_s *r);
+
+	int (*hw_node_free_resources)(struct efc_lport *efc,
+				      struct efc_remote_node_s *node);
+	int (*node_purge_pending)(struct efc_lport *efc, struct efc_node_s *n);
+
+	void (*node_io_cleanup)(struct efc_lport *efc, struct efc_node_s *node,
+				bool force);
+	void (*node_els_cleanup)(struct efc_lport *efc, struct efc_node_s *node,
+				 bool force);
+	void (*node_abort_all_els)(struct efc_lport *efc, struct efc_node_s *n);
+
+	/*Scsi*/
+
+	void (*scsi_io_alloc_disable)(struct efc_lport *efc,
+				      struct efc_node_s *node);
+	void (*scsi_io_alloc_enable)(struct efc_lport *efc,
+				     struct efc_node_s *node);
+
+	int (*scsi_validate_node)(struct efc_lport *efc, struct efc_node_s *n);
+	int (*scsi_new_node)(struct efc_lport *efc, struct efc_node_s *n);
+
+	int (*scsi_del_node)(struct efc_lport *efc,
+			     struct efc_node_s *node, int reason);
+
+	/*Send ELS*/
+
+	void *(*els_send)(struct efc_lport *efc, struct efc_node_s *node,
+			  u32 cmd, u32 timeout_sec, u32 retries);
+
+	void *(*els_send_ct)(struct efc_lport *efc, struct efc_node_s *node,
+			     u32 cmd, u32 timeout_sec, u32 retries);
+
+	void *(*els_send_resp)(struct efc_lport *efc, struct efc_node_s *node,
+			       u32 cmd, u16 ox_id);
+
+	void *(*bls_send_acc_hdr)(struct efc_lport *efc, struct efc_node_s *n,
+				  struct fc_frame_header *hdr);
+	void *(*send_flogi_p2p_acc)(struct efc_lport *efc, struct efc_node_s *n,
+				    u32 ox_id, u32 s_id);
+
+	int (*send_ct_rsp)(struct efc_lport *efc, struct efc_node_s *node,
+			   __be16 ox_id, struct fc_ct_hdr *hdr,
+			   u32 rsp_code, u32 reason_code, u32 rsn_code_expl);
+
+	void *(*send_ls_rjt)(struct efc_lport *efc, struct efc_node_s *node,
+			     u32 ox, u32 rcode, u32 rcode_expl, u32 vendor);
+
+	int (*dispatch_fcp_cmd)(struct efc_node_s *node,
+				struct efc_hw_sequence_s *seq);
+
+	int (*recv_abts_frame)(struct efc_lport *efc, struct efc_node_s *node,
+			       struct efc_hw_sequence_s *seq);
+};
+
+#define EFC_LOG_LIB		0x01 /* General logging, not categorized */
+#define EFC_LOG_NODE		0x02 /* node layer logging */
+#define EFC_LOG_PORT		0x04 /* sport layer logging */
+#define EFC_LOG_DOMAIN		0x08 /* domain layer logging */
+#define EFC_LOG_ELS		0x10 /* ELS logging */
+#define EFC_LOG_DOMAIN_SM	0x20 /* domain state machine logging */
+#define EFC_LOG_SM		0x40 /* state machine logging */
+
+/**
+ * @brief efc library port structure
+ * @base:	pointer to host structure
+ * @req_wwpn:	wwpn requested by user for primary sport
+ * @req_wwnn:	wwnn requested by user for primary sport
+ * @nodes_count:number of allocated nodes
+ * @nodes:	array of pointers to nodes
+ * @nodes_free_list: linked list of free nodes
+ * @vport_list:	list of VPORTS (NPIV)
+ * @configured_link_state:requested link state
+ * @lock:	Device wide lock
+ * @domain_list:linked list of virtual fabric objects
+ * @domain:	pointer to first (physical) domain (also on domain_list)
+ * @domain_instance_count:domain instance count
+ * @domain_list_empty_cb:domain list empty callback
+ *
+ */
+struct efc_lport {
+	void *base;
+	struct pci_dev  *pcidev;
+	u64 req_wwpn;
+	u64 req_wwnn;
+
+	u64 def_wwpn;
+	u64 def_wwnn;
+	u64 max_xfer_size;
+	u32 nodes_count;
+	struct efc_node_s **nodes;
+	struct list_head nodes_free_list;
+
+	u32 link_status;
+
+	/* vport */
+	struct list_head vport_list;
+
+	struct libefc_function_template tt;
+	spinlock_t lock;
+
+	bool enable_ini;
+	bool enable_tgt;
+
+	u32 log_level;
+
+	struct efc_domain_s *domain;
+	void (*domain_free_cb)(struct efc_lport *efc, void *arg);
+	void *domain_free_cb_arg;
+
+	/*
+	 * tgt_rscn_delay - delay in kicking off RSCN processing
+	 * (nameserver queries) after receiving an RSCN on the
+	 * target. This prevents thrashing of nameserver
+	 * requests due to a huge burst of RSCNs received in a
+	 * short period of time
+	 * Note: this is only valid when target RSCN handling
+	 * is enabled -- see ctrlmask.
+	 */
+	time_t tgt_rscn_delay_msec;
+
+	/*
+	 * tgt_rscn_period - determines maximum frequency when
+	 * processing back-to-back
+	 * RSCNs; e.g. if this value is 30, there will never be any
+	 * more than 1 RSCN handling per 30s window. This prevents
+	 * initiators on a faulty link generating
+	 * many RSCN from causing the target to continually query the
+	 * nameserver.
+	 * Note:this is only valid when target RSCN handling is enabled
+	 */
+	time_t tgt_rscn_period_msec;
+
+	bool external_loopback;
+	u32 nodedb_mask;
+};
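
The two RSCN throttling knobs above are intended to be consumed by the
target discovery path; a minimal sketch of the period check is shown
below. The helper and its last_msec bookkeeping argument are
illustrative only and are not part of this patch:

	static bool efc_tgt_rscn_window_open(struct efc_lport *efc, u64 *last_msec)
	{
		u64 now = jiffies_to_msecs(jiffies);

		/* allow at most one RSCN processing pass per period */
		if (efc->tgt_rscn_period_msec &&
		    (now - *last_msec) < efc->tgt_rscn_period_msec)
			return false;

		*last_msec = now;
		return true;
	}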
+
+/*
+ * EFC library registration
+ * **********************************/
+int efcport_init(struct efc_lport *efc);
+void efcport_destroy(struct efc_lport *efc);
+/*
+ * EFC Domain
+ * **********************************/
+int efc_domain_cb(void *arg, int event, void *data);
+void efc_domain_force_free(struct efc_domain_s *domain);
+void
+efc_register_domain_free_cb(struct efc_lport *efc,
+			    void (*callback)(struct efc_lport *efc, void *arg),
+			    void *arg);
+
+/*
+ * EFC Local port
+ * **********************************/
+int efc_lport_cb(void *arg, int event, void *data);
+int8_t efc_vport_create_spec(struct efc_lport *efc, u64 wwnn,
+			     u64 wwpn, u32 fc_id, bool enable_ini,
+			     bool enable_tgt, void *tgt_data, void *ini_data);
+int efc_sport_vport_new(struct efc_domain_s *domain, u64 wwpn,
+			u64 wwnn, u32 fc_id, bool ini, bool tgt,
+			void *tgt_data, void *ini_data, bool restore_vport);
+int efc_sport_vport_del(struct efc_lport *efc, struct efc_domain_s *domain,
+			u64 wwpn, u64 wwnn);
+
+void efc_vport_del_all(struct efc_lport *efc);
+
+struct efc_sli_port_s *efc_sport_find(struct efc_domain_s *domain, u32 d_id);
+
+/*
+ * EFC Node
+ * **********************************/
+int efc_remote_node_cb(void *arg, int event, void *data);
+u64 efc_node_get_wwnn(struct efc_node_s *node);
+u64 efc_node_get_wwpn(struct efc_node_s *node);
+struct efc_node_s *efc_node_find(struct efc_sli_port_s *sport, u32 id);
+void efc_node_fcid_display(u32 fc_id, char *buffer, u32 buf_len);
+
+void efc_node_post_els_resp(struct efc_node_s *node, u32 evt, void *arg);
+void efc_node_post_shutdown(struct efc_node_s *node, u32 evt, void *arg);
+/*
+ * EFC FCP/ELS/CT interface
+ * **********************************/
+int efc_node_recv_abts_frame(struct efc_lport *efc,
+			     struct efc_node_s *node,
+			     struct efc_hw_sequence_s *seq);
+int
+efc_node_recv_els_frame(struct efc_node_s *node, struct efc_hw_sequence_s *s);
+int efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence_s *seq);
+
+int efc_node_dispatch_frame(void *arg, struct efc_hw_sequence_s *seq);
+
+int
+efc_node_recv_ct_frame(struct efc_node_s *node, struct efc_hw_sequence_s *seq);
+int
+efc_node_recv_fcp_cmd(struct efc_node_s *node, struct efc_hw_sequence_s *seq);
+int
+efc_node_recv_bls_no_sit(struct efc_node_s *node, struct efc_hw_sequence_s *s);
+
+/*
+ * EFC SCSI INTERACTION LAYER
+ * **********************************/
+void
+efc_scsi_del_initiator_complete(struct efc_lport *efc, struct efc_node_s *node);
+void
+efc_scsi_del_target_complete(struct efc_lport *efc, struct efc_node_s *node);
+void efc_scsi_io_list_empty(struct efc_lport *efc, struct efc_node_s *node);
+
+#endif /* __EFCLIB_H__ */
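
A minimal usage sketch of the registration interface above. The xport_*
names are illustrative; a real backend fills in the full
libefc_function_template before calling efcport_init():

	static void *xport_els_send(struct efc_lport *efc, struct efc_node_s *node,
				    u32 cmd, u32 timeout_sec, u32 retries)
	{
		/* illustrative transport hook; would build and post the ELS */
		return NULL;
	}

	static int xport_attach(struct efc_lport *efc, struct pci_dev *pdev)
	{
		efc->pcidev = pdev;
		efc->log_level = EFC_LOG_LIB | EFC_LOG_DOMAIN;
		efc->tt.els_send = xport_els_send;	/* remaining ops set likewise */

		return efcport_init(efc);
	}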
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 10/32] elx: libefc: FC Domain state machine interfaces
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (8 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 09/32] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 11/32] elx: libefc: SLI and FC PORT " James Smart
                   ` (22 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC Domain registration, allocation and deallocation sequence

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
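
The hw layer drives this state machine purely through efc_domain_cb();
a minimal sketch of a caller is shown below. The helper name and the
partial record initialization are illustrative only:

	static void xport_report_fabric_found(struct efc_lport *efc, u32 fcf_index,
					      const u8 *fcf_wwn)
	{
		struct efc_domain_record_s drec = {};

		drec.index = fcf_index;
		memcpy(drec.wwn, fcf_wwn, sizeof(u64));

		/* allocates efc->domain if needed and starts __efc_domain_init */
		efc_domain_cb(efc, EFC_HW_DOMAIN_FOUND, &drec);
	}

A later EFC_HW_DOMAIN_LOST is reported the same way, with the existing
struct efc_domain_s pointer passed as the data argument.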
 drivers/scsi/elx/libefc/efc_domain.c | 1393 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_domain.h |   57 ++
 2 files changed, 1450 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h

diff --git a/drivers/scsi/elx/libefc/efc_domain.c b/drivers/scsi/elx/libefc/efc_domain.c
new file mode 100644
index 000000000000..0e00512924c9
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.c
@@ -0,0 +1,1393 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * domain_sm Domain State Machine: States
+ */
+
+#include "efc.h"
+#include "efc_fabric.h"
+#include "efc_device.h"
+
+/**
+ * @brief Accept domain callback events from the HW.
+ *
+ * <h3 class="desc">Description</h3>
+ * HW calls this function with various domain-related events.
+ *
+ * @param arg Application-specified argument.
+ * @param event Domain event.
+ * @param data Event specific data.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+int
+efc_domain_cb(void *arg, int event, void *data)
+{
+	struct efc_lport *efc = arg;
+	struct efc_domain_s *domain = NULL;
+	int rc = 0;
+
+	if (event != EFC_HW_DOMAIN_FOUND)
+		domain = data;
+
+	switch (event) {
+	case EFC_HW_DOMAIN_FOUND: {
+		u64 fcf_wwn = 0;
+		struct efc_domain_record_s *drec = data;
+
+		/* extract the fcf_wwn */
+		fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn));
+
+		efc_log_debug(efc, "Domain allocated: wwn %016llX\n",
+			      fcf_wwn);
+		/*
+		 * lookup domain, or allocate a new one
+		 * if one doesn't exist already
+		 */
+		domain = efc->domain;
+		if (!domain) {
+			domain = efc_domain_alloc(efc, fcf_wwn);
+			if (!domain) {
+				efc_log_err(efc,
+					    "efc_domain_alloc() failed\n");
+				rc = -1;
+				break;
+			}
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+		}
+
+		if (fcf_wwn != domain->fcf_wwn) {
+			efc_log_err(efc, "evt: FOUND for existing domain\n");
+			efc_log_err(efc, "wwn:%016llX domain wwn:%016llX\n",
+				    fcf_wwn, domain->fcf_wwn);
+		}
+
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec);
+		break;
+	}
+
+	case EFC_HW_DOMAIN_LOST:
+		domain_trace(domain, "EFC_HW_DOMAIN_LOST:\n");
+		efc->tt.domain_hold_frames(efc, domain);
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_FAIL,
+				      NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ATTACH_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_FAIL:\n");
+		efc_domain_post_event(domain,
+				      EFC_EVT_DOMAIN_ATTACH_FAIL, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_FAIL, NULL);
+		break;
+
+	default:
+		efc_log_warn(efc, "unsupported event %#x\n", event);
+	}
+
+	return rc;
+}
+
+/**
+ * @brief Allocate a domain object.
+ *
+ * <h3 class="desc">Description</h3>
+ * A domain object is allocated and initialized. It is associated with the
+ * \c efc argument.
+ *
+ * @param efc Pointer to the EFC device.
+ * @param fcf_wwn FCF WWN of the domain.
+ *
+ * @return Returns a pointer to the struct efc_domain_s object; or NULL.
+ */
+
+struct efc_domain_s *
+efc_domain_alloc(struct efc_lport *efc, uint64_t fcf_wwn)
+{
+	struct efc_domain_s *domain;
+
+	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
+	if (domain) {
+		domain->efc = efc;
+		domain->drvsm.app = domain;
+
+		/* Allocate a sparse vector for sport FC_ID's */
+		domain->lookup = efc_spv_new(efc);
+		if (!domain->lookup) {
+			efc_log_err(efc, "efc_spv_new() failed\n");
+			kfree(domain);
+			return NULL;
+		}
+
+		INIT_LIST_HEAD(&domain->sport_list);
+		domain->fcf_wwn = fcf_wwn;
+		efc_log_debug(efc, "Domain allocated: wwn %016llX\n",
+			      domain->fcf_wwn);
+		efc->domain = domain;
+	} else {
+		efc_log_err(efc, "domain allocation failed\n");
+	}
+
+	return domain;
+}
+
+/**
+ * @brief Free a domain object.
+ *
+ * <h3 class="desc">Description</h3>
+ * The domain object is freed.
+ *
+ * @param domain Domain object to free.
+ *
+ * @return None.
+ */
+
+void
+efc_domain_free(struct efc_domain_s *domain)
+{
+	struct efc_lport *efc;
+
+	efc = domain->efc;
+
+	/* Hold frames to clear the domain pointer from the xport lookup */
+	efc->tt.domain_hold_frames(efc, domain);
+
+	efc_log_debug(efc, "Domain free: wwn %016llX\n",
+		      domain->fcf_wwn);
+
+	efc_spv_del(domain->lookup);
+	domain->lookup = NULL;
+	efc->domain = NULL;
+
+	if (efc->domain_free_cb)
+		(*efc->domain_free_cb)(efc, efc->domain_free_cb_arg);
+
+	kfree(domain);
+}
+
+/**
+ * @brief Free memory resources of a domain object.
+ *
+ * <h3 class="desc">Description</h3>
+ * After the domain object is freed, its child objects are also freed.
+ *
+ * @param domain Pointer to a domain object.
+ *
+ * @return None.
+ */
+
+void
+efc_domain_force_free(struct efc_domain_s *domain)
+{
+	struct efc_sli_port_s *sport;
+	struct efc_sli_port_s *next;
+	struct efc_lport *efc = domain->efc;
+
+	/* Shutdown domain sm */
+	efc_sm_disable(&domain->drvsm);
+
+	list_for_each_entry_safe(sport, next, &domain->sport_list, list_entry) {
+		efc_sport_force_free(sport);
+	}
+
+	efc->tt.hw_domain_force_free(efc, domain);
+	efc_domain_free(domain);
+}
+
+/**
+ * @brief Register a callback when the domain_list goes empty.
+ *
+ * <h3 class="desc">Description</h3>
+ * A function callback may be registered when the domain is freed.
+ *
+ * @param efc Pointer to a device object.
+ * @param callback Callback function.
+ * @param arg Callback argument.
+ *
+ * @return None.
+ */
+
+void
+efc_register_domain_free_cb(struct efc_lport *efc,
+			    void (*callback)(struct efc_lport *efc, void *arg),
+			    void *arg)
+{
+	efc->domain_free_cb = callback;
+	efc->domain_free_cb_arg = arg;
+	if (!efc->domain && callback)
+		(*callback)(efc, arg);
+}
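
A typical use of this hook is to block a teardown path until the domain
is gone; a minimal sketch follows. The xport_* wrappers are
illustrative and assume the shutdown itself has already been initiated
elsewhere:

	static void xport_domain_free_cb(struct efc_lport *efc, void *arg)
	{
		complete((struct completion *)arg);
	}

	static void xport_wait_domain_free(struct efc_lport *efc)
	{
		DECLARE_COMPLETION_ONSTACK(done);

		/* invoked immediately if no domain currently exists */
		efc_register_domain_free_cb(efc, xport_domain_free_cb, &done);
		wait_for_completion(&done);
	}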
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Common event handler.
+ *
+ * <h3 class="desc">Description</h3>
+ * Common/shared events are handled here for the domain state machine.
+ *
+ * @param funcname Function name text.
+ * @param ctx Domain state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+static void *
+__efc_domain_common(const char *funcname, struct efc_sm_ctx_s *ctx,
+		    enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_domain_s *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/*
+		 * this can arise if an FLOGI fails on the SPORT,
+		 * and the SPORT is shutdown
+		 */
+		break;
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Common shutdown.
+ *
+ * <h3 class="desc">Description</h3>
+ * Handles common shutdown events.
+ *
+ * @param funcname Function name text.
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+static void *
+__efc_domain_common_shutdown(const char *funcname, struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_domain_s *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+		break;
+	case EFC_EVT_DOMAIN_FOUND:
+		/* sm: / save drec, mark domain_found_pending */
+		memcpy(&domain->pending_drec, arg,
+		       sizeof(domain->pending_drec));
+		domain->domain_found_pending = true;
+		break;
+	case EFC_EVT_DOMAIN_LOST: /* clear drec available */
+		/* sm: / unmark domain_found_pending */
+		domain->domain_found_pending = false;
+		break;
+
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+#define std_domain_state_decl(...)\
+	struct efc_domain_s *domain = NULL;\
+	struct efc_lport *efc = NULL;\
+	\
+	efc_assert(ctx, NULL);\
+	efc_assert(ctx->app, NULL);\
+	domain = ctx->app;\
+	efc_assert(domain->efc, NULL);\
+	efc = domain->efc
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Initial state.
+ *
+ * <h3 class="desc">Description</h3>
+ * The initial state for a domain. Each domain is initialized to
+ * this state at start of day (SOD).
+ *
+ * @param ctx Domain state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_init(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt,
+		  void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		domain->attached = false;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND: {
+		u32	i;
+		struct efc_domain_record_s *drec = arg;
+		struct efc_sli_port_s *sport;
+
+		u64	my_wwnn = efc->req_wwnn;
+		u64	my_wwpn = efc->req_wwpn;
+		__be64		be_wwpn;
+
+		/*
+		 * For now, user must specify both port name and node name,
+		 * or we let firmware pick both (same as for vports).
+		 * do we want to allow setting only port name or
+		 * only node name?
+		 */
+		if (my_wwpn == 0 || my_wwnn == 0) {
+			efc_log_debug(efc,
+				      "using default hardware WWN configuration\n");
+			my_wwpn = efc->def_wwpn;
+			my_wwnn = efc->def_wwnn;
+		}
+
+		efc_log_debug(efc,
+			      "Creating base sport using WWPN %016llX WWNN %016llX\n",
+			      my_wwpn, my_wwnn);
+
+		/* Allocate a sport and transition to __efc_sport_allocated */
+		sport = efc_sport_alloc(domain, my_wwpn, my_wwnn, U32_MAX,
+					efc->enable_ini, efc->enable_tgt);
+
+		if (!sport) {
+			efc_log_err(efc, "efc_sport_alloc() failed\n");
+			break;
+		}
+		efc_sm_transition(&sport->sm, __efc_sport_allocated, NULL);
+
+		be_wwpn = cpu_to_be64(sport->wwpn);
+
+		/* allocate struct efc_sli_port_s object for local port
+		 * Note: drec->fc_id is ALPA from read_topology only if loop
+		 */
+		if (efc->tt.hw_port_alloc(efc, sport, NULL,
+					  (uint8_t *)&be_wwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			efc_sport_free(sport);
+			break;
+		}
+
+		/* initialize domain object */
+		domain->is_loop = drec->is_loop;
+
+		/*
+		 * If the loop position map includes ALPA == 0,
+		 * then we are in a public loop (NL_PORT)
+		 * Note that the first element of the loopmap[]
+		 * contains the count of elements, and if
+		 * ALPA == 0 is present, it will occupy the first
+		 * location after the count.
+		 */
+		domain->is_nlport = drec->map.loop[1] == 0x00;
+
+		if (!domain->is_loop) {
+			/* Initiate HW domain alloc */
+			if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
+				efc_log_err(efc,
+					    "Failed to initiate HW domain allocation\n");
+				break;
+			}
+			efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+			break;
+		}
+
+		efc_log_debug(efc, "%s fc_id=%#x speed=%d\n",
+			      drec->is_loop ?
+			      (domain->is_nlport ?
+			      "public-loop" : "loop") : "other",
+			      drec->fc_id, drec->speed);
+
+		sport->fc_id = drec->fc_id;
+		sport->topology = EFC_SPORT_TOPOLOGY_LOOP;
+		snprintf(sport->display_name, sizeof(sport->display_name),
+			 "s%06x", drec->fc_id);
+
+		if (efc->enable_ini) {
+			u32 count = drec->map.loop[0];
+
+			efc_log_debug(efc, "%d position map entries\n",
+				      count);
+			for (i = 1; i <= count; i++) {
+				if (drec->map.loop[i] != drec->fc_id) {
+					struct efc_node_s *node;
+
+					efc_log_debug(efc, "%#x -> %#x\n",
+						      drec->fc_id,
+						      drec->map.loop[i]);
+					node = efc_node_alloc(sport,
+							      drec->map.loop[i],
+							      false, true);
+					if (!node) {
+						efc_log_err(efc,
+							    "efc_node_alloc() failed\n");
+						break;
+					}
+					efc_node_transition(node,
+							    __efc_d_wait_loop,
+							    NULL);
+				}
+			}
+		}
+
+		/* Initiate HW domain alloc */
+		if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
+			efc_log_err(efc,
+				    "Failed to initiate HW domain allocation\n");
+			break;
+		}
+		efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+		break;
+	}
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
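
To make the loop position map handling above concrete, an illustrative
drec->map.loop[] for a small private loop (values are examples only):

	loop[0] = 3      count of ALPAs that follow
	loop[1] = 0x01   first ALPA; 0x00 here would mean a public loop (NL_PORT)
	loop[2] = 0x02
	loop[3] = 0xE8   this port's own ALPA (matches drec->fc_id)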
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Wait for the domain allocation to complete.
+ *
+ * <h3 class="desc">Description</h3>
+ * Waits for the domain state to be allocated. After the HW domain
+ * allocation process has been initiated, this state waits for
+ * that process to complete (i.e. a domain-alloc-ok event).
+ *
+ * @param ctx Domain state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_wait_alloc(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport;
+
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK: {
+		struct fc_els_flogi  *sp;
+
+		sport = domain->sport;
+		efc_assert(sport, NULL);
+		sp = (struct fc_els_flogi  *)sport->service_params;
+
+		/* Save the domain service parameters */
+		memcpy(domain->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+		memcpy(sport->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+
+		/*
+		 * Update the sport's service parameters,
+		 * user might have specified non-default names
+		 */
+		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
+
+		/*
+		 * Take the loop topology path,
+		 * unless we are an NL_PORT (public loop)
+		 */
+		if (domain->is_loop && !domain->is_nlport) {
+			/*
+			 * For loop, we already have our FC ID
+			 * and don't need fabric login.
+			 * Transition to the allocated state and
+			 * post an event to attach to
+			 * the domain. Note that this breaks the
+			 * normal action/transition
+			 * pattern here to avoid a race with the
+			 * domain attach callback.
+			 */
+			/* sm: is_loop / domain_attach */
+			efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+			__efc_domain_attach_internal(domain, sport->fc_id);
+			break;
+		}
+		{
+			struct efc_node_s *node;
+
+			/* alloc fabric node, send FLOGI */
+			node = efc_node_find(sport, FC_FID_FLOGI);
+			if (node) {
+				efc_log_err(efc,
+					    "Fabric Controller node already exists\n");
+				break;
+			}
+			node = efc_node_alloc(sport, FC_FID_FLOGI,
+					      false, false);
+			if (!node) {
+				efc_log_err(efc,
+					    "Error: efc_node_alloc() failed\n");
+			} else {
+				efc_node_transition(node,
+						    __efc_fabric_init, NULL);
+			}
+			/* Accept frames */
+			domain->req_accept_frames = true;
+		}
+		/* sm: / start fabric logins */
+		efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+		efc_log_err(efc, "%s recv'd waiting for DOMAIN_ALLOC_OK;",
+			    efc_sm_event_name(evt));
+		efc_log_err(efc, "shutting down domain\n");
+		domain->req_domain_free = true;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw_domain_alloc()\n",
+			efc_sm_event_name(evt));
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Wait for the domain attach request.
+ *
+ * <h3 class="desc">Description</h3>
+ * In this state, the domain has been allocated and is waiting for
+ * a domain attach request.
+ * The attach request comes from a node instance completing the fabric login,
+ * or from a point-to-point negotiation and login.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_allocated(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg)
+{
+	int rc = 0;
+
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		u32 fc_id;
+
+		efc_assert(arg, NULL);
+
+		fc_id = *((u32 *)arg);
+		efc_log_debug(efc, "Requesting hw domain attach fc_id x%x\n",
+			      fc_id);
+		/* Update sport lookup */
+		efc_spv_set(domain->lookup, fc_id, domain->sport);
+
+		/* Update display name for the sport */
+		efc_node_fcid_display(fc_id, domain->sport->display_name,
+				      sizeof(domain->sport->display_name));
+
+		/* Issue domain attach call */
+		rc = efc->tt.hw_domain_attach(efc, domain, fc_id);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_attach failed: %d\n",
+				    rc);
+			return NULL;
+		}
+		/* sm: / domain_attach */
+		efc_sm_transition(ctx, __efc_domain_wait_attach, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST: {
+		int rc;
+
+		efc_log_debug(efc,
+			      "%s received while in EFC_EVT_DOMAIN_REQ_ATTACH\n",
+			efc_sm_event_name(evt));
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to
+			 * wait state and send shutdown to each
+			 * sport
+			 */
+			struct efc_sli_port_s	*sport = NULL;
+			struct efc_sli_port_s	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "hw_domain_free failed: %d\n", rc);
+			}
+		}
+
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Wait for the HW domain attach to complete.
+ *
+ * <h3 class="desc">Description</h3>
+ * Waits for the HW domain attach to complete. Forwards attach ok event to the
+ * fabric node state machine.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_wait_attach(struct efc_sm_ctx_s *ctx,
+			 enum efc_sm_event_e evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_node_s *node = NULL;
+		struct efc_node_s *next_node = NULL;
+		struct efc_sli_port_s *sport;
+		struct efc_sli_port_s *next_sport;
+
+		/*
+		 * Set domain notify pending state to avoid
+		 * duplicate domain event post
+		 */
+		domain->domain_notify_pend = true;
+
+		/* Mark as attached */
+		domain->attached = true;
+
+		/* Register with SCSI API */
+		efc->tt.new_domain(efc, domain);
+
+		/* Transition to ready */
+		/* sm: / forward event to all sports and nodes */
+		efc_sm_transition(ctx, __efc_domain_ready, NULL);
+
+		/* We have an FCFI, so we can accept frames */
+		domain->req_accept_frames = true;
+
+		/*
+		 * Notify all nodes that the domain attach request
+		 * has completed
+		 * Note: sport will have already received notification
+		 * of sport attached as a result of the HW's port attach.
+		 */
+		list_for_each_entry_safe(sport, next_sport,
+					 &domain->sport_list, list_entry) {
+			list_for_each_entry_safe(node, next_node,
+						 &sport->node_list,
+						 list_entry) {
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		}
+		domain->domain_notify_pend = false;
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw attach\n",
+			      efc_sm_event_name(evt));
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		/*
+		 * Domain lost while waiting for an attach to complete,
+	 * go to a state that waits for the domain attach to
+		 * complete, then handle domain lost
+		 */
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH:
+		/*
+		 * In P2P we can get an attach request from
+		 * the other FLOGI path, so drop this one
+		 */
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Ready state.
+ *
+ * <h3 class="desc">Description</h3>
+ * This is a domain ready state.
+ * It waits for a domain-lost event, and initiates shutdown.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_ready(struct efc_sm_ctx_s *ctx,
+		   enum efc_sm_event_e evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		/* start any pending vports */
+		if (efc_vport_start(domain)) {
+			efc_log_debug(domain->efc,
+				      "efc_vport_start didn't start vports\n");
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_LOST: {
+		int rc;
+
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to wait state
+			 * and send shutdown to each sport
+			 */
+			struct efc_sli_port_s	*sport = NULL;
+			struct efc_sli_port_s	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "hw_domain_free failed: %d\n", rc);
+			}
+		}
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		/* can happen during p2p */
+		u32 fc_id;
+
+		fc_id = *((u32 *)arg);
+
+		/* Assume that the domain is attached */
+		efc_assert(domain->attached, NULL);
+
+		/*
+		 * Verify that the requested FC_ID
+		 * is the same as the one we're working with
+		 */
+		efc_assert(domain->sport->fc_id == fc_id, NULL);
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine:
+ * Wait for nodes to free prior to the domain shutdown.
+ *
+ * <h3 class="desc">Description</h3>
+ * All nodes are freed, and ready for a domain shutdown.
+ *
+ * @param ctx Remote node sm context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_wait_sports_free(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		int rc;
+
+		/* sm: / efc_hw_domain_free */
+		efc_sm_transition(ctx, __efc_domain_wait_shutdown, NULL);
+
+		/* Request efc_hw_domain_free and wait for completion */
+		rc = efc->tt.hw_domain_free(efc, domain);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_free() failed: %d\n",
+				    rc);
+		}
+		break;
+	}
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Complete the domain shutdown.
+ *
+ * <h3 class="desc">Description</h3>
+ * Waits for a HW domain free to complete.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_wait_shutdown(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_FREE_OK: {
+		efc->tt.del_domain(efc, domain);
+
+		/* sm: / domain_free */
+		if (domain->domain_found_pending) {
+			/*
+			 * save fcf_wwn and drec from this domain,
+			 * free current domain and allocate
+			 * a new one with the same fcf_wwn
+			 * could use a SLI-4 "re-register VPI"
+			 * operation here?
+			 */
+			u64 fcf_wwn = domain->fcf_wwn;
+			struct efc_domain_record_s drec = domain->pending_drec;
+
+			efc_log_debug(efc, "Reallocating domain\n");
+			domain->req_domain_free = true;
+			domain = efc_domain_alloc(efc, fcf_wwn);
+
+			if (!domain) {
+				efc_log_err(efc,
+					    "efc_domain_alloc() failed\n");
+				return NULL;
+			}
+			/*
+			 * got a new domain; at this point,
+			 * there are at least two domains
+			 * once the req_domain_free flag is processed,
+			 * the associated domain will be removed.
+			 */
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+			efc_sm_post_event(&domain->drvsm,
+					  EFC_EVT_DOMAIN_FOUND, &drec);
+		} else {
+			domain->req_domain_free = true;
+		}
+		break;
+	}
+
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup domain_sm
+ * @brief Domain state machine: Wait for the domain alloc/attach completion
+ * after receiving a domain lost.
+ *
+ * <h3 class="desc">Description</h3>
+ * This state is entered when receiving a domain lost
+ * while waiting for a domain alloc
+ * or a domain attach to complete.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_domain_wait_domain_lost(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK:
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		int rc;
+
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to
+			 * wait state and send shutdown to each sport
+			 */
+			struct efc_sli_port_s	*sport = NULL;
+			struct efc_sli_port_s	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "efc_hw_domain_free() failed: %d\n",
+									rc);
+			}
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_err(efc, "[domain] %-20s: failed\n",
+			    efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @brief Initiator domain attach. (internal call only)
+ *
+ * Assumes that the domain SM lock is already locked
+ *
+ * <h3 class="desc">Description</h3>
+ * The HW domain attach function is started.
+ *
+ * @param domain Pointer to the domain object.
+ * @param s_id FC_ID of which to register this domain.
+ *
+ * @return None.
+ */
+
+void
+__efc_domain_attach_internal(struct efc_domain_s *domain, u32 s_id)
+{
+	memcpy(domain->dma.virt,
+	       ((uint8_t *)domain->flogi_service_params) + 4,
+		   sizeof(struct fc_els_flogi) - 4);
+	(void)efc_sm_post_event(&domain->drvsm, EFC_EVT_DOMAIN_REQ_ATTACH,
+				 &s_id);
+}
+
+/**
+ * @brief Initiator domain attach.
+ *
+ * <h3 class="desc">Description</h3>
+ * The HW domain attach function is started.
+ *
+ * @param domain Pointer to the domain object.
+ * @param s_id FC_ID of which to register this domain.
+ *
+ * @return None.
+ */
+
+void
+efc_domain_attach(struct efc_domain_s *domain, u32 s_id)
+{
+	__efc_domain_attach_internal(domain, s_id);
+}
+
+int
+efc_domain_post_event(struct efc_domain_s *domain,
+		      enum efc_sm_event_e event, void *arg)
+{
+	int rc;
+	bool accept_frames;
+	bool req_domain_free;
+	struct efc_lport *efc = domain->efc;
+
+	rc = efc_sm_post_event(&domain->drvsm, event, arg);
+
+	req_domain_free = domain->req_domain_free;
+	domain->req_domain_free = false;
+
+	accept_frames = domain->req_accept_frames;
+	domain->req_accept_frames = false;
+
+	if (accept_frames)
+		efc->tt.domain_accept_frames(efc, domain);
+
+	if (req_domain_free)
+		efc_domain_free(domain);
+
+	return rc;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Dispatch unsolicited FC frame.
+ *
+ * <h3 class="desc">Description</h3>
+ * This function processes an unsolicited FC frame queued at the
+ * domain level.
+ *
+ * @param arg Pointer to efc object.
+ * @param seq Header/payload sequence buffers.
+ *
+ * @return Returns 0 if frame processed and RX buffers cleaned
+ * up appropriately, -1 if frame not handled.
+ */
+
+int
+efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence_s *seq)
+{
+	struct efc_domain_s *domain = (struct efc_domain_s *)arg;
+	struct efc_lport *efc = domain->efc;
+	struct fc_frame_header *hdr;
+	u32 s_id;
+	u32 d_id;
+	struct efc_node_s *node = NULL;
+	struct efc_sli_port_s *sport = NULL;
+	unsigned long flags = 0;
+
+	if (!seq->header || !seq->header->dma.virt || !seq->payload->dma.virt) {
+		efc_log_err(efc, "Sequence header or payload is null\n");
+		return -1;
+	}
+
+	hdr = seq->header->dma.virt;
+
+	/* extract the s_id and d_id */
+	s_id = ntoh24(hdr->fh_s_id);
+	d_id = ntoh24(hdr->fh_d_id);
+
+	sport = domain->sport;
+	if (!sport) {
+		efc_log_err(efc,
+			    "Drop frame, sport for FC ID 0x%06x is NULL", d_id);
+		return -1;
+	}
+
+	if (sport->fc_id != d_id) {
+		/* Not addressed to the physical port; look up the sport
+		 * associated with the NPIV port
+		 */
+		/* Look up without lock */
+		sport = efc_sport_find(domain, d_id);
+		if (!sport) {
+			if (hdr->fh_type == FC_TYPE_FCP) {
+				/* Drop frame */
+				efc_log_warn(efc,
+					     "unsolicited FCP frame with invalid d_id x%x\n",
+					d_id);
+				return -1;
+			}
+			/* p2p will use this case */
+			sport = domain->sport;
+		}
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	/* Lookup the node given the remote s_id */
+	node = efc_node_find(sport, s_id);
+
+	/* If not found, then create a new node */
+	if (!node) {
+		/* If this is solicited data or control based on R_CTL and
+		 * there is no node context,
+		 * then we can drop the frame
+		 */
+		if ((hdr->fh_r_ctl == FC_RCTL_DD_SOL_DATA) ||
+			(hdr->fh_r_ctl == FC_RCTL_DD_SOL_CTL)) {
+			efc_log_debug(efc,
+				      "solicited data/ctrl frame without node, drop\n");
+			spin_unlock_irqrestore(&efc->lock, flags);
+			return -1;
+		}
+
+		node = efc_node_alloc(sport, s_id, false, false);
+		if (!node) {
+			efc_log_err(efc, "efc_node_alloc() failed\n");
+			spin_unlock_irqrestore(&efc->lock, flags);
+			return -1;
+		}
+		/* don't send PLOGI on efc_d_init entry */
+		efc_node_init_device(node, false);
+	}
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	if (node->hold_frames || !list_empty(&node->pend_frames)) {
+		/* add frame to node's pending list */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &node->pend_frames);
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		return 0;
+	}
+
+	/* now dispatch frame to the node frame handler */
+	return efc_node_dispatch_frame(node, seq);
+}
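
Frames parked on node->pend_frames above must eventually be
re-dispatched in arrival order once hold_frames is cleared; a minimal
sketch of such a flush (the helper itself is not part of this patch):

	static void efc_node_flush_pend_frames(struct efc_node_s *node)
	{
		struct efc_hw_sequence_s *seq;
		unsigned long flags;

		for (;;) {
			spin_lock_irqsave(&node->pend_frames_lock, flags);
			seq = list_first_entry_or_null(&node->pend_frames,
						       struct efc_hw_sequence_s,
						       list_entry);
			if (seq)
				list_del(&seq->list_entry);
			spin_unlock_irqrestore(&node->pend_frames_lock, flags);

			if (!seq)
				break;

			efc_node_dispatch_frame(node, seq);
		}
	}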
+
+/**
+ * @ingroup unsol
+ * @brief Dispatch a frame.
+ *
+ * <h3 class="desc">Description</h3>
+ * A frame is dispatched from the \c node to the handler.
+ *
+ * @param arg Node that originated the frame.
+ * @param seq Header/payload sequence buffers.
+ *
+ * @return Returns 0 if frame processed and RX buffers cleaned
+ * up appropriately, -1 if frame not handled.
+ */
+int
+efc_node_dispatch_frame(void *arg, struct efc_hw_sequence_s *seq)
+{
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	u32 port_id;
+	struct efc_node_s *node = (struct efc_node_s *)arg;
+	int rc = -1;
+	int sit_set = 0;
+
+	struct efc_lport *efc = node->efc;
+
+	port_id = ntoh24(hdr->fh_s_id);
+	efc_assert(port_id == node->rnode.fc_id, -1);
+
+	if (!(ntoh24(hdr->fh_f_ctl) & FC_FC_END_SEQ)) {
+		node_printf(node,
+			    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
+		    cpu_to_be32(((u32 *)hdr)[0]),
+		    cpu_to_be32(((u32 *)hdr)[1]),
+		    cpu_to_be32(((u32 *)hdr)[2]),
+		    cpu_to_be32(((u32 *)hdr)[3]),
+		    cpu_to_be32(((u32 *)hdr)[4]),
+		    cpu_to_be32(((u32 *)hdr)[5]));
+		return rc;
+	}
+
+	/*if SIT is set */
+	if (ntoh24(hdr->fh_f_ctl) & FC_FC_SEQ_INIT)
+		sit_set = 1;
+
+	switch (hdr->fh_r_ctl) {
+	case FC_RCTL_ELS_REQ:
+	case FC_RCTL_ELS_REP:
+		if (sit_set)
+			rc = efc_node_recv_els_frame(node, seq);
+
+		/* convert success to non-zero so the caller releases the seq */
+		if (!rc)
+			rc = 2;
+		break;
+
+	case FC_RCTL_BA_ABTS:
+	case FC_RCTL_BA_ACC:
+	case FC_RCTL_BA_RJT:
+	case FC_RCTL_BA_NOP:
+		if (sit_set)
+			rc = efc->tt.recv_abts_frame(efc, node, seq);
+		else
+			rc = efc_node_recv_bls_no_sit(node, seq);
+		break;
+
+	case FC_RCTL_DD_UNSOL_CMD:
+	case FC_RCTL_DD_UNSOL_CTL:
+		switch (hdr->fh_type) {
+		case FC_TYPE_FCP:
+			if ((hdr->fh_r_ctl & 0xf) == FC_RCTL_DD_UNSOL_CMD) {
+				if (!node->fcp_enabled) {
+					rc = efc_node_recv_fcp_cmd(node, seq);
+					break;
+				}
+
+				if (sit_set) {
+					rc = efc->tt.dispatch_fcp_cmd(node,
+									seq);
+				} else {
+					node_printf(node,
+					   "Unsol cmd received with no SIT\n");
+				}
+			} else if ((hdr->fh_r_ctl & 0xf) ==
+							FC_RCTL_DD_SOL_DATA) {
+				node_printf(node,
+				    "solicited data received. Dropping IO\n");
+			}
+			break;
+		case FC_TYPE_CT:
+			if (sit_set)
+				rc = efc_node_recv_ct_frame(node, seq);
+			break;
+		default:
+			break;
+		}
+		break;
+	default:
+		efc_log_err(efc, "Unhandled frame rctl: %02x\n", hdr->fh_r_ctl);
+	}
+
+	return rc;
+}
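
Illustrative routing for a few common unsolicited frames handled above:

	PLOGI request : fh_r_ctl = FC_RCTL_ELS_REQ, SEQ_INIT set
	                -> efc_node_recv_els_frame()
	ABTS          : fh_r_ctl = FC_RCTL_BA_ABTS, SEQ_INIT set
	                -> efc->tt.recv_abts_frame()
	FCP command   : fh_r_ctl = FC_RCTL_DD_UNSOL_CMD, fh_type = FC_TYPE_FCP,
	                SEQ_INIT set and node->fcp_enabled
	                -> efc->tt.dispatch_fcp_cmd()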
diff --git a/drivers/scsi/elx/libefc/efc_domain.h b/drivers/scsi/elx/libefc/efc_domain.h
new file mode 100644
index 000000000000..fa07838e4240
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declare driver's domain handler exported interface
+ */
+
+#if !defined(__EFCT_DOMAIN_H__)
+#define __EFCT_DOMAIN_H__
+
+#define SLI4_MAX_FCFI 64
+extern int
+efc_domain_init(struct efc_lport *efc, struct efc_domain_s *domain);
+extern struct efc_domain_s *
+efc_domain_find(struct efc_lport *efc, uint64_t fcf_wwn);
+extern struct efc_domain_s *
+efc_domain_alloc(struct efc_lport *efc, uint64_t fcf_wwn);
+extern void
+efc_domain_free(struct efc_domain_s *domain);
+
+extern void *
+__efc_domain_init(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_domain_wait_alloc(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_domain_allocated(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_domain_wait_attach(struct efc_sm_ctx_s *ctx,
+			 enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_domain_ready(struct efc_sm_ctx_s *ctx,
+		   enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_domain_wait_sports_free(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_domain_wait_shutdown(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_domain_wait_domain_lost(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg);
+
+extern void
+efc_domain_attach(struct efc_domain_s *domain, u32 s_id);
+extern int
+efc_domain_post_event(struct efc_domain_s *domain,
+		      enum efc_sm_event_e event, void *arg);
+extern void
+__efc_domain_attach_internal(struct efc_domain_s *domain, u32 s_id);
+
+#endif /* __EFCT_DOMAIN_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 11/32] elx: libefc: SLI and FC PORT state machine interfaces
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (9 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 10/32] elx: libefc: FC Domain state machine interfaces James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 12/32] elx: libefc: Remote node " James Smart
                   ` (21 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- SLI and FC port (aka n_port_id) registration, allocation and
  deallocation.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
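
NPIV ports are created through the interfaces added here; a minimal
sketch of a caller (the wrapper name is illustrative):

	static int xport_new_tgt_vport(struct efc_lport *efc, u64 wwpn, u64 wwnn)
	{
		/* U32_MAX lets the fabric assign the FC_ID; target role only */
		return efc_sport_vport_new(efc->domain, wwpn, wwnn, U32_MAX,
					   false, true, NULL, NULL, false);
	}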
 drivers/scsi/elx/libefc/efc_sport.c | 1157 +++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_sport.h |   52 ++
 2 files changed, 1209 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.h

diff --git a/drivers/scsi/elx/libefc/efc_sport.c b/drivers/scsi/elx/libefc/efc_sport.c
new file mode 100644
index 000000000000..60b60212fc82
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sport.c
@@ -0,0 +1,1157 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Details SLI port (sport) functions.
+ */
+
+#include "efc.h"
+#include "efc_fabric.h"
+#include "efc_device.h"
+
+static void efc_vport_update_spec(struct efc_sli_port_s *sport);
+static void efc_vport_link_down(struct efc_sli_port_s *sport);
+
+/*!
+ *@defgroup sport_sm SLI Port (sport) State Machine: States
+ */
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port HW callback.
+ *
+ * @par Description
+ * This function is called in response to a HW sport event.
+ * This code resolves
+ * the reference to the sport object, and posts the corresponding event.
+ *
+ * @param arg Pointer to the EFC context.
+ * @param event HW sport event.
+ * @param data Application-specific event (pointer to the sport).
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efc_lport_cb(void *arg, int event, void *data)
+{
+	struct efc_lport *efc = arg;
+	struct efc_sli_port_s *sport = data;
+
+	switch (event) {
+	case EFC_HW_PORT_ALLOC_OK:
+		efc_log_debug(efc, "EFC_HW_PORT_ALLOC_OK\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ALLOC_OK, NULL);
+		break;
+	case EFC_HW_PORT_ALLOC_FAIL:
+		efc_log_debug(efc, "EFC_HW_PORT_ALLOC_FAIL\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ALLOC_FAIL, NULL);
+		break;
+	case EFC_HW_PORT_ATTACH_OK:
+		efc_log_debug(efc, "EFC_HW_PORT_ATTACH_OK\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ATTACH_OK, NULL);
+		break;
+	case EFC_HW_PORT_ATTACH_FAIL:
+		efc_log_debug(efc, "EFC_HW_PORT_ATTACH_FAIL\n");
+		efc_sm_post_event(&sport->sm,
+				  EFC_EVT_SPORT_ATTACH_FAIL, NULL);
+		break;
+	case EFC_HW_PORT_FREE_OK:
+		efc_log_debug(efc, "EFC_HW_PORT_FREE_OK\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_FREE_OK, NULL);
+		break;
+	case EFC_HW_PORT_FREE_FAIL:
+		efc_log_debug(efc, "EFC_HW_PORT_FREE_FAIL\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_FREE_FAIL, NULL);
+		break;
+	default:
+		efc_log_test(efc, "unknown event %#x\n", event);
+	}
+
+	return 0;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Allocate a SLI port object.
+ *
+ * @par Description
+ * A sport object is allocated and associated with the domain. Various
+ * structure members are initialized.
+ *
+ * @param domain Pointer to the domain structure.
+ * @param wwpn World wide port name in host endian.
+ * @param wwnn World wide node name in host endian.
+ * @param fc_id Port ID of the sport; may be specified, or
+ *              use U32_MAX to let the fabric choose.
+ * @param enable_ini Enables initiator capability on this
+ *                   port when true.
+ * @param enable_tgt Enables target capability on this
+ *                   port when true.
+ *
+ * @return Pointer to an struct efc_sli_port_s object; or NULL.
+ */
+
+struct efc_sli_port_s *
+efc_sport_alloc(struct efc_domain_s *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt)
+{
+	struct efc_sli_port_s *sport;
+
+	if (domain->efc->enable_ini)
+		enable_ini = 0;
+
+	/* Return a failure if this sport has already been allocated */
+	if (wwpn != 0) {
+		sport = efc_sport_find_wwn(domain, wwnn, wwpn);
+		if (sport) {
+			efc_log_err(domain->efc,
+				    "Failed: SPORT %016llX %016llX already allocated\n",
+				    wwnn, wwpn);
+			return NULL;
+		}
+	}
+
+	sport = kzalloc(sizeof(*sport), GFP_ATOMIC);
+	if (sport) {
+		sport->efc = domain->efc;
+		snprintf(sport->display_name, sizeof(sport->display_name),
+			 "------");
+		sport->domain = domain;
+		sport->lookup = efc_spv_new(domain->efc);
+		sport->instance_index = domain->sport_instance_count++;
+		INIT_LIST_HEAD(&sport->node_list);
+		sport->sm.app = sport;
+		sport->enable_ini = enable_ini;
+		sport->enable_tgt = enable_tgt;
+		sport->enable_rscn = (sport->enable_ini ||
+				     (sport->enable_tgt &&
+				      enable_target_rscn(sport->efc)));
+
+		/* Copy service parameters from domain */
+		memcpy(sport->service_params, domain->service_params,
+		       sizeof(struct fc_els_flogi));
+
+		/* Update requested fc_id */
+		sport->fc_id = fc_id;
+
+		/* Update the sport's service parameters for the new wwn's */
+		sport->wwpn = wwpn;
+		sport->wwnn = wwnn;
+		snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
+			 "%016llX", wwnn);
+
+		/*
+		 * if this is the "first" sport of the domain,
+		 * then make it the "phys" sport
+		 */
+		if (list_empty(&domain->sport_list))
+			domain->sport = sport;
+
+		INIT_LIST_HEAD(&sport->list_entry);
+		list_add_tail(&sport->list_entry, &domain->sport_list);
+
+		efc_log_debug(domain->efc, "[%s] allocate sport\n",
+			      sport->display_name);
+	}
+	return sport;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Free a SLI port object.
+ *
+ * @par Description
+ * The sport object is freed.
+ *
+ * @param sport Pointer to the SLI port object.
+ *
+ * @return None.
+ */
+
+void
+efc_sport_free(struct efc_sli_port_s *sport)
+{
+	struct efc_domain_s *domain;
+	bool post_all_free = false;
+
+	if (sport) {
+		domain = sport->domain;
+		efc_log_debug(domain->efc, "[%s] free sport\n",
+			      sport->display_name);
+		list_del(&sport->list_entry);
+		/*
+		 * if this is the physical sport,
+		 * then clear it out of the domain
+		 */
+		if (sport == domain->sport)
+			domain->sport = NULL;
+
+		efc_spv_del(sport->lookup);
+		sport->lookup = NULL;
+
+		/*
+		 * Remove the sport from the domain's
+		 * sparse vector lookup table
+		 */
+		efc_spv_set(domain->lookup, sport->fc_id, NULL);
+
+		/*
+		 * If the domain's sport_list is empty,
+		 * then post the ALL_NODES_FREE event to the domain,
+		 * after the lock is released. The domain may be
+		 * free'd as a result of the event.
+		 */
+		if (list_empty(&domain->sport_list))
+			post_all_free = true;
+
+		if (post_all_free) {
+			efc_domain_post_event(domain,
+					      EFC_EVT_ALL_CHILD_NODES_FREE,
+					      NULL);
+		}
+
+		kfree(sport);
+	}
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Free memory resources of a SLI port object.
+ *
+ * @par Description
+ * After the sport object is freed, its child objects are freed.
+ *
+ * @param sport Pointer to the SLI port object.
+ *
+ * @return None.
+ */
+
+void
+efc_sport_force_free(struct efc_sli_port_s *sport)
+{
+	struct efc_node_s *node;
+	struct efc_node_s *next;
+
+	/* shutdown sm processing */
+	efc_sm_disable(&sport->sm);
+
+	list_for_each_entry_safe(node, next, &sport->node_list, list_entry) {
+		efc_node_force_free(node);
+	}
+
+	efc_sport_free(sport);
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Find a SLI port object, given an FC_ID.
+ *
+ * @par Description
+ * Returns a pointer to the sport object, given an FC_ID.
+ *
+ * @param domain Pointer to the domain.
+ * @param d_id FC_ID to find.
+ *
+ * @return Returns a pointer to the struct efc_sli_port_s; or NULL.
+ */
+
+struct efc_sli_port_s *
+efc_sport_find(struct efc_domain_s *domain, u32 d_id)
+{
+	struct efc_sli_port_s *sport;
+
+	if (!domain->lookup) {
+		efc_log_test(domain->efc,
+			     "assertion failed: domain->lookup is not valid\n");
+		return NULL;
+	}
+
+	sport = efc_spv_get(domain->lookup, d_id);
+	return sport;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Find a SLI port, given the WWNN and WWPN.
+ *
+ * @par Description
+ * Return a pointer to a sport, given the WWNN and WWPN.
+ *
+ * @param domain Pointer to the domain.
+ * @param wwnn World wide node name.
+ * @param wwpn World wide port name.
+ *
+ * @return Returns a pointer to a SLI port, if found; or NULL.
+ */
+
+struct efc_sli_port_s *
+efc_sport_find_wwn(struct efc_domain_s *domain, uint64_t wwnn, uint64_t wwpn)
+{
+	struct efc_sli_port_s *sport = NULL;
+
+	list_for_each_entry(sport, &domain->sport_list, list_entry) {
+		if (sport->wwnn == wwnn && sport->wwpn == wwpn)
+			return sport;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Request a SLI port attach.
+ *
+ * @par Description
+ * External call to request an attach for a sport, given an FC_ID.
+ *
+ * @param sport Pointer to the sport context.
+ * @param fc_id FC_ID of which to attach.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efc_sport_attach(struct efc_sli_port_s *sport, u32 fc_id)
+{
+	int rc;
+	struct efc_node_s *node;
+	struct efc_lport *efc = sport->efc;
+
+	/* Set our lookup */
+	efc_spv_set(sport->domain->lookup, fc_id, sport);
+
+	/* Update our display_name */
+	efc_node_fcid_display(fc_id, sport->display_name,
+			      sizeof(sport->display_name));
+
+	list_for_each_entry(node, &sport->node_list, list_entry) {
+		efc_node_update_display_name(node);
+	}
+
+	efc_log_debug(sport->efc, "[%s] attach sport: fc_id x%06x\n",
+		      sport->display_name, fc_id);
+
+	rc = efc->tt.hw_port_attach(efc, sport, fc_id);
+	if (rc != EFC_HW_RTN_SUCCESS) {
+		efc_log_err(sport->efc,
+			    "efc_hw_port_attach failed: %d\n", rc);
+		return -1;
+	}
+	return 0;
+}
+
+static void
+efc_sport_shutdown(struct efc_sli_port_s *sport)
+{
+	struct efc_lport *efc = sport->efc;
+	struct efc_node_s *node;
+	struct efc_node_s *node_next;
+
+	list_for_each_entry_safe(node, node_next,
+				 &sport->node_list, list_entry) {
+		if (node->rnode.fc_id != FC_FID_FLOGI ||
+		    !sport->is_vport) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+			continue;
+		}
+
+		/*
+		 * If this is a vport, logout of the fabric
+		 * controller so that it deletes the vport
+		 * on the switch.
+		 */
+		/* if link is down, don't send logo */
+		if (efc->link_status == EFC_LINK_STATUS_DOWN) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		} else {
+			efc_log_debug(efc,
+				      "[%s] sport shutdown vport, sending logo to node\n",
+				      node->display_name);
+
+			if (efc->tt.els_send(efc, node, ELS_LOGO,
+					     EFC_FC_FLOGI_TIMEOUT_SEC,
+					EFC_FC_ELS_DEFAULT_RETRIES)) {
+				/* sent LOGO, wait for response */
+				efc_node_transition(node,
+						    __efc_d_wait_logo_rsp,
+						     NULL);
+				continue;
+			}
+
+			/*
+			 * failed to send LOGO,
+			 * go ahead and cleanup node anyways
+			 */
+			node_printf(node, "Failed to send LOGO\n");
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+		}
+	}
+}
+
+/**
+ * @brief SLI port state machine: Common event handler.
+ *
+ * @par Description
+ * Handle common sport events.
+ *
+ * @param funcname Function name to display.
+ * @param ctx Sport state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+static void *
+__efc_sport_common(const char *funcname, struct efc_sm_ctx_s *ctx,
+		   enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+	struct efc_domain_s *domain = sport->domain;
+	struct efc_lport *efc = sport->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		break;
+	case EFC_EVT_SPORT_ATTACH_OK:
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN: {
+		int node_list_empty;
+
+		/* Flag this sport as shutting down */
+		sport->shutting_down = true;
+
+		if (sport->is_vport)
+			efc_vport_link_down(sport);
+
+		node_list_empty = list_empty(&sport->node_list);
+
+		if (node_list_empty) {
+			/* sm: node list is empty / efc_hw_port_free */
+			/*
+			 * Remove the sport from the domain's
+			 * sparse vector lookup table
+			 */
+			efc_spv_set(domain->lookup, sport->fc_id, NULL);
+			efc_sm_transition(ctx, __efc_sport_wait_port_free,
+					  NULL);
+			if (efc->tt.hw_port_free(efc, sport)) {
+				efc_log_test(sport->efc,
+					     "efc_hw_port_free failed\n");
+				/* Not much we can do, free the sport anyways */
+				efc_sport_free(sport);
+			}
+		} else {
+			/* sm: node list is not empty / shutdown nodes */
+			efc_sm_transition(ctx,
+					  __efc_sport_wait_shutdown, NULL);
+			efc_sport_shutdown(sport);
+		}
+		break;
+	}
+	default:
+		efc_log_test(sport->efc, "[%s] %-20s %-20s not handled\n",
+			     sport->display_name, funcname,
+			     efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port state machine: Physical sport allocated.
+ *
+ * @par Description
+ * This is the initial state for sport objects.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_sport_allocated(struct efc_sm_ctx_s *ctx,
+		      enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+	struct efc_domain_s *domain = sport->domain;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	/* the physical sport is attached */
+	case EFC_EVT_SPORT_ATTACH_OK:
+		efc_assert(sport == domain->sport, NULL);
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+
+	case EFC_EVT_SPORT_ALLOC_OK:
+		/* ignore */
+		break;
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port state machine: Handle initial virtual port events.
+ *
+ * @par Description
+ * This state is entered when a virtual port is instantiated,
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_sport_vport_init(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+	struct efc_lport *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		__be64 be_wwpn = cpu_to_be64(sport->wwpn);
+
+		if (sport->wwpn == 0)
+			efc_log_debug(efc, "vport: letting f/w select WWN\n");
+
+		if (sport->fc_id != U32_MAX) {
+			efc_log_debug(efc, "vport: hard coding port id: %x\n",
+				      sport->fc_id);
+		}
+
+		efc_sm_transition(ctx, __efc_sport_vport_wait_alloc, NULL);
+		/* If wwpn is zero, then we'll let the f/w pick the WWN */
+		if (efc->tt.hw_port_alloc(efc, sport, sport->domain,
+					  sport->wwpn == 0 ? NULL :
+					  (uint8_t *)&be_wwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			break;
+		}
+
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port state machine:
+ * Wait for the HW SLI port allocation to complete.
+ *
+ * @par Description
+ * Waits for the HW sport allocation request to complete.
+ *
+ * @param ctx SLI port state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_sport_vport_wait_alloc(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+	struct efc_lport *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ALLOC_OK: {
+		struct fc_els_flogi *sp;
+		struct efc_node_s *fabric;
+
+		sp = (struct fc_els_flogi *)sport->service_params;
+		/*
+		 * If we let the f/w assign WWNs, then update the sport
+		 * WWNs with those returned by the hw
+		 */
+		if (sport->wwnn == 0) {
+			sport->wwnn = be64_to_cpu(sport->sli_wwnn);
+			sport->wwpn = be64_to_cpu(sport->sli_wwpn);
+			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
+				 "%016llX", sport->wwpn);
+		}
+
+		/* Update the sport's service parameters */
+		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
+
+		/*
+		 * if sport->fc_id is uninitialized,
+		 * then request that the fabric node use FDISC
+		 * to find an fc_id.
+		 * Otherwise we're restoring vports, or we're in
+		 * fabric emulation mode, so attach the fc_id
+		 */
+		if (sport->fc_id == U32_MAX) {
+			fabric = efc_node_alloc(sport, FC_FID_FLOGI, false,
+						false);
+			if (!fabric) {
+				efc_log_err(efc, "efc_node_alloc() failed\n");
+				return NULL;
+			}
+			efc_node_transition(fabric, __efc_vport_fabric_init,
+					    NULL);
+		} else {
+			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
+				 "%016llX", sport->wwpn);
+			efc_sport_attach(sport, sport->fc_id);
+		}
+		efc_sm_transition(ctx, __efc_sport_vport_allocated, NULL);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port state machine: virtual sport allocated.
+ *
+ * @par Description
+ * This state is entered after the sport is allocated;
+ * it then waits for a fabric node
+ * FDISC to complete, which requests a sport attach.
+ * The sport attach complete is handled in this state.
+ *
+ * @param ctx SLI port state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_sport_vport_allocated(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+	struct efc_lport *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ATTACH_OK: {
+		struct efc_node_s *node;
+
+		/* Find our fabric node, and forward this event */
+		node = efc_node_find(sport, FC_FID_FLOGI);
+		if (!node) {
+			efc_log_test(efc, "can't find node %06x\n",
+				     FC_FID_FLOGI);
+			break;
+		}
+		/* sm: / forward sport attach to fabric node */
+		efc_node_post_event(node, evt, NULL);
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port state machine: Attached.
+ *
+ * @par Description
+ * State entered after the sport attach has completed.
+ *
+ * @param ctx SLI port state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_sport_attached(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+	struct efc_lport *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		struct efc_node_s *node;
+
+		efc_log_debug(efc,
+			      "[%s] SPORT attached WWPN %016llX WWNN %016llX\n",
+			      sport->display_name,
+			      sport->wwpn, sport->wwnn);
+
+		list_for_each_entry(node, &sport->node_list, list_entry) {
+			efc_node_update_display_name(node);
+		}
+
+		sport->tgt_id = sport->fc_id;
+
+		efc->tt.new_sport(efc, sport);
+
+		/*
+		 * Update the vport parameters
+		 * (if it's not the physical sport)
+		 */
+		if (sport->is_vport)
+			efc_vport_update_spec(sport);
+		break;
+	}
+
+	case EFC_EVT_EXIT:
+		efc_log_debug(efc,
+			      "[%s] SPORT detached WWPN %016llX WWNN %016llX\n",
+			      sport->display_name,
+			      sport->wwpn, sport->wwnn);
+
+		efc->tt.del_sport(efc, sport);
+		break;
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port state machine: Wait for the node shutdowns to complete.
+ *
+ * @par Description
+ * Waits for the ALL_CHILD_NODES_FREE event to be posted from the node
+ * shutdown process.
+ *
+ * @param ctx SLI port state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_sport_wait_shutdown(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+	struct efc_domain_s *domain = sport->domain;
+	struct efc_lport *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ALLOC_OK:
+	case EFC_EVT_SPORT_ALLOC_FAIL:
+	case EFC_EVT_SPORT_ATTACH_OK:
+	case EFC_EVT_SPORT_ATTACH_FAIL:
+		/* ignore these events - just wait for the all free event */
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		/*
+		 * Remove the sport from the domain's
+		 * sparse vector lookup table
+		 */
+		efc_spv_set(domain->lookup, sport->fc_id, NULL);
+		efc_sm_transition(ctx, __efc_sport_wait_port_free, NULL);
+		if (efc->tt.hw_port_free(efc, sport)) {
+			efc_log_err(sport->efc, "efc_hw_port_free failed\n");
+			/* Not much we can do, free the sport anyway */
+			efc_sport_free(sport);
+		}
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief SLI port state machine: Wait for the HW's port free to complete.
+ *
+ * @par Description
+ * Waits for the HW's port free to complete.
+ *
+ * @param ctx SLI port state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_sport_wait_port_free(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_sli_port_s *sport = ctx->app;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ATTACH_OK:
+		/* Ignore as we are waiting for the free CB */
+		break;
+	case EFC_EVT_SPORT_FREE_OK: {
+		/* All done, free myself */
+		/* sm: / efc_sport_free */
+		efc_sport_free(sport);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Start the vports on a domain
+ *
+ * @par Description
+ * Use the vport specification to find the associated vports and start them.
+ *
+ * @param domain Pointer to the domain context.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+int
+efc_vport_start(struct efc_domain_s *domain)
+{
+	struct efc_lport *efc = domain->efc;
+	struct efc_vport_spec_s *vport;
+	struct efc_vport_spec_s *next;
+	struct efc_sli_port_s *sport;
+	int rc = 0;
+	u8 found = false;
+
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (!vport->sport) {
+			found = true;
+			break;
+		}
+	}
+
+	/* Allocate a sport */
+	if (found && vport) {
+		sport = efc_sport_alloc(domain, vport->wwpn,
+					vport->wwnn, vport->fc_id,
+					vport->enable_ini,
+					vport->enable_tgt);
+		vport->sport = sport;
+		if (!sport) {
+			rc = -1;
+		} else {
+			sport->is_vport = true;
+			sport->tgt_data = vport->tgt_data;
+			sport->ini_data = vport->ini_data;
+
+			/* Transition to vport_init */
+			efc_sm_transition(&sport->sm, __efc_sport_vport_init,
+					  NULL);
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Clear the sport reference in the vport specification.
+ *
+ * @par Description
+ * Clear the sport pointer on the vport specification when
+ * the vport is torn down. This allows it to be
+ * re-created when the link is re-established.
+ *
+ * @param sport Pointer to the sport context.
+ */
+static void
+efc_vport_link_down(struct efc_sli_port_s *sport)
+{
+	struct efc_lport *efc = sport->efc;
+	struct efc_vport_spec_s *vport;
+
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->sport == sport) {
+			vport->sport = NULL;
+			break;
+		}
+	}
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Allocate a new virtual SLI port.
+ *
+ * @par Description
+ * A new sport is created, in response to an external management request.
+ *
+ * @n @b Note: If the WWPN is zero, the firmware will assign the WWNs.
+ *
+ * @param domain Pointer to the domain context.
+ * @param wwpn World wide port name.
+ * @param wwnn World wide node name
+ * @param fc_id Requested port ID (used in fabric emulation mode).
+ * @param ini TRUE if the port is created as an initiator node.
+ * @param tgt TRUE if the port is created as a target node.
+ * @param tgt_data Pointer to target specific data
+ * @param ini_data Pointer to initiator specific data
+ * @param restore_vport If TRUE, then the vport will be re-created automatically
+ *                      on link disruption.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+int
+efc_sport_vport_new(struct efc_domain_s *domain, uint64_t wwpn, uint64_t wwnn,
+		    u32 fc_id, bool ini, bool tgt, void *tgt_data,
+		    void *ini_data, bool restore_vport)
+{
+	struct efc_sli_port_s *sport;
+
+	if (ini && domain->efc->enable_ini == 0) {
+		efc_log_test(domain->efc,
+			     "driver initiator functionality not enabled\n");
+		return -1;
+	}
+
+	if (tgt && domain->efc->enable_tgt == 0) {
+		efc_log_test(domain->efc,
+			     "driver target functionality not enabled\n");
+		return -1;
+	}
+
+	/*
+	 * Create a vport spec if we need to recreate
+	 * this vport after a link up event
+	 */
+	if (restore_vport) {
+		if (efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id,
+					  ini, tgt, tgt_data, ini_data)) {
+			efc_log_test(domain->efc,
+				     "failed to create vport object entry\n");
+			return -1;
+		}
+		return efc_vport_start(domain);
+	}
+
+	/* Allocate a sport */
+	sport = efc_sport_alloc(domain, wwpn, wwnn, fc_id, ini, tgt);
+
+	if (!sport)
+		return -1;
+
+	sport->is_vport = true;
+	sport->tgt_data = tgt_data;
+	sport->ini_data = ini_data;
+
+	/* Transition to vport_init */
+	efc_sm_transition(&sport->sm, __efc_sport_vport_init, NULL);
+
+	return 0;
+}
+
+/**
+ * @ingroup sport_sm
+ * @brief Remove a previously-allocated virtual port.
+ *
+ * @par Description
+ * A previously-allocated virtual port is removed by
+ * posting the shutdown event to the
+ * sport with a matching WWN.
+ *
+ * @param efc Pointer to the device object.
+ * @param domain Pointer to the domain structure (may be NULL).
+ * @param wwpn World wide port name of the port to delete (host endian).
+ * @param wwnn World wide node name of the port to delete (host endian).
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efc_sport_vport_del(struct efc_lport *efc, struct efc_domain_s *domain,
+		    u64 wwpn, uint64_t wwnn)
+{
+	struct efc_sli_port_s *sport;
+	int found = 0;
+	struct efc_vport_spec_s *vport;
+	struct efc_vport_spec_s *next;
+
+	/* walk the efc_vport_list and remove from there */
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (vport->wwpn == wwpn && vport->wwnn == wwnn) {
+			list_del(&vport->list_entry);
+			kfree(vport);
+			break;
+		}
+	}
+
+	if (!domain) {
+		/* No domain means no sport to look for */
+		return 0;
+	}
+
+	list_for_each_entry(sport, &domain->sport_list, list_entry) {
+		if (sport->wwpn == wwpn && sport->wwnn == wwnn) {
+			found = 1;
+			break;
+		}
+	}
+
+	if (found) {
+		/* Shutdown this SPORT */
+		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
+	}
+	return 0;
+}
+
+/**
+ * @brief Force free all saved vports.
+ *
+ * @par Description
+ * Delete all device vports.
+ *
+ * @param efc Pointer to the device object.
+ *
+ * @return None.
+ */
+
+void
+efc_vport_del_all(struct efc_lport *efc)
+{
+	struct efc_vport_spec_s *vport;
+	struct efc_vport_spec_s *next;
+
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		list_del(&vport->list_entry);
+		kfree(vport);
+	}
+}
+
+/**
+ * @brief Save the virtual port's parameters.
+ *
+ * @par Description
+ * The information required to restore a virtual port is saved.
+ *
+ * @param sport Pointer to the sport context.
+ *
+ * @return None.
+ */
+
+static void
+efc_vport_update_spec(struct efc_sli_port_s *sport)
+{
+	struct efc_lport *efc = sport->efc;
+	struct efc_vport_spec_s *vport;
+
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->sport == sport) {
+			vport->wwnn = sport->wwnn;
+			vport->wwpn = sport->wwpn;
+			vport->tgt_data = sport->tgt_data;
+			vport->ini_data = sport->ini_data;
+			break;
+		}
+	}
+}
+
+/**
+ * @brief Create a saved vport entry.
+ *
+ * @par Description
+ * A saved vport entry is added to the vport list,
+ * which is restored following a link up.
+ * This function is used to allow vports to be created the first time
+ * the link comes up without having to go through the ioctl() API.
+ *
+ * @param efc Pointer to device context.
+ * @param wwnn World wide node name (may be zero for auto-select).
+ * @param wwpn World wide port name (may be zero for auto-select).
+ * @param fc_id Requested port ID (used in fabric emulation mode).
+ * @param enable_ini TRUE if vport is to be an initiator port.
+ * @param enable_tgt TRUE if vport is to be a target port.
+ * @param tgt_data Pointer to target specific data.
+ * @param ini_data Pointer to initiator specific data.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
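+/*
+ * Illustrative usage (not part of this function; names and values are
+ * hypothetical): a target back-end that wants a vport recreated on every
+ * link up could do roughly the following, letting the firmware pick the
+ * WWNs and the fabric assign the fc_id:
+ *
+ *	if (!efc_vport_create_spec(efc, 0, 0, U32_MAX, false, true,
+ *				   tgt_data, NULL))
+ *		efc_vport_start(domain);
+ */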
+
+int8_t
+efc_vport_create_spec(struct efc_lport *efc, uint64_t wwnn, uint64_t wwpn,
+		      u32 fc_id, bool enable_ini,
+		      bool enable_tgt, void *tgt_data, void *ini_data)
+{
+	struct efc_vport_spec_s *vport;
+
+	/*
+	 * walk the efc_vport_list and return failure
+	 * Walk the efc_vport_list and return failure if a valid vport
+	 * entry (one with a non-zero WWPN and WWNN) has already been
+	 * created
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if ((wwpn && vport->wwpn == wwpn) &&
+		    (wwnn && vport->wwnn == wwnn)) {
+			efc_log_test(efc,
+				     "Failed: VPORT %016llX %016llX already allocated\n",
+				     wwnn, wwpn);
+			return -1;
+		}
+	}
+
+	vport = kzalloc(sizeof(*vport), GFP_ATOMIC);
+	if (!vport)
+		return -1;
+
+	vport->wwnn = wwnn;
+	vport->wwpn = wwpn;
+	vport->fc_id = fc_id;
+	vport->domain_instance = 0;
+	vport->enable_tgt = enable_tgt;
+	vport->enable_ini = enable_ini;
+	vport->tgt_data = tgt_data;
+	vport->ini_data = ini_data;
+
+	INIT_LIST_HEAD(&vport->list_entry);
+	list_add_tail(&vport->list_entry, &efc->vport_list);
+	return 0;
+}
diff --git a/drivers/scsi/elx/libefc/efc_sport.h b/drivers/scsi/elx/libefc/efc_sport.h
new file mode 100644
index 000000000000..1fd4d1e8fabc
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sport.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/**
+ * EFC FC SLI port (SPORT) exported declarations
+ *
+ */
+
+#if !defined(__EFC_SPORT_H__)
+#define __EFC_SPORT_H__
+
+extern struct efc_sli_port_s *
+efc_sport_alloc(struct efc_domain_s *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt);
+extern void
+efc_sport_free(struct efc_sli_port_s *sport);
+extern void
+efc_sport_force_free(struct efc_sli_port_s *sport);
+extern struct efc_sli_port_s *
+efc_sport_find_wwn(struct efc_domain_s *domain, uint64_t wwnn, uint64_t wwpn);
+extern int
+efc_sport_attach(struct efc_sli_port_s *sport, u32 fc_id);
+
+extern void *
+__efc_sport_allocated(struct efc_sm_ctx_s *ctx,
+		      enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_sport_wait_shutdown(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_sport_wait_port_free(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_sport_vport_init(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_sport_vport_wait_alloc(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_sport_vport_allocated(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_sport_attached(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg);
+
+extern int
+efc_vport_start(struct efc_domain_s *domain);
+
+#endif /* __EFC_SPORT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 12/32] elx: libefc: Remote node state machine interfaces
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (10 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 11/32] elx: libefc: SLI and FC PORT " James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 13/32] elx: libefc: Fabric " James Smart
                   ` (20 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Remote node (aka remote port) allocation, initialization and
  destroy routines.
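
As a rough sketch (illustrative only; error handling, locking, and the HW
completion plumbing are elided), the discovery code is expected to drive a
remote node through these interfaces roughly as follows:

  /* sized once at init time; nodes are recycled through a free list */
  efc_node_create_pool(efc, node_count);

  /* allocate and attach a node for a newly discovered remote port */
  node = efc_node_alloc(sport, port_id, init, targ);
  if (node)
          efc_node_attach(node);

  /* the HW completion path calls efc_remote_node_cb(), which posts
   * EFC_EVT_NODE_ATTACH_OK/FAIL to the node's state machine
   */

  /* teardown returns the node to the free list; the pool is freed last */
  efc_node_free(node);
  efc_node_free_pool(efc);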

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_node.c | 1878 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_node.h |  196 ++++
 2 files changed, 2074 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h

diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
new file mode 100644
index 000000000000..dcb63b1515b2
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.c
@@ -0,0 +1,1878 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * EFC driver remote node handler.  This file contains code that is shared
+ * between fabric (efc_fabric.c) and device (efc_device.c) nodes.
+ */
+
+#include "efc.h"
+#include "efc_device.h"
+
+#define SCSI_IOFMT "[%04x][i:%0*x t:%0*x h:%04x]"
+#define SCSI_ITT_SIZE(efc)	4
+
+#define SCSI_IOFMT_ARGS(io) \
+	(io->instance_index, SCSI_ITT_SIZE(io->efc), \
+	 io->init_task_tag, SCSI_ITT_SIZE(io->efc), \
+	 io->tgt_task_tag, io->hw_tag)
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efc, "[%s]" SCSI_IOFMT fmt, \
+	io->node->display_name, SCSI_IOFMT_ARGS(io), ##__VA_ARGS__)
+
+#define MAX_ACC_REJECT_PAYLOAD	sizeof(struct fc_els_ls_rjt)
+
+/**
+ * @ingroup node_common
+ * @brief Handle remote node events from HW
+ *
+ * Handle remote node events from HW.
+ * Essentially the HW event is translated into
+ * a node state machine event that is posted to the affected node.
+ *
+ * @param arg pointer to efc
+ * @param event HW event to process
+ * @param data application specific data (pointer to the affected node)
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+int
+efc_remote_node_cb(void *arg, int event,
+		   void *data)
+{
+	struct efc_lport *efc = arg;
+	enum efc_sm_event_e	sm_event = EFC_EVT_LAST;
+	struct efc_remote_node_s *rnode = data;
+	struct efc_node_s *node = rnode->node;
+	unsigned long flags = 0;
+
+	switch (event) {
+	case EFC_HW_NODE_ATTACH_OK:
+		sm_event = EFC_EVT_NODE_ATTACH_OK;
+		break;
+
+	case EFC_HW_NODE_ATTACH_FAIL:
+		sm_event = EFC_EVT_NODE_ATTACH_FAIL;
+		break;
+
+	case EFC_HW_NODE_FREE_OK:
+		sm_event = EFC_EVT_NODE_FREE_OK;
+		break;
+
+	case EFC_HW_NODE_FREE_FAIL:
+		sm_event = EFC_EVT_NODE_FREE_FAIL;
+		break;
+
+	default:
+		efc_log_test(efc, "unhandled event %#x\n", event);
+		return -1;
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, sm_event, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	return 0;
+}
+
+/**
+ * @ingroup node_alloc
+ * @brief Find an FC node structure given the FC port ID
+ *
+ * @param sport the SPORT to search
+ * @param port_id FC port ID
+ *
+ * @return pointer to the object or NULL if not found
+ */
+struct efc_node_s *
+efc_node_find(struct efc_sli_port_s *sport, u32 port_id)
+{
+	struct efc_node_s *node;
+
+	node = efc_spv_get(sport->lookup, port_id);
+	return node;
+}
+
+/**
+ * @ingroup node_alloc
+ * @brief allocate node object pool
+ *
+ * A pool of struct efc_node_s objects is allocated.
+ *
+ * @param efc pointer to driver instance context
+ * @param node_count count of nodes to allocate
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+int
+efc_node_create_pool(struct efc_lport *efc, u32 node_count)
+{
+	u32 i;
+	struct efc_node_s *node;
+	u64 max_xfer_size;
+	struct efc_dma_s *dma;
+
+	efc->nodes_count = node_count;
+
+	efc->nodes = kmalloc_array(node_count, sizeof(struct efc_node_s *),
+				   GFP_ATOMIC);
+	if (!efc->nodes)
+		return -1;
+
+	memset(efc->nodes, 0, node_count * sizeof(struct efc_node_s *));
+
+	if (efc->max_xfer_size)
+		max_xfer_size = efc->max_xfer_size;
+	else
+		max_xfer_size = 65536;
+
+	INIT_LIST_HEAD(&efc->nodes_free_list);
+
+	for (i = 0; i < node_count; i++) {
+		dma = NULL;
+		node = kzalloc(sizeof(*node), GFP_ATOMIC);
+		if (!node) {
+			efc_log_err(efc, "node allocation failed");
+			goto error;
+		}
+		/* Assign any persistent field values */
+		node->instance_index = i;
+		node->max_wr_xfer_size = max_xfer_size;
+		node->rnode.indicator = U32_MAX;
+
+		dma = &node->sparm_dma_buf;
+		dma->size = 256;
+		dma->virt = dma_alloc_coherent(&efc->pcidev->dev, dma->size,
+					       &dma->phys, GFP_DMA);
+		if (!dma->virt) {
+			kfree(node);
+			efc_log_err(efc, "efc_dma_alloc failed");
+			goto error;
+		}
+
+		efc->nodes[i] = node;
+		INIT_LIST_HEAD(&node->list_entry);
+		list_add_tail(&node->list_entry, &efc->nodes_free_list);
+	}
+	return 0;
+
+error:
+	efc_node_free_pool(efc);
+	return -1;
+}
+
+/**
+ * @ingroup node_alloc
+ * @brief free node object pool
+ *
+ * The pool of previously allocated node objects is freed
+ *
+ * @param efc pointer to driver instance context
+ *
+ * @return none
+ */
+
+void
+efc_node_free_pool(struct efc_lport *efc)
+{
+	struct efc_node_s *node;
+	u32 i;
+	struct efc_dma_s *dma;
+
+	if (!efc->nodes)
+		return;
+
+	for (i = 0; i < efc->nodes_count; i++) {
+		node = efc->nodes[i];
+		if (node) {
+			/* free sparam_dma_buf */
+			dma = &node->sparm_dma_buf;
+			dma_free_coherent(&efc->pcidev->dev, dma->size,
+					  dma->virt, dma->phys);
+
+			kfree(node);
+		}
+		efc->nodes[i] = NULL;
+	}
+}
+
+/**
+ * @ingroup node_alloc
+ * @brief return pointer to node object given instance index
+ *
+ * A pointer to the node object given by an instance index is returned.
+ *
+ * @param efc pointer to driver instance context
+ * @param index instance index
+ *
+ * @return returns pointer to node object, or NULL
+ */
+
+struct efc_node_s *
+efc_node_get_instance(struct efc_lport *efc, u32 index)
+{
+	struct efc_node_s *node = NULL;
+
+	if (index >= efc->nodes_count) {
+		efc_log_test(efc, "invalid index: %d\n", index);
+		return NULL;
+	}
+	node = efc->nodes[index];
+	return node->attached ? node : NULL;
+}
+
+/**
+ * @ingroup node_alloc
+ * @brief Allocate an fc node structure and add to node list
+ *
+ * @param sport pointer to the SPORT from which this node is allocated
+ * @param port_id FC port ID of new node
+ * @param init Port is an initiator (sent a plogi)
+ * @param targ Port is potentially a target
+ *
+ * @return pointer to the object or NULL if none available
+ */
+
+struct efc_node_s *efc_node_alloc(struct efc_sli_port_s *sport,
+				  u32 port_id, bool init, bool targ)
+{
+	int rc;
+	struct efc_node_s *node = NULL;
+	u32 instance_index;
+	u64 max_wr_xfer_size;
+	struct efc_lport *efc = sport->efc;
+	struct efc_dma_s sparm_dma_buf;
+
+	if (sport->shutting_down) {
+		efc_log_debug(efc, "node allocation when shutting down %06x",
+			      port_id);
+		return NULL;
+	}
+
+	if (!list_empty(&efc->nodes_free_list)) {
+		node = list_first_entry(&efc->nodes_free_list,
+					struct efc_node_s, list_entry);
+		list_del(&node->list_entry);
+	}
+
+	if (!node) {
+		efc_log_err(efc, "node allocation failed %06x", port_id);
+		return NULL;
+	}
+
+	/* Save persistent values across memset zero */
+	instance_index = node->instance_index;
+	max_wr_xfer_size = node->max_wr_xfer_size;
+	sparm_dma_buf = node->sparm_dma_buf;
+
+	memset(node, 0, sizeof(*node));
+	node->instance_index = instance_index;
+	node->max_wr_xfer_size = max_wr_xfer_size;
+	node->sparm_dma_buf = sparm_dma_buf;
+	node->rnode.indicator = U32_MAX;
+
+	node->sport = sport;
+	INIT_LIST_HEAD(&node->list_entry);
+	list_add_tail(&node->list_entry, &sport->node_list);
+
+	node->efc = efc;
+	node->init = init;
+	node->targ = targ;
+
+	spin_lock_init(&node->pend_frames_lock);
+	INIT_LIST_HEAD(&node->pend_frames);
+	spin_lock_init(&node->active_ios_lock);
+	INIT_LIST_HEAD(&node->active_ios);
+	INIT_LIST_HEAD(&node->els_io_pend_list);
+	INIT_LIST_HEAD(&node->els_io_active_list);
+	efc->tt.scsi_io_alloc_enable(efc, node);
+
+	rc = efc->tt.hw_node_alloc(efc, &node->rnode, port_id, sport);
+	if (rc) {
+		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);
+		return NULL;
+	}
+	/* zero the service parameters */
+	memset(node->sparm_dma_buf.virt, 0, node->sparm_dma_buf.size);
+
+	node->rnode.node = node;
+	node->sm.app = node;
+	node->evtdepth = 0;
+
+	efc_node_update_display_name(node);
+
+	efc_spv_set(sport->lookup, port_id, node);
+
+	return node;
+}
+
+/**
+ * @ingroup node_alloc
+ * @brief free a node structure
+ *
+ * The node structure given by 'node' is freed
+ *
+ * @param node the node to free
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+int
+efc_node_free(struct efc_node_s *node)
+{
+	struct efc_sli_port_s *sport;
+	struct efc_lport *efc;
+	int rc = 0;
+	struct efc_node_s *ns = NULL;
+	bool post_all_free = false;
+
+	sport = node->sport;
+	efc = node->efc;
+
+	node_printf(node, "Free'd\n");
+
+	if (node->refound) {
+		/*
+		 * Save the name server node. We will send fake RSCN event at
+		 * the end to handle ignored RSCN event during node deletion
+		 */
+		ns = efc_node_find(node->sport, FC_FID_DIR_SERV);
+	}
+
+	/* Remove from node list */
+	list_del(&node->list_entry);
+
+	/* Free HW resources */
+	rc = efc->tt.hw_node_free_resources(efc, &node->rnode);
+	if (EFC_HW_RTN_IS_ERROR(rc)) {
+		efc_log_test(efc, "efc_hw_node_free failed: %d\n", rc);
+		rc = -1;
+	}
+
+	/* if the gidpt_delay_timer is still running, then delete it */
+	if (timer_pending(&node->gidpt_delay_timer))
+		del_timer(&node->gidpt_delay_timer);
+
+	/* remove entry from sparse vector list */
+	if (!sport->lookup) {
+		efc_log_test(node->efc,
+			     "assertion failed: sport lookup is NULL\n");
+		return -1;
+	}
+
+	efc_spv_set(sport->lookup, node->rnode.fc_id, NULL);
+
+	/*
+	 * If the node_list is empty,
+	 * then post a ALL_CHILD_NODES_FREE event to the sport,
+	 * after the lock is released.
+	 * The sport may be free'd as a result of the event.
+	 */
+	if (list_empty(&sport->node_list))
+		post_all_free = true;
+
+	if (post_all_free) {
+		efc_sm_post_event(&sport->sm, EFC_EVT_ALL_CHILD_NODES_FREE,
+				  NULL);
+	}
+
+	node->sport = NULL;
+	node->sm.current_state = NULL;
+
+	/* return to free list */
+	INIT_LIST_HEAD(&node->list_entry);
+	list_add_tail(&node->list_entry, &efc->nodes_free_list);
+
+	if (ns) {
+		/* sending fake RSCN event to name server node */
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, NULL);
+	}
+
+	return rc;
+}
+
+/**
+ * @brief free memory resources of a node object
+ *
+ * The node object's child objects are freed after which the
+ * node object is freed.
+ *
+ * @param node pointer to a node object
+ *
+ * @return none
+ */
+
+void
+efc_node_force_free(struct efc_node_s *node)
+{
+	struct efc_lport *efc = node->efc;
+	/* shutdown sm processing */
+	efc_sm_disable(&node->sm);
+
+	strncpy(node->prev_state_name, node->current_state_name,
+		sizeof(node->prev_state_name));
+	strncpy(node->current_state_name, "disabled",
+		sizeof(node->current_state_name));
+
+	efc->tt.node_io_cleanup(efc, node, true);
+	efc->tt.node_els_cleanup(efc, node, true);
+
+	/* manually purge pending frames (if any) */
+	efc->tt.node_purge_pending(efc, node);
+
+	efc_node_free(node);
+}
+
+/**
+ * @ingroup node_common
+ * @brief Perform HW call to attach a remote node
+ *
+ * @param node pointer to node object
+ *
+ * @return 0 on success, non-zero otherwise
+ */
+int
+efc_node_attach(struct efc_node_s *node)
+{
+	int rc = 0;
+	struct efc_sli_port_s *sport = node->sport;
+	struct efc_domain_s *domain = sport->domain;
+	struct efc_lport *efc = node->efc;
+
+	if (!domain->attached) {
+		efc_log_test(efc,
+			     "Warning: unattached domain\n");
+		return -1;
+	}
+	/* Update node->wwpn/wwnn */
+
+	efc_node_build_eui_name(node->wwpn, sizeof(node->wwpn),
+				efc_node_get_wwpn(node));
+	efc_node_build_eui_name(node->wwnn, sizeof(node->wwnn),
+				efc_node_get_wwnn(node));
+
+	efc_dma_copy_in(&node->sparm_dma_buf, node->service_params + 4,
+			sizeof(node->service_params) - 4);
+
+	/* take lock to protect node->rnode.attached */
+	rc = efc->tt.hw_node_attach(efc, &node->rnode, &node->sparm_dma_buf);
+	if (EFC_HW_RTN_IS_ERROR(rc))
+		efc_log_test(efc, "efc_hw_node_attach failed: %d\n", rc);
+
+	return rc;
+}
+
+/**
+ * @ingroup node_common
+ * @brief Generate text for a node's fc_id
+ *
+ * The text for a node's fc_id is generated,
+ * either as a well-known name or as a 6-digit
+ * hex value.
+ *
+ * @param fc_id fc_id
+ * @param buffer text buffer
+ * @param buffer_length text buffer length in bytes
+ *
+ * @return none
+ */
+
+void
+efc_node_fcid_display(u32 fc_id, char *buffer, u32 buffer_length)
+{
+	switch (fc_id) {
+	case FC_FID_FLOGI:
+		snprintf(buffer, buffer_length, "fabric");
+		break;
+	case FC_FID_FCTRL:
+		snprintf(buffer, buffer_length, "fabctl");
+		break;
+	case FC_FID_DIR_SERV:
+		snprintf(buffer, buffer_length, "nserve");
+		break;
+	default:
+		if (fc_id == FC_FID_DOM_MGR) {
+			snprintf(buffer, buffer_length, "dctl%02x",
+				 (fc_id & 0x0000ff));
+		} else {
+			snprintf(buffer, buffer_length, "%06x", fc_id);
+		}
+		break;
+	}
+}
+
+/**
+ * @brief update the node's display name
+ *
+ * The node's display name is updated; this is sometimes needed because the
+ * sport portion of the name is updated after the node is allocated.
+ *
+ * @param node pointer to the node object
+ *
+ * @return none
+ */
+
+void
+efc_node_update_display_name(struct efc_node_s *node)
+{
+	u32 port_id = node->rnode.fc_id;
+	struct efc_sli_port_s *sport = node->sport;
+	char portid_display[16];
+
+	efc_node_fcid_display(port_id, portid_display, sizeof(portid_display));
+
+	snprintf(node->display_name, sizeof(node->display_name), "%s.%s",
+		 sport->display_name, portid_display);
+}
+
+/**
+ * @brief Clean up the XRI for a pending link services accept by aborting
+ * the XRI if required.
+ *
+ * <h3 class="desc">Description</h3>
+ * This function is called when the LS accept is not sent.
+ *
+ * @param node Node for which the pending LS accept should be cleaned up
+ */
+
+void
+efc_node_send_ls_io_cleanup(struct efc_node_s *node)
+{
+	struct efc_lport *efc = node->efc;
+
+	if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE) {
+		efc_log_debug(efc, "[%s] cleaning up LS_ACC oxid=0x%x\n",
+			      node->display_name, node->ls_acc_oxid);
+
+		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+		node->ls_acc_io = NULL;
+	}
+}
+
+/**
+ * @ingroup node_common
+ * @brief state: shutdown a node
+ *
+ * A node is shut down.
+ *
+ * @param ctx remote node sm context
+ * @param evt event to process
+ * @param arg per event optional argument
+ *
+ * @return returns NULL
+ *
+
+void *
+__efc_node_shutdown(struct efc_sm_ctx_s *ctx,
+		    enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	unsigned long flags = 0;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		efc_node_hold_frames(node);
+		efc_assert(efc_node_active_ios_empty(node), NULL);
+		efc_assert(efc_els_io_list_empty(node,
+						 &node->els_io_active_list),
+			   NULL);
+
+		/* by default, we will be freeing node after we unwind */
+		node->req_free = true;
+
+		switch (node->shutdown_reason) {
+		case EFC_NODE_SHUTDOWN_IMPLICIT_LOGO:
+			/*
+			 * sm: if shutdown reason is
+			 * implicit logout / efc_node_attach
+			 */
+			/* Node shutdown b/c of PLOGI received when node
+			 * already logged in. We have PLOGI service
+			 * parameters, so submit node attach; we won't be
+			 * freeing this node
+			 */
+
+			/* currently, only case for implicit logo is PLOGI
+			 * recvd. Thus, node's ELS IO pending list won't be
+			 * empty (PLOGI will be on it)
+			 */
+			efc_assert(node->send_ls_acc ==
+				   EFC_NODE_SEND_LS_ACC_PLOGI, NULL);
+			node_printf(node,
+				    "Shutdown reason: implicit logout, re-authenticate\n");
+
+			efc->tt.scsi_io_alloc_enable(efc, node);
+
+			/* Re-attach node with the same HW node resources */
+			node->req_free = false;
+			rc = efc_node_attach(node);
+			efc_node_transition(node, __efc_d_wait_node_attach,
+					    NULL);
+			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
+				efc_node_post_event(node,
+						    EFC_EVT_NODE_ATTACH_OK,
+						    NULL);
+			}
+			break;
+		case EFC_NODE_SHUTDOWN_EXPLICIT_LOGO: {
+			s8 pend_frames_empty;
+			struct list_head *list;
+
+			/* cleanup any pending LS_ACC ELSs */
+			efc_node_send_ls_io_cleanup(node);
+			list = &node->els_io_pend_list;
+			efc_assert(efc_els_io_list_empty(node, list), NULL);
+
+			spin_lock_irqsave(&node->pend_frames_lock, flags);
+			pend_frames_empty = list_empty(&node->pend_frames);
+			spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+			/*
+			 * there are two scenarios where we want to keep
+			 * this node alive:
+			 * 1. there are pending frames that need to be
+			 *    processed or
+			 * 2. we're an initiator and the remote node is
+			 *    a target and we need to re-authenticate
+			 */
+			node_printf(node,
+				    "Shutdown: explicit logo pend=%d sport.ini=%d node.tgt=%d\n",
+				    !pend_frames_empty,
+				    node->sport->enable_ini, node->targ);
+
+			if (!pend_frames_empty ||
+			    (node->sport->enable_ini && node->targ)) {
+				u8 send_plogi = false;
+
+				if (node->sport->enable_ini && node->targ) {
+					/*
+					 * we're an initiator and
+					 * node shutting down is a target;
+					 * we'll need to re-authenticate in
+					 * initial state
+					 */
+					send_plogi = true;
+				}
+
+				/*
+				 * transition to __efc_d_init
+				 * (will retain HW node resources)
+				 */
+				efc->tt.scsi_io_alloc_enable(efc, node);
+				node->req_free = false;
+
+				/*
+				 * either pending frames exist,
+				 * or we're re-authenticating with PLOGI
+				 * (or both); in either case,
+				 * return to initial state
+				 */
+				efc_node_init_device(node, send_plogi);
+			}
+			/* else: let node shutdown occur */
+			break;
+		}
+		case EFC_NODE_SHUTDOWN_DEFAULT:
+		default: {
+			struct list_head *list;
+
+			/*
+			 * shutdown due to link down,
+			 * node going away (xport event) or
+			 * sport shutdown, purge pending and
+			 * proceed to cleanup node
+			 */
+
+			/* cleanup any pending LS_ACC ELSs */
+			efc_node_send_ls_io_cleanup(node);
+			list = &node->els_io_pend_list;
+			efc_assert(efc_els_io_list_empty(node, list), NULL);
+
+			node_printf(node,
+				    "Shutdown reason: default, purge pending\n");
+			efc->tt.node_purge_pending(efc, node);
+			break;
+		}
+		}
+
+		break;
+	}
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup common_node
+ * @brief Checks to see if ELS's have been quiesced
+ *
+ * Check if ELS's have been quiesced. If so, transition to the
+ * next state in the shutdown process.
+ *
+ * @param node Node for which ELS's are checked
+ *
+ * @return Returns 1 if ELS's have been quiesced, 0 otherwise.
+ */
+static int
+efc_node_check_els_quiesced(struct efc_node_s *node)
+{
+	/* check to see if ELS requests, completions are quiesced */
+	if (node->els_req_cnt == 0 && node->els_cmpl_cnt == 0 &&
+	    efc_els_io_list_empty(node, &node->els_io_active_list)) {
+		if (!node->attached) {
+			/* hw node detach already completed, proceed */
+			node_printf(node, "HW node not attached\n");
+			efc_node_transition(node,
+					    __efc_node_wait_ios_shutdown,
+					     NULL);
+		} else {
+			/*
+			 * hw node detach hasn't completed,
+			 * transition and wait
+			 */
+			node_printf(node, "HW node still attached\n");
+			efc_node_transition(node, __efc_node_wait_node_free,
+					    NULL);
+		}
+		return 1;
+	}
+	return 0;
+}
+
+/**
+ * @ingroup common_node
+ * @brief Initiate node IO cleanup.
+ *
+ * Note: this function must be called with a non-attached node
+ * or a node for which the node detach (efc_hw_node_detach())
+ * has already been initiated.
+ *
+ * @param node Node for which shutdown is initiated
+ *
+ * @return Returns None.
+ */
+
+void
+efc_node_initiate_cleanup(struct efc_node_s *node)
+{
+	struct efc_lport *efc;
+
+	efc = node->efc;
+	efc->tt.node_els_cleanup(efc, node, false);
+
+	/*
+	 * if ELS's have already been quiesced, will move to next state
+	 * if ELS's have not been quiesced, abort them
+	 */
+	if (efc_node_check_els_quiesced(node) == 0) {
+		/*
+		 * Abort all ELS's since ELS's won't be aborted by HW
+		 * node free.
+		 */
+		efc_node_hold_frames(node);
+		efc->tt.node_abort_all_els(efc, node);
+		efc_node_transition(node, __efc_node_wait_els_shutdown, NULL);
+	}
+}
+
+/**
+ * @ingroup node_common
+ * @brief Node state machine: Wait for all ELSs to complete.
+ *
+ * <h3 class="desc">Description</h3>
+ * State waits for all ELSs to complete after aborting all
+ * outstanding ELSs.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_node_wait_els_shutdown(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg)
+{
+	bool check_quiesce = false;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			node_printf(node, "All ELS IOs complete\n");
+			check_quiesce = true;
+		}
+		break;
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* all ELS IO's complete */
+		node_printf(node, "All ELS IOs complete\n");
+		efc_assert(efc_els_io_list_empty(node,
+						 &node->els_io_active_list),
+			   NULL);
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	if (check_quiesce)
+		efc_node_check_els_quiesced(node);
+
+	return NULL;
+}
+
+/**
+ * @ingroup node_common
+ * @brief Node state machine: Wait for a HW node free event to
+ * complete.
+ *
+ * <h3 class="desc">Description</h3>
+ * State waits for the node free event to be received from the HW.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_node_wait_node_free(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+		/* node is officially no longer attached */
+		node->attached = false;
+		efc_node_transition(node, __efc_node_wait_ios_shutdown, NULL);
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		/* As IOs and ELS IO's complete we expect to get these events */
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* Fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup node_common
+ * @brief state: initiate node shutdown
+ *
+ * State is entered when a node receives a shutdown event, and it's waiting
+ * for all the active IOs and ELS IOs associated with the node to complete.
+ *
+ * @param ctx remote node sm context
+ * @param evt event to process
+ * @param arg per event optional argument
+ *
+ * @return returns NULL
+ */
+
+void *
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+
+		/* first check to see if no ELS IOs are outstanding */
+		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			/* If there are any active IOS, Free them. */
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		}
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		if (efc_node_active_ios_empty(node) &&
+		    efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		}
+		break;
+	}
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s\n", node->display_name,
+			      efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup node_common
+ * @brief state: common node event handler
+ *
+ * Handle common/shared node events
+ *
+ * @param funcname calling function's name
+ * @param ctx remote node sm context
+ * @param evt event to process
+ * @param arg per event optional argument
+ *
+ * @return returns NULL
+ */
+
+void *
+__efc_node_common(const char *funcname, struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = NULL;
+	struct efc_lport *efc = NULL;
+	struct efc_node_cb_s *cbdata = arg;
+
+	efc_assert(ctx, NULL);
+	efc_assert(ctx->app, NULL);
+	node = ctx->app;
+	efc_assert(node->efc, NULL);
+	efc = node->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
+	case EFC_EVT_NODE_MISSING:
+	case EFC_EVT_FCP_CMD_RCVD:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		node->refound = true;
+		break;
+
+	/*
+	 * node->attached must be set appropriately
+	 * for all node attach/detach events
+	 */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		break;
+
+	/*
+	 * handle any ELS completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		break;
+
+	/*
+	 * handle any ELS request completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_ELS_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * Unsupported ELS was received,
+		 * send LS_RJT, command not supported
+		 */
+		efc_log_debug(efc,
+			      "[%s] (%s) ELS x%02x, LS_RJT not supported\n",
+			      node->display_name, funcname,
+			      ((uint8_t *)cbdata->payload->dma.virt)[0]);
+
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+					ELS_RJT_UNSUP, ELS_EXPL_NONE, 0);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD:
+	case EFC_EVT_FLOGI_RCVD:
+	case EFC_EVT_LOGO_RCVD:
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/* sm: / send ELS_RJT */
+		efc_log_debug(efc, "[%s] (%s) %s sending ELS_RJT\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+		/* if we didn't catch this in a state, send generic LS_RJT */
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+						ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
+
+		break;
+	}
+	case EFC_EVT_GID_PT_RCVD:
+	case EFC_EVT_RFT_ID_RCVD:
+	case EFC_EVT_RFF_ID_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		efc_log_debug(efc, "[%s] (%s) %s sending CT_REJECT\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+		efc->tt.send_ct_rsp(efc, node, hdr->fh_ox_id,
+				cbdata->payload->dma.virt,
+				FC_FS_RJT, FC_FS_RJT_UNSUP, 0);
+		break;
+	}
+
+	case EFC_EVT_ABTS_RCVD: {
+		efc_log_debug(efc, "[%s] (%s) %s sending BA_ACC\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+
+		/* sm: / send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		efc_log_test(node->efc, "[%s] %-20s %-20s not handled\n",
+			     node->display_name, funcname,
+			     efc_sm_event_name(evt));
+		break;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup node_common
+ * @brief save node service parameters
+ *
+ * Service parameters are copied into the node structure
+ *
+ * @param node pointer to node structure
+ * @param payload pointer to service parameters to save
+ *
+ * @return none
+ */
+
+void
+efc_node_save_sparms(struct efc_node_s *node, void *payload)
+{
+	memcpy(node->service_params, payload, sizeof(node->service_params));
+}
+
+/**
+ * @ingroup node_common
+ * @brief Post event to node state machine context
+ *
+ * This is used by the node state machine code to post events to the nodes.
+ * Upon completion of the event posting, if the nesting depth is zero and
+ * we're not holding inbound frames, then the pending frames are processed.
+ *
+ * @param node pointer to node
+ * @param evt event to post
+ * @param arg event posting argument
+ *
+ * @return none
+ */
+
+void
+efc_node_post_event(struct efc_node_s *node,
+		    enum efc_sm_event_e evt, void *arg)
+{
+	bool free_node = false;
+
+	node->evtdepth++;
+
+	efc_sm_post_event(&node->sm, evt, arg);
+
+	/* If our event call depth is one and
+	 * we're not holding frames
+	 * then we can dispatch any pending frames.
+	 * We don't want to allow the efc_process_node_pending()
+	 * call to recurse.
+	 */
+	if (!node->hold_frames && node->evtdepth == 1)
+		efc_process_node_pending(node);
+
+	node->evtdepth--;
+
+	/*
+	 * Free the node object if so requested,
+	 * and we're at an event call depth of zero
+	 */
+	if (node->evtdepth == 0 && node->req_free)
+		free_node = true;
+
+	if (free_node)
+		efc_node_free(node);
+}
+
+/**
+ * @ingroup node_common
+ * @brief transition state of a node
+ *
+ * The node's state is transitioned to the requested state.
+ * Entry/Exit events are posted as needed.
+ *
+ * @param node pointer to node
+ * @param state state to transition to
+ * @param data transition data
+ *
+ * @return none
+ */
+
+void
+efc_node_transition(struct efc_node_s *node,
+		    void *(*state)(struct efc_sm_ctx_s *,
+				   enum efc_sm_event_e, void *), void *data)
+
+{
+	struct efc_sm_ctx_s *ctx = &node->sm;
+
+	if (ctx->current_state == state) {
+		efc_node_post_event(node, EFC_EVT_REENTER, data);
+	} else {
+		efc_node_post_event(node, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_node_post_event(node, EFC_EVT_ENTER, data);
+	}
+}
+
+/**
+ * @ingroup node_common
+ * @brief build EUI formatted WWN
+ *
+ * Build a WWN using the somewhat transport-agnostic
+ * iSCSI naming convention; for FC,
+ * use the eui. format, an ASCII string such as: "eui.10000000C9A19501"
+ *
+ * @param buffer buffer to place formatted name into
+ * @param buffer_len length in bytes of the buffer
+ * @param eui_name cpu endian 64 bit WWN value
+ *
+ * @return none
+ */
+
+void
+efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name)
+{
+	memset(buffer, 0, buffer_len);
+
+	snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
+}
+
+/**
+ * @ingroup node_common
+ * @brief return the node's WWPN as a uint64_t
+ *
+ * The WWPN is computed from service parameters and returned as a uint64_t
+ *
+ * @param node pointer to node structure
+ *
+ * @return WWPN
+ *
+ */
+
+uint64_t
+efc_node_get_wwpn(struct efc_node_s *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
+/**
+ * @ingroup node_common
+ * @brief return the node's WWNN as a uint64_t
+ *
+ * The WWNN is computed from service parameters and returned as a uint64_t
+ *
+ * @param node pointer to node structure
+ *
+ * @return WWNN
+ *
+ */
+
+uint64_t
+efc_node_get_wwnn(struct efc_node_s *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwnn);
+}
+
+/**
+ * @brief check ELS request completion
+ *
+ * Check ELS request completion event to make sure it's for the
+ * ELS request we expect. If not, invoke given common event
+ * handler and return an error.
+ *
+ * @param ctx state machine context
+ * @param evt ELS request event
+ * @param arg event argument
+ * @param cmd ELS command expected
+ * @param efc_node_common_func common event handler to call if ELS
+ *	doesn't match
+ * @param funcname function name that called this
+ *
+ * @return zero if ELS command matches, -1 otherwise
+ */
+int
+efc_node_check_els_req(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt,
+		       void *arg, uint8_t cmd,
+			void *(*efc_node_common_func)(const char *,
+						      struct efc_sm_ctx_s *,
+			       enum efc_sm_event_e, void *),
+			const char *funcname)
+{
+	return 0;
+}
+
+/**
+ * @brief check NS request completion
+ *
+ * Check ELS request completion event to make sure it's for the
+ * nameserver request we expect. If not, invoke given common
+ * event handler and return an error.
+ *
+ * @param ctx state machine context
+ * @param evt ELS request event
+ * @param arg event argument
+ * @param cmd nameserver command expected
+ * @param efc_node_common_func common event handler to call if
+ *                         nameserver cmd doesn't match
+ * @param funcname function name that called this
+ *
+ * @return zero if NS command matches, -1 otherwise
+ */
+int
+efc_node_check_ns_req(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt,
+		      void *arg, uint16_t cmd,
+		       void *(*efc_node_common_func)(const char *,
+						     struct efc_sm_ctx_s *,
+			      enum efc_sm_event_e, void *),
+		       const char *funcname)
+{
+	return 0;
+}
+
+/**
+ * @brief Return TRUE if active ios list is empty
+ *
+ * Test if node->active_ios list is empty while
+ * holding the node->active_ios_lock.
+ *
+ * @param node pointer to node object
+ *
+ * @return TRUE if node active ios list is empty
+ */
+
+int
+efc_node_active_ios_empty(struct efc_node_s *node)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	empty = list_empty(&node->active_ios);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+int
+efc_els_io_list_empty(struct efc_node_s *node, struct list_head *list)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	empty = list_empty(list);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+/**
+ * @brief Pause a node
+ *
+ * The node is placed in the __efc_node_paused state after saving the state
+ * to return to
+ *
+ * @param node Pointer to node object
+ * @param state State to resume to
+ *
+ * @return none
+ */
+
+void
+efc_node_pause(struct efc_node_s *node,
+	       void *(*state)(struct efc_sm_ctx_s *,
+			      enum efc_sm_event_e, void *))
+
+{
+	node->nodedb_state = state;
+	efc_node_transition(node, __efc_node_paused, NULL);
+}
+
+/**
+ * @brief Paused node state
+ *
+ * This state is entered when a state is "paused". When resumed, the node
+ * is transitioned to a previously saved state (node->nodedb_state)
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return returns NULL
+ */
+
+void *
+__efc_node_paused(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node_printf(node, "Paused\n");
+		break;
+
+	case EFC_EVT_RESUME: {
+		void *(*pf)(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg);
+
+		pf = node->nodedb_state;
+
+		node->nodedb_state = NULL;
+		efc_node_transition(node, pf, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node->req_free = true;
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+/**
+ * @brief Resume a paused state
+ *
+ * Posts a resume event to the paused node.
+ *
+ * @param node Pointer to node object
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+int
+efc_node_resume(struct efc_node_s *node)
+{
+	efc_node_post_event(node, EFC_EVT_RESUME, NULL);
+
+	return 0;
+}
+
+/**
+ * @ingroup node_common
+ * @brief Dispatch a ELS frame.
+ *
+ * <h3 class="desc">Description</h3>
+ * An ELS frame is dispatched to the \c node state machine.
+ * RQ Pair mode: this function is always called with a NULL hw
+ * io.
+ *
+ * @param node Node that originated the frame.
+ * @param seq header/payload sequence buffers
+ *
+ * @return Returns 0 if frame processed and RX buffers cleaned
+ * up appropriately, -1 if frame not handled and RX buffers need
+ * to be returned.
+ */
+
+int
+efc_node_recv_els_frame(struct efc_node_s *node,
+			struct efc_hw_sequence_s *seq)
+{
+	unsigned long flags = 0;
+	u32 prli_size = sizeof(struct fc_els_prli) + sizeof(struct fc_els_spp);
+	struct {
+		u32 cmd;
+		enum efc_sm_event_e evt;
+		u32 payload_size;
+	} els_cmd_list[] = {
+		{ELS_PLOGI, EFC_EVT_PLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_FLOGI, EFC_EVT_FLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_LOGO, EFC_EVT_LOGO_RCVD, sizeof(struct fc_els_ls_acc)},
+		{ELS_PRLI, EFC_EVT_PRLI_RCVD, prli_size},
+		{ELS_PRLO, EFC_EVT_PRLO_RCVD, prli_size},
+		{ELS_PDISC, EFC_EVT_PDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_FDISC, EFC_EVT_FDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_ADISC, EFC_EVT_ADISC_RCVD,	sizeof(struct fc_els_adisc)},
+		{ELS_RSCN, EFC_EVT_RSCN_RCVD, MAX_ACC_REJECT_PAYLOAD},
+		{ELS_SCR, EFC_EVT_SCR_RCVD, MAX_ACC_REJECT_PAYLOAD},
+	};
+	struct efc_node_cb_s cbdata;
+	u8 *buf = seq->payload->dma.virt;
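+	/* default event posted when the ELS command is not in the table */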
+	enum efc_sm_event_e evt = EFC_EVT_ELS_RCVD;
+	u32 i;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	/* find a matching event for the ELS command */
+	for (i = 0; i < ARRAY_SIZE(els_cmd_list); i++) {
+		if (els_cmd_list[i].cmd == buf[0]) {
+			evt = els_cmd_list[i].evt;
+			break;
+		}
+	}
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, evt, &cbdata);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+
+	return 0;
+}
+
+/**
+ * @ingroup node_common
+ * @brief Dispatch a CT frame.
+ *
+ * <h3 class="desc">Description</h3>
+ * A CT frame is dispatched to the \c node state machine.
+ * RQ Pair mode: this function is always called with a NULL hw
+ * io.
+ *
+ * @param node Node that originated the frame.
+ * @param seq header/payload sequence buffers
+ *
+ * @return Returns 0 if frame processed and RX buffers cleaned
+ * up appropriately, -1 if frame not handled and RX buffers need
+ * to be returned.
+ */
+
+int
+efc_node_recv_ct_frame(struct efc_node_s *node,
+		       struct efc_hw_sequence_s *seq)
+{
+	struct fc_ct_hdr  *iu = seq->payload->dma.virt;
+	enum efc_sm_event_e evt = EFC_EVT_ELS_RCVD;
+	u16 gscmd = be16_to_cpu(iu->ct_cmd);
+	struct efc_node_cb_s cbdata;
+	unsigned long flags = 0;
+	u32 i;
+	struct {
+		u32 cmd;
+		enum efc_sm_event_e evt;
+		u32 payload_size;
+	} ct_cmd_list[] = {
+		{FC_NS_RFF_ID, EFC_EVT_RFF_ID_RCVD, 100},
+		{FC_NS_RFT_ID, EFC_EVT_RFT_ID_RCVD, 100},
+		{FC_NS_GNN_ID, EFC_EVT_GNN_ID_RCVD, 100},
+		{FC_NS_GPN_ID, EFC_EVT_GPN_ID_RCVD, 100},
+		{FC_NS_GID_PT, EFC_EVT_GID_PT_RCVD, 256},
+		{FC_NS_RPN_ID, EFC_EVT_RPN_ID_RCVD, 100},
+		{FC_NS_RNN_ID, EFC_EVT_RNN_ID_RCVD, 100},
+		{FC_NS_RSNN_NN, EFC_EVT_RSNN_NN_RCVD, 100},
+		{FC_NS_RSPN_ID, EFC_EVT_RSPN_ID_RCVD, 100},
+	};
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	/* find a matching event for the ELS/GS command */
+	for (i = 0; i < ARRAY_SIZE(ct_cmd_list); i++) {
+		if (ct_cmd_list[i].cmd == gscmd) {
+			evt = ct_cmd_list[i].evt;
+			break;
+		}
+	}
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, evt, &cbdata);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+
+	return 0;
+}
+
+/**
+ * @ingroup node_common
+ * @brief Dispatch an FCP command frame when the node is not ready.
+ *
+ * <h3 class="desc">Description</h3>
+ * A frame is dispatched to the \c node state machine.
+ *
+ * @param node Node that originated the frame.
+ * @param seq header/payload sequence buffers
+ *
+ * @return Returns 0 if frame processed and RX buffers cleaned
+ * up appropriately, -1 if frame not handled.
+ */
+
+int
+efc_node_recv_fcp_cmd(struct efc_node_s *node, struct efc_hw_sequence_s *seq)
+{
+	struct efc_node_cb_s cbdata;
+	unsigned long flags = 0;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+
+	return 1;
+}
+
+/**
+ * @ingroup node_common
+ * @brief Stub handler for non-ABTS BLS frames
+ *
+ * <h3 class="desc">Description</h3>
+ * Log message and drop. Customer can plumb it to their back-end as needed
+ *
+ * @param node Node that originated the frame.
+ * @param seq header/payload sequence buffers
+ *
+ * @return Returns -1; the caller is responsible for returning the RX buffers.
+ */
+
+int
+efc_node_recv_bls_no_sit(struct efc_node_s *node,
+			 struct efc_hw_sequence_s *seq)
+{
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+
+	node_printf(node,
+		    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
+		    cpu_to_be32(((u32 *)hdr)[0]),
+		    cpu_to_be32(((u32 *)hdr)[1]),
+		    cpu_to_be32(((u32 *)hdr)[2]),
+		    cpu_to_be32(((u32 *)hdr)[3]),
+		    cpu_to_be32(((u32 *)hdr)[4]),
+		    cpu_to_be32(((u32 *)hdr)[5]));
+
+	return -1;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Process pending frames queued to the given node.
+ *
+ * <h3 class="desc">Description</h3>
+ * Frames that are queued for the \c node are dispatched and returned
+ * to the RQ.
+ *
+ * @param node Node of the queued frames that are to be dispatched.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efc_process_node_pending(struct efc_node_s *node)
+{
+	struct efc_lport *efc = node->efc;
+	struct efc_hw_sequence_s *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (node->hold_frames)
+			break;
+
+		/* Get next frame/sequence */
+		seq = NULL;
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		if (!list_empty(&node->pend_frames)) {
+			seq = list_first_entry(&node->pend_frames,
+					       struct efc_hw_sequence_s,
+					       list_entry);
+			list_del(&seq->list_entry);
+		}
+		if (!seq) {
+			pend_frames_processed = node->pend_frames_processed;
+			node->pend_frames_processed = 0;
+			spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+			break;
+		}
+		node->pend_frames_processed++;
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		efc_node_dispatch_frame(node, seq);
+	}
+
+	if (pend_frames_processed != 0)
+		efc_log_debug(efc, "%u node frames held and processed\n",
+			      pend_frames_processed);
+
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Notify that delete initiator is complete.
+ *
+ * @par Description
+ * Sent by the target-server to notify the base driver that the work
+ * started from efc_scsi_del_initiator() is now complete and that it
+ * is safe for the node to release the rest of its resources.
+ *
+ * @param efc Pointer to the efc object.
+ * @param node Pointer to the node.
+ *
+ * @return None.
+ */
+
+void
+efc_scsi_del_initiator_complete(struct efc_lport *efc, struct efc_node_s *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Notify that delete target is complete.
+ *
+ * @par Description
+ * Sent by the initiator-client to notify the base driver that the
+ * work started from efc_scsi_del_target() is now complete and
+ * that it is safe for the node to release the rest of its resources.
+ *
+ * @param efc Pointer to the efc object.
+ * @param node Pointer to the node.
+ *
+ * @return None.
+ */
+void
+efc_scsi_del_target_complete(struct efc_lport *efc, struct efc_node_s *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void
+efc_scsi_io_list_empty(struct efc_lport *efc, struct efc_node_s *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, NULL);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void efc_node_post_els_resp(struct efc_node_s *node,
+			    enum efc_hw_node_els_event_e evt, void *arg)
+{
+	enum efc_sm_event_e	sm_event = EFC_EVT_LAST;
+	struct efc_lport	*efc = node->efc;
+	unsigned long flags = 0;
+
+	switch (evt) {
+	case EFC_HW_SRRS_ELS_REQ_OK:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_OK;
+		break;
+	case EFC_HW_SRRS_ELS_CMPL_OK:
+		sm_event = EFC_EVT_SRRS_ELS_CMPL_OK;
+		break;
+	case EFC_HW_SRRS_ELS_REQ_FAIL:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_FAIL;
+		break;
+	case EFC_HW_SRRS_ELS_CMPL_FAIL:
+		sm_event = EFC_EVT_SRRS_ELS_CMPL_FAIL;
+		break;
+	case EFC_HW_SRRS_ELS_REQ_RJT:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_RJT;
+		break;
+	case EFC_HW_ELS_REQ_ABORTED:
+		sm_event = EFC_EVT_ELS_REQ_ABORTED;
+		break;
+	default:
+		efc_log_test(efc, "unhandled event %#x\n", evt);
+		return;
+	}
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, sm_event, arg);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void efc_node_post_shutdown(struct efc_node_s *node,
+			    u32 evt, void *arg)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_SHUTDOWN, arg);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
diff --git a/drivers/scsi/elx/libefc/efc_node.h b/drivers/scsi/elx/libefc/efc_node.h
new file mode 100644
index 000000000000..96a4964b74c1
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.h
@@ -0,0 +1,196 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFC_NODE_H__)
+#define __EFC_NODE_H__
+#include "scsi/fc/fc_ns.h"
+
+#define EFC_NODEDB_PAUSE_FABRIC_LOGIN	BIT(0)
+#define EFC_NODEDB_PAUSE_NAMESERVER	BIT(1)
+#define EFC_NODEDB_PAUSE_NEW_NODES	BIT(2)
+
+static inline void
+efc_node_evt_set(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt,
+		 const char *handler)
+{
+	struct efc_node_s *node = ctx->app;
+
+	if (evt == EFC_EVT_ENTER) {
+		strncpy(node->current_state_name, handler,
+			sizeof(node->current_state_name));
+	} else if (evt == EFC_EVT_EXIT) {
+		strncpy(node->prev_state_name, node->current_state_name,
+			sizeof(node->prev_state_name));
+		strncpy(node->current_state_name, "invalid",
+			sizeof(node->current_state_name));
+	}
+	node->prev_evt = node->current_evt;
+	node->current_evt = evt;
+}
+
+/**
+ * @brief hold frames in pending frame list
+ *
+ * Unsolicited receive frames are held on the node pending frame list,
+ * rather than being processed.
+ *
+ * @param node pointer to node structure
+ *
+ * @return none
+ */
+
+static inline void
+efc_node_hold_frames(struct efc_node_s *node)
+{
+	efc_assert(node);
+	node->hold_frames = true;
+}
+
+/**
+ * @brief accept frames
+ *
+ * Unsolicited receive frames are processed rather than being held on the
+ * node pending frame list.
+ *
+ * @param node pointer to node structure
+ *
+ * @return none
+ */
+
+static inline void
+efc_node_accept_frames(struct efc_node_s *node)
+{
+	efc_assert(node);
+	node->hold_frames = false;
+}
+
+extern int
+efc_node_create_pool(struct efc_lport *efc, u32 node_count);
+extern void
+efc_node_free_pool(struct efc_lport *efc);
+extern struct efc_node_s *
+efc_node_get_instance(struct efc_lport *efc, u32 instance);
+
+/**
+ * @brief Node initiator/target enable defines
+ *
+ * All combinations of the SLI port (sport) initiator/target enable, and remote
+ * node initiator/target enable are enumerated.
+ *
+ */
+
+enum efc_node_enable_e {
+	EFC_NODE_ENABLE_x_TO_x,
+	EFC_NODE_ENABLE_x_TO_T,
+	EFC_NODE_ENABLE_x_TO_I,
+	EFC_NODE_ENABLE_x_TO_IT,
+	EFC_NODE_ENABLE_T_TO_x,
+	EFC_NODE_ENABLE_T_TO_T,
+	EFC_NODE_ENABLE_T_TO_I,
+	EFC_NODE_ENABLE_T_TO_IT,
+	EFC_NODE_ENABLE_I_TO_x,
+	EFC_NODE_ENABLE_I_TO_T,
+	EFC_NODE_ENABLE_I_TO_I,
+	EFC_NODE_ENABLE_I_TO_IT,
+	EFC_NODE_ENABLE_IT_TO_x,
+	EFC_NODE_ENABLE_IT_TO_T,
+	EFC_NODE_ENABLE_IT_TO_I,
+	EFC_NODE_ENABLE_IT_TO_IT,
+};
+
+static inline enum efc_node_enable_e
+efc_node_get_enable(struct efc_node_s *node)
+{
+	u32 retval = 0;
+
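+	/*
+	 * bit 3: sport initiator enabled, bit 2: sport target enabled,
+	 * bit 1: remote node is an initiator, bit 0: remote node is a target;
+	 * the packed value maps directly onto enum efc_node_enable_e.
+	 */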
+	if (node->sport->enable_ini)
+		retval |= (1U << 3);
+	if (node->sport->enable_tgt)
+		retval |= (1U << 2);
+	if (node->init)
+		retval |= (1U << 1);
+	if (node->targ)
+		retval |= (1U << 0);
+	return (enum efc_node_enable_e)retval;
+}
+
+extern int
+efc_node_check_els_req(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg,
+		       u8 cmd, void *(*efc_node_common_func)(const char *,
+		       struct efc_sm_ctx_s *, enum efc_sm_event_e, void *),
+		       const char *funcname);
+extern int
+efc_node_check_ns_req(struct efc_sm_ctx_s *ctx,
+		      enum efc_sm_event_e evt, void *arg,
+		      u16 cmd, void *(*efc_node_common_func)(const char *,
+		      struct efc_sm_ctx_s *, enum efc_sm_event_e, void *),
+		      const char *funcname);
+extern int
+efc_node_attach(struct efc_node_s *node);
+extern struct efc_node_s *
+efc_node_alloc(struct efc_sli_port_s *sport, u32 port_id,
+		bool init, bool targ);
+extern int
+efc_node_free(struct efc_node_s *efc);
+extern void
+efc_node_force_free(struct efc_node_s *efc);
+extern void
+efc_node_update_display_name(struct efc_node_s *node);
+void efc_node_post_event(struct efc_node_s *node, enum efc_sm_event_e evt,
+			 void *arg);
+
+extern void *
+__efc_node_shutdown(struct efc_sm_ctx_s *ctx,
+		    enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_node_wait_node_free(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_node_wait_els_shutdown(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg);
+extern void
+efc_node_save_sparms(struct efc_node_s *node, void *payload);
+extern void
+efc_node_transition(struct efc_node_s *node,
+		    void *(*state)(struct efc_sm_ctx_s *,
+		    enum efc_sm_event_e, void *), void *data);
+extern void *
+__efc_node_common(const char *funcname, struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg);
+
+extern void
+efc_node_initiate_cleanup(struct efc_node_s *node);
+
+extern void
+efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name);
+extern uint64_t
+efc_node_get_wwpn(struct efc_node_s *node);
+
+extern void
+efc_node_pause(struct efc_node_s *node,
+	       void *(*state)(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg));
+extern int
+efc_node_resume(struct efc_node_s *node);
+extern void *
+__efc_node_paused(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg);
+extern int
+efc_node_active_ios_empty(struct efc_node_s *node);
+extern void
+efc_node_send_ls_io_cleanup(struct efc_node_s *node);
+
+extern int
+efc_els_io_list_empty(struct efc_node_s *node, struct list_head *list);
+
+extern int
+efc_process_node_pending(struct efc_node_s *node);
+
+#endif /* __EFC_NODE_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 13/32] elx: libefc: Fabric node state machine interfaces
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (11 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 12/32] elx: libefc: Remote node " James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 14/32] elx: libefc: FC node ELS and state handling James Smart
                   ` (19 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Fabric node initialization and logins.
- Name/Directory Services node.
- Fabric Controller node to process RSCN events.

These are all interactions with remote ports that correspond
to well-known fabric entities

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
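Note for reviewers (not part of the commit message): every handler added
below follows the same state-machine calling convention. A state is a
function of type
    void *(*)(struct efc_sm_ctx_s *, enum efc_sm_event_e, void *);
efc_node_post_event() delivers an event to the node's current state, and
efc_node_transition() switches states; the handlers below expect
EFC_EVT_EXIT to be posted to the old state and EFC_EVT_ENTER to the new
one around the switch. The standalone program below is only a rough,
illustrative sketch of that convention for orientation; the sketch_*
names are invented for illustration and are not part of libefc.

#include <stdio.h>

enum sketch_event { SK_EVT_ENTER, SK_EVT_EXIT, SK_EVT_FLOGI_OK };

struct sketch_ctx;
typedef void *(*sketch_state_fn)(struct sketch_ctx *ctx,
				 enum sketch_event evt, void *arg);

struct sketch_ctx {
	sketch_state_fn current;	/* the current state is just a handler */
};

/* Deliver an event to whatever state the context is currently in. */
static void sketch_post_event(struct sketch_ctx *ctx,
			      enum sketch_event evt, void *arg)
{
	ctx->current(ctx, evt, arg);
}

/* Switch states: EXIT the old state, install the new one, ENTER it. */
static void sketch_transition(struct sketch_ctx *ctx, sketch_state_fn next)
{
	if (ctx->current)
		sketch_post_event(ctx, SK_EVT_EXIT, NULL);
	ctx->current = next;
	sketch_post_event(ctx, SK_EVT_ENTER, NULL);
}

static void *sketch_wait_rsp(struct sketch_ctx *ctx,
			     enum sketch_event evt, void *arg)
{
	if (evt == SK_EVT_ENTER)
		printf("wait_rsp: waiting for FLOGI response\n");
	else if (evt == SK_EVT_FLOGI_OK)
		printf("wait_rsp: FLOGI response received\n");
	return NULL;
}

/* Loosely modeled on __efc_fabric_init(): on ENTER, send FLOGI and wait. */
static void *sketch_init(struct sketch_ctx *ctx,
			 enum sketch_event evt, void *arg)
{
	if (evt == SK_EVT_ENTER) {
		printf("init: send FLOGI\n");
		sketch_transition(ctx, sketch_wait_rsp);
	}
	return NULL;
}

int main(void)
{
	struct sketch_ctx ctx = { 0 };

	sketch_transition(&ctx, sketch_init);
	sketch_post_event(&ctx, SK_EVT_FLOGI_OK, NULL);
	return 0;
}

Keeping each state as a plain function, rather than a state enum plus one
large switch, is what lets the handlers in this patch share common event
handling by delegating to __efc_fabric_common()/__efc_node_common() in
their default cases.
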
 drivers/scsi/elx/libefc/efc_fabric.c | 2252 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_fabric.h |  116 ++
 2 files changed, 2368 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h

diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c
new file mode 100644
index 000000000000..d8e77df5d6ff
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.c
@@ -0,0 +1,2252 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * This file implements remote node state machines for:
+ * - Fabric logins.
+ * - Fabric controller events.
+ * - Name/directory services interaction.
+ * - Point-to-point logins.
+ */
+
+/*
+ * fabric_sm Node State Machine: Fabric States
+ * ns_sm Node State Machine: Name/Directory Services States
+ * p2p_sm Node State Machine: Point-to-Point Node States
+ */
+
+#include "efc.h"
+#include "efc_fabric.h"
+#include "efc_device.h"
+
+static void efc_fabric_initiate_shutdown(struct efc_node_s *node);
+static void *__efc_fabric_common(const char *funcname,
+				 struct efc_sm_ctx_s *ctx,
+				  enum efc_sm_event_e evt, void *arg);
+static int efc_start_ns_node(struct efc_sli_port_s *sport);
+static int efc_start_fabctl_node(struct efc_sli_port_s *sport);
+static int efc_process_gidpt_payload(struct efc_node_s *node,
+				     void *gidpt, u32 gidpt_len);
+static void efc_process_rscn(struct efc_node_s *node,
+			     struct efc_node_cb_s *cbdata);
+static uint64_t efc_get_wwpn(struct fc_els_flogi *sp);
+static void gidpt_delay_timer_cb(struct timer_list *t);
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric node state machine: Initial state.
+ *
+ * @par Description
+ * Send an FLOGI to a well-known fabric.
+ *
+ * @param ctx Remote node sm context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_fabric_init(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt,
+		  void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_REENTER:	/* not sure why we're getting these ... */
+		efc_log_debug(efc, ">>> reenter !!\n");
+		/* fall through */
+	case EFC_EVT_ENTER:
+		/*  sm: / send FLOGI */
+		efc->tt.els_send(efc, node, ELS_FLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Set sport topology.
+ *
+ * @par Description
+ * Set sport topology.
+ *
+ * @param node Pointer to the node for which the topology is set.
+ * @param topology Topology to set.
+ *
+ * @return Returns None.
+ */
+void
+efc_fabric_set_topology(struct efc_node_s *node,
+			enum efc_sport_topology_e topology)
+{
+	node->sport->topology = topology;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Notify sport topology.
+ *
+ * @par Description
+ * Notify all other nodes on the sport of the sport's topology.
+ *
+ * @param node Pointer to the node whose sport topology is being announced.
+ *
+ * @return Returns None.
+ */
+void
+efc_fabric_notify_topology(struct efc_node_s *node)
+{
+	struct efc_node_s *tmp_node;
+	struct efc_node_s *next;
+	enum efc_sport_topology_e topology = node->sport->topology;
+
+	/*
+	 * now loop through the nodes in the sport
+	 * and send topology notification
+	 */
+	list_for_each_entry_safe(tmp_node, next, &node->sport->node_list,
+				 list_entry) {
+		if (tmp_node != node) {
+			efc_node_post_event(tmp_node,
+					    EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
+					    (void *)topology);
+		}
+	}
+}
+
+static bool efc_rnode_is_nport(struct fc_els_flogi *rsp)
+{
+	return !(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_FPORT);
+}
+
+static bool efc_rnode_is_npiv_capable(struct fc_els_flogi *rsp)
+{
+	return !!(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_NPIV_ACC);
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric node state machine: Wait for an FLOGI response.
+ *
+ * @par Description
+ * Wait for an FLOGI response event.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+
+		memcpy(node->sport->domain->flogi_service_params,
+		       cbdata->els_rsp.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* Check to see if the fabric is an F_PORT or an N_PORT */
+		if (!efc_rnode_is_nport(cbdata->els_rsp.virt)) {
+			/* sm: if not nport / efc_domain_attach */
+			/* ext_status has the fc_id, attach domain */
+			if (efc_rnode_is_npiv_capable(cbdata->els_rsp.virt)) {
+				efc_log_debug(node->efc,
+					      " NPIV is enabled at switch side\n");
+				//node->efc->sw_feature_cap |= 1<<10;
+			}
+			efc_fabric_set_topology(node,
+						EFC_SPORT_TOPOLOGY_FABRIC);
+			efc_fabric_notify_topology(node);
+			efc_assert(!node->sport->domain->attached, NULL);
+			efc_domain_attach(node->sport->domain,
+					  cbdata->ext_status);
+			efc_node_transition(node,
+					    __efc_fabric_wait_domain_attach,
+					    NULL);
+			break;
+		}
+
+		/*  sm: if nport and p2p_winner / efc_domain_attach */
+		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
+		if (efc_p2p_setup(node->sport)) {
+			node_printf(node,
+				    "p2p setup failed, shutting down node\n");
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+			break;
+		}
+
+		if (node->sport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					     NULL);
+			if (node->sport->domain->attached &&
+			    !node->sport->domain->domain_notify_pend) {
+				/*
+				 * already attached,
+				 * just send ATTACH_OK
+				 */
+				node_printf(node,
+					    "p2p winner, domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/*
+			 * peer is p2p winner;
+			 * PLOGI will be received on the
+			 * remote SID=1 node;
+			 * this node has served its purpose
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+
+		break;
+	}
+
+	case EFC_EVT_ELS_REQ_ABORTED:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		struct efc_sli_port_s *sport = node->sport;
+		/*
+		 * with these errors, we have no recovery,
+		 * so shutdown the sport, leave the link
+		 * up and the domain ready
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		node_printf(node,
+			    "FLOGI failed evt=%s, shutting down sport [%s]\n",
+			    efc_sm_event_name(evt), sport->display_name);
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric node state machine: Initial state for a virtual port.
+ *
+ * @par Description
+ * State entered when a virtual port is created. Send FDISC.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_vport_fabric_init(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send FDISC */
+		efc->tt.els_send(efc, node, ELS_FDISC,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+
+		efc_node_transition(node, __efc_fabric_fdisc_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric node state machine: Wait for an FDISC response
+ *
+ * @par Description
+ * Used for a virtual port. Waits for an FDISC response.
+ * If OK, issue a HW port attach.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		/* fc_id is in ext_status */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / efc_sport_attach */
+		efc_sport_attach(node->sport, cbdata->ext_status);
+		efc_node_transition(node, __efc_fabric_wait_domain_attach,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_log_err(node->efc, "FDISC failed, shutting down sport\n");
+		/* sm: / shutdown sport */
+		efc_sm_post_event(&node->sport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric node state machine: Wait for a domain/sport attach event.
+ *
+ * @par Description
+ * Waits for a domain/sport attach event.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx_s *ctx,
+				enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+	case EFC_EVT_SPORT_ATTACH_OK: {
+		int rc;
+
+		rc = efc_start_ns_node(node->sport);
+		if (rc)
+			return NULL;
+
+		/* sm: if enable_ini / start fabctl node */
+		/* Instantiate the fabric controller (sends SCR) */
+		if (node->sport->enable_rscn) {
+			rc = efc_start_fabctl_node(node->sport);
+			if (rc)
+				return NULL;
+		}
+		efc_node_transition(node, __efc_fabric_idle, NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric node state machine: Fabric node is idle.
+ *
+ * @par Description
+ * Wait for fabric node events.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_fabric_idle(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt,
+		  void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Initialize.
+ *
+ * @par Description
+ * A PLOGI is sent to the well-known name/directory services node.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_init(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_plogi_wait_rsp, NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Wait for a PLOGI response.
+ *
+ * @par Description
+ * Waits for a response from PLOGI to name services node, then issues a
+ * node attach request to the HW.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		/* Save service parameters */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_ns_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Wait for a node attach completion.
+ *
+ * @par Description
+ * Waits for a node attach completion, then issues an RFTID name services
+ * request.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		/* sm: / send RFTID */
+		efc->tt.els_send_ct(efc, node, FC_RCTL_ELS_REQ,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_rftid_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Wait for a domain/sport/node attach completion, then
+ * shutdown.
+ *
+ * @par Description
+ * Waits for a domain/sport/node attach completion, then shuts
+ * node down.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx_s *ctx,
+				      enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	/* ignore shutdown event as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Wait for an RFTID response event.
+ *
+ * @par Description
+ * Waits for an RFTID response event; on success, an RFFID name services
+ * request is issued.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFT_ID,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / send RFFID */
+		efc->tt.els_send_ct(efc, node, FC_NS_RFF_ID,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_rffid_wait_rsp, NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Wait for an RFFID response event.
+ *
+ * @par Description
+ * Waits for an RFFID response event; if RSCN handling is enabled, a GIDPT
+ * name services request is issued; otherwise, the node transitions to idle.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFF_ID,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		if (node->sport->enable_rscn) {
+			/* sm: if enable_rscn / send GIDPT */
+			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+					EFC_FC_ELS_DEFAULT_RETRIES);
+
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		} else {
+			/* if 'T' only, we're done, go to idle */
+			efc_node_transition(node, __efc_ns_idle, NULL);
+		}
+		break;
+	}
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Wait for a GIDPT response.
+ *
+ * @par Description
+ * Wait for a GIDPT response from the name server. Process the FC_IDs that are
+ * reported by creating new remote ports, as needed.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_GID_PT,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / process GIDPT payload */
+		efc_process_gidpt_payload(node, cbdata->els_rsp.virt,
+					  cbdata->els_rsp.len);
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	{
+		/* not much we can do; will retry with the next RSCN */
+		node_printf(node, "GID_PT failed to complete\n");
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	/* if receive RSCN here, queue up another discovery processing */
+	case EFC_EVT_RSCN_RCVD: {
+		node_printf(node, "RSCN received during GID_PT processing\n");
+		node->rscn_pending = true;
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Idle state.
+ *
+ * @par Description
+ * Idle. Waits for RSCN received events (posted from the fabric controller
+ * node) and restarts the GIDPT name services query and processing.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_idle(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		if (!node->rscn_pending)
+			break;
+
+		node_printf(node, "RSCN pending, restart discovery\n");
+		node->rscn_pending = false;
+
+			/* fall through */
+
+	case EFC_EVT_RSCN_RCVD: {
+		/* sm: / send GIDPT */
+		/*
+		 * If target RSCN processing is enabled,
+		 * and this is target only (not initiator),
+		 * and tgt_rscn_delay is non-zero,
+		 * then we delay issuing the GID_PT
+		 */
+		if (efc->tgt_rscn_delay_msec != 0 &&
+		    !node->sport->enable_ini && node->sport->enable_tgt &&
+		    enable_target_rscn(efc)) {
+			efc_node_transition(node, __efc_ns_gidpt_delay, NULL);
+		} else {
+			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+					EFC_FC_ELS_DEFAULT_RETRIES);
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @brief Handle GIDPT delay timer callback
+ *
+ * @par Description
+ * Post an EFC_EVT_GIDPT_DELAY_EXPIRED event to the node that owns the timer.
+ *
+ * @param t Pointer to the timer that expired.
+ *
+ * @return None.
+ */
+static void
+gidpt_delay_timer_cb(struct timer_list *t)
+{
+	struct efc_node_s *node = from_timer(node, t, gidpt_delay_timer);
+
+	del_timer(&node->gidpt_delay_timer);
+
+	efc_node_post_event(node, EFC_EVT_GIDPT_DELAY_EXPIRED, NULL);
+}
+
+/**
+ * @ingroup ns_sm
+ * @brief Name services node state machine: Delayed GIDPT.
+ *
+ * @par Description
+ * Waiting for GIDPT delay to expire before submitting GIDPT to name server.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_ns_gidpt_delay(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		time_t delay_msec;
+
+		/*
+		 * Compute the delay time.
+		 * Use tgt_rscn_delay; if the time since the last GIDPT
+		 * is less than tgt_rscn_period, use tgt_rscn_period instead.
+		 */
+		delay_msec = efc->tgt_rscn_delay_msec;
+		if ((jiffies_to_msecs(jiffies) - node->time_last_gidpt_msec)
+		    < efc->tgt_rscn_period_msec) {
+			delay_msec = efc->tgt_rscn_period_msec;
+		}
+		timer_setup(&node->gidpt_delay_timer, &gidpt_delay_timer_cb,
+			    0);
+		mod_timer(&node->gidpt_delay_timer,
+			  jiffies + msecs_to_jiffies(delay_msec));
+
+		break;
+	}
+
+	case EFC_EVT_GIDPT_DELAY_EXPIRED:
+		node->time_last_gidpt_msec = jiffies_to_msecs(jiffies);
+
+		efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_gidpt_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_RSCN_RCVD: {
+		efc_log_debug(efc,
+			      "RSCN received while in GIDPT delay - no action\n");
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric controller node state machine: Initial state.
+ *
+ * @par Description
+ * Send an SCR to the well-known fabric controller address; no explicit
+ * login to the fabric controller is required.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_fabctl_init(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* no need to login to fabric controller, just send SCR */
+		efc->tt.els_send(efc, node, ELS_SCR,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric controller node state machine: Wait for a node attach request
+ * to complete.
+ *
+ * @par Description
+ * Wait for a node attach to complete. If successful, issue an SCR
+ * to the fabric controller, subscribing to all RSCN.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ *
+ */
+void *
+__efc_fabctl_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		/* sm: / send SCR */
+		efc->tt.els_send(efc, node, ELS_SCR,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric controller node state machine:
+ * Wait for an SCR response from the
+ * fabric controller.
+ *
+ * @par Description
+ * Waits for an SCR response from the fabric controller.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_SCR,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric controller node state machine: Ready.
+ *
+ * @par Description
+ * In this state, the fabric controller sends an RSCN, which is received
+ * by this node, forwarded to the name services node object, and an
+ * LS_ACC is sent in response.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_fabctl_ready(struct efc_sm_ctx_s *ctx,
+		   enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * sm: / process RSCN (forward to name services node),
+		 * send LS_ACC
+		 */
+		efc_process_rscn(node, cbdata);
+		efc->tt.els_send_resp(efc, node, ELS_LS_ACC,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_fabctl_wait_ls_acc_cmpl,
+				    NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric controller node state machine: Wait for LS_ACC.
+ *
+ * @par Description
+ * Waits for the LS_ACC sent to the fabric controller to complete.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Initiate fabric node shutdown.
+ *
+ * @param node Node for which shutdown is initiated.
+ *
+ * @return Returns None.
+ */
+
+static void
+efc_fabric_initiate_shutdown(struct efc_node_s *node)
+{
+	int rc;
+	struct efc_lport *efc = node->efc;
+
+	efc->tt.scsi_io_alloc_disable(efc, node);
+
+	if (node->attached) {
+		/* issue hw node free; don't care if succeeds right away
+		 * or sometime later, will check node->attached later in
+		 * shutdown process
+		 */
+		rc = efc->tt.hw_node_detach(efc, &node->rnode);
+		if (rc != EFC_HW_RTN_SUCCESS &&
+		    rc != EFC_HW_RTN_SUCCESS_SYNC) {
+			node_printf(node, "Failed freeing HW node, rc=%d\n",
+				    rc);
+		}
+	}
+	/*
+	 * node has either been detached or is in the process of being detached,
+	 * call common node's initiate cleanup function
+	 */
+	efc_node_initiate_cleanup(node);
+}
+
+/**
+ * @ingroup fabric_sm
+ * @brief Fabric node state machine: Handle the common fabric node events.
+ *
+ * @param funcname Function name text.
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+static void *
+__efc_fabric_common(const char *funcname, struct efc_sm_ctx_s *ctx,
+		    enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = NULL;
+
+	efc_assert(ctx, NULL);
+	efc_assert(ctx->app, NULL);
+	node = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	case EFC_EVT_SHUTDOWN:
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+/**
+ * @brief Return the node's WWPN as a uint64_t.
+ *
+ * @par Description
+ * The WWPN is computed from service parameters, and returned as a uint64_t.
+ *
+ * @param sp Pointer to service parameters.
+ *
+ * @return Returns WWPN.
+ *
+ */
+
+static uint64_t
+efc_get_wwpn(struct fc_els_flogi *sp)
+{
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
+/**
+ * @brief Return TRUE if the remote node is the point-to-point winner.
+ *
+ * @par Description
+ * Compares WWPNs. Returns TRUE if the remote node's WWPN is numerically
+ * higher than the local node's WWPN.
+ *
+ * @param sport Pointer to the sport object.
+ *
+ * @return
+ * - 0, if the remote node is the loser.
+ * - 1, if the remote node is the winner.
+ * - (-1), if remote node is neither the loser nor the winner
+ *   (WWPNs match)
+ */
+
+static int
+efc_rnode_is_winner(struct efc_sli_port_s *sport)
+{
+	struct fc_els_flogi *remote_sp;
+	u64 remote_wwpn;
+	u64 local_wwpn = sport->wwpn;
+	u64 wwn_bump = 0;
+
+	remote_sp = (struct fc_els_flogi *)sport->domain->flogi_service_params;
+	remote_wwpn = efc_get_wwpn(remote_sp);
+
+	local_wwpn ^= wwn_bump;
+
+	efc_log_debug(sport->efc, "r: %llx\n",
+		      be64_to_cpu(remote_sp->fl_wwpn));
+	efc_log_debug(sport->efc, "l: %llx\n", local_wwpn);
+
+	if (remote_wwpn == local_wwpn) {
+		efc_log_warn(sport->efc,
+			     "WWPN of remote node [%08x %08x] matches local WWPN\n",
+			     (u32)(local_wwpn >> 32ll),
+			     (u32)local_wwpn);
+		return -1;
+	}
+
+	return (remote_wwpn > local_wwpn);
+}
+
+/**
+ * @ingroup p2p_sm
+ * @brief Point-to-point state machine: Wait for the domain attach to complete.
+ *
+ * @par Description
+ * Once the domain attach has completed, a PLOGI is sent (if we're the
+ * winning point-to-point node).
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_sli_port_s *sport = node->sport;
+		struct efc_node_s *rnode;
+
+		/*
+		 * this transient node (SID=0 (recv'd FLOGI)
+		 * or DID=fabric (sent FLOGI))
+		 * is the p2p winner, will use a separate node
+		 * to send PLOGI to peer
+		 */
+		efc_assert(node->sport->p2p_winner, NULL);
+
+		rnode = efc_node_find(sport, node->sport->p2p_remote_port_id);
+		if (rnode) {
+			/*
+			 * the "other" transient p2p node has
+			 * already kicked off the
+			 * new node from which PLOGI is sent
+			 */
+			node_printf(node,
+				    "Node with fc_id x%x already exists\n",
+				    rnode->rnode.fc_id);
+		} else {
+			/*
+			 * create new node (SID=1, DID=2)
+			 * from which to send PLOGI
+			 */
+			rnode = efc_node_alloc(sport,
+					       sport->p2p_remote_port_id,
+						false, false);
+			if (!rnode) {
+				efc_log_err(efc, "node alloc failed\n");
+				return NULL;
+			}
+
+			efc_fabric_notify_topology(node);
+			/* sm: / allocate p2p remote node */
+			efc_node_transition(rnode, __efc_p2p_rnode_init,
+					    NULL);
+		}
+
+		/*
+		 * the transient node (SID=0 or DID=fabric)
+		 * has served its purpose
+		 */
+		if (node->rnode.fc_id == 0) {
+			/*
+			 * if this is the SID=0 node,
+			 * move to the init state in case peer
+			 * has restarted FLOGI discovery and FLOGI is pending
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		} else {
+			/*
+			 * if this is the DID=fabric node
+			 * (we initiated FLOGI), shut it down
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup p2p_sm
+ * @brief Point-to-point state machine: Remote node initialization state.
+ *
+ * @par Description
+ * This state is entered after winning point-to-point, and the remote node
+ * is instantiated.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_p2p_rnode_init(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp, NULL);
+		break;
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup p2p_sm
+ * @brief Point-to-point node state machine:
+ * Wait for the FLOGI accept completion.
+ *
+ * @par Description
+ * Wait for the FLOGI accept completion.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+
+		/* sm: if p2p_winner / domain_attach */
+		if (node->sport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					NULL);
+			if (!node->sport->domain->attached) {
+				node_printf(node, "Domain not attached\n");
+				efc_domain_attach(node->sport->domain,
+						  node->sport->p2p_port_id);
+			} else {
+				node_printf(node, "Domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/* this node has served its purpose;
+			 * we'll expect a PLOGI on a separate
+			 * node (remote SID=0x1); return this node
+			 * to init state in case peer
+			 * restarts discovery -- it may already
+			 * have (pending frames may exist).
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		}
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/*
+		 * LS_ACC failed, possibly due to link down;
+		 * shutdown node and wait
+		 * for FLOGI discovery to restart
+		 */
+		node_printf(node, "FLOGI LS_ACC failed, shutting down\n");
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_ABTS_RCVD: {
+		/* sm: / send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node,
+					 cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup p2p_sm
+ * @brief Point-to-point node state machine: Wait for a PLOGI response
+ * as a point-to-point winner.
+ *
+ * @par Description
+ * Wait for a PLOGI response from the remote node as a point-to-point winner.
+ * Submit node attach request to the HW.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx_s *ctx,
+			 enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
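+		/*
+		 * efc_node_attach() may complete synchronously
+		 * (EFC_HW_RTN_SUCCESS_SYNC); in that case no asynchronous
+		 * completion will arrive, so post EFC_EVT_NODE_ATTACH_OK
+		 * here. The same pattern is used for every node attach below.
+		 */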
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		node_printf(node, "PLOGI failed, shutting down\n");
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* if we're in external loopback mode, just send LS_ACC */
+		if (node->efc->external_loopback) {
+			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
+						be16_to_cpu(hdr->fh_ox_id));
+		} else {
+			/*
+			 * if this isn't external loopback,
+			 * pass to default handler
+			 */
+			__efc_fabric_common(__func__, ctx, evt, arg);
+		}
+		break;
+	}
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* We sent a PLOGI and, before its completion was seen,
+		 * received a PRLI from the remote node (WCQEs and RCQEs come
+		 * in on different queues, so the order of processing cannot
+		 * be assumed). Save the OX_ID so the PRLI response can be
+		 * sent after the attach, and continue to wait for the PLOGI
+		 * response.
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup p2p_sm
+ * @brief Point-to-point node state machine:
+ * Waiting on a response for a sent PLOGI.
+ *
+ * @par Description
+ * State is entered when the point-to-point winner has sent
+ * a PLOGI and is waiting for a response. Before receiving the
+ * response, a PRLI was received, implying that the PLOGI was
+ * successful.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx_s *ctx,
+				    enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login and only
+		 * need to wait for the PLOGI response to do the node attach;
+		 * then we can send the LS_ACC for the PRLI. During this time
+		 * we may receive FCP_CMNDs (possible since we've already sent
+		 * a PRLI and our peer may have accepted it), but we are not
+		 * waiting on any other unsolicited frames to continue with
+		 * the login process, so it does not hurt to hold frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup p2p_sm
+ * @brief Point-to-point node state machine:
+ * Wait for a point-to-point node attach
+ * to complete.
+ *
+ * @par Description
+ * Waits for the point-to-point node attach to complete.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_p2p_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node->ls_acc_io,
+					    node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PLOGI: /* Can't happen in P2P */
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node, __efc_d_port_logged_in,
+					    NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+	case EFC_EVT_PRLI_RCVD:
+		node_printf(node, "%s: PRLI received before node is attached\n",
+			    efc_sm_event_name(evt));
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PRLI);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @brief Start up the name services node.
+ *
+ * @par Description
+ * Allocates and starts up the name services node.
+ *
+ * @param sport Pointer to the sport structure.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+static int
+efc_start_ns_node(struct efc_sli_port_s *sport)
+{
+	struct efc_node_s *ns;
+
+	/* Instantiate a name services node */
+	ns = efc_node_find(sport, FC_FID_DIR_SERV);
+	if (!ns) {
+		ns = efc_node_alloc(sport, FC_FID_DIR_SERV, false, false);
+		if (!ns)
+			return -1;
+	}
+	/*
+	 * For a found ns node, should we be transitioning from here?
+	 * This breaks the rule that transitions occur only
+	 *  1. from within the state machine, or
+	 *  2. immediately after alloc
+	 */
+	if (ns->efc->nodedb_mask & EFC_NODEDB_PAUSE_NAMESERVER)
+		efc_node_pause(ns, __efc_ns_init);
+	else
+		efc_node_transition(ns, __efc_ns_init, NULL);
+	return 0;
+}
+
+/**
+ * @brief Start up the fabric controller node.
+ *
+ * @par Description
+ * Allocates and starts up the fabric controller node.
+ *
+ * @param sport Pointer to the sport structure.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+static int
+efc_start_fabctl_node(struct efc_sli_port_s *sport)
+{
+	struct efc_node_s *fabctl;
+
+	fabctl = efc_node_find(sport, FC_FID_FCTRL);
+	if (!fabctl) {
+		fabctl = efc_node_alloc(sport, FC_FID_FCTRL,
+					false, false);
+		if (!fabctl)
+			return -1;
+	}
+	/*
+	 * For a found fabctl node, should we be transitioning from here?
+	 * This breaks the rule that transitions occur only
+	 *  1. from within the state machine, or
+	 *  2. immediately after alloc
+	 */
+	efc_node_transition(fabctl, __efc_fabctl_init, NULL);
+	return 0;
+}
+
+/**
+ * @brief Process the GIDPT payload.
+ *
+ * @par Description
+ * The GIDPT payload is parsed, and new nodes are created, as needed.
+ *
+ * @param node Pointer to the node structure.
+ * @param data Pointer to the CT payload holding the GID_PT response.
+ * @param gidpt_len Payload length.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+static int
+efc_process_gidpt_payload(struct efc_node_s *node,
+			  void *data, u32 gidpt_len)
+{
+	u32 i, j;
+	struct efc_node_s *newnode;
+	struct efc_sli_port_s *sport = node->sport;
+	struct efc_lport *efc = node->efc;
+	u32 port_id = 0, port_count, portlist_count;
+	struct efc_node_s *n;
+	struct efc_node_s **active_nodes;
+	int residual;
+	struct fc_ct_hdr *hdr = data;
+	struct fc_gid_pn_resp *gidpt = data + sizeof(*hdr);
+
+	residual = be16_to_cpu(hdr->ct_mr_size);
+
+	if (residual != 0)
+		efc_log_debug(node->efc, "residual is %u words\n", residual);
+
+	if (be16_to_cpu(hdr->ct_cmd) == FC_FS_RJT) {
+		node_printf(node,
+			    "GIDPT request failed: rsn x%x rsn_expl x%x\n",
+			    hdr->ct_reason, hdr->ct_explan);
+		return -1;
+	}
+
+	portlist_count = (gidpt_len - sizeof(*hdr)) / sizeof(*gidpt);
+
+	/* Count the number of nodes */
+	port_count = 0;
+	list_for_each_entry(n, &sport->node_list, list_entry) {
+		port_count++;
+	}
+
+	/* Allocate a buffer for all nodes */
+	active_nodes = kcalloc(port_count, sizeof(*active_nodes), GFP_ATOMIC);
+	if (!active_nodes) {
+		node_printf(node, "active_nodes allocation failed\n");
+		return -1;
+	}
+
+	/* Fill buffer with fc_id of active nodes */
+	i = 0;
+	list_for_each_entry(n, &sport->node_list, list_entry) {
+		port_id = n->rnode.fc_id;
+		switch (port_id) {
+		case FC_FID_FLOGI:
+		case FC_FID_FCTRL:
+		case FC_FID_DIR_SERV:
+			break;
+		default:
+			if (port_id != FC_FID_DOM_MGR)
+				active_nodes[i++] = n;
+			break;
+		}
+	}
+
+	/* update the active nodes buffer */
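+	/*
+	 * Each fp_fid listed by the name server is still present on the
+	 * fabric, so clear its slot; only departed nodes remain in
+	 * active_nodes[]. FC_NS_FID_LAST flags the final entry of the
+	 * GID_PT accept payload.
+	 */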
+	for (i = 0; i < portlist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		for (j = 0; j < port_count; j++) {
+			if (active_nodes[j] &&
+			    port_id == active_nodes[j]->rnode.fc_id) {
+				active_nodes[j] = NULL;
+			}
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+
+	/* Those remaining in the active_nodes[] are now gone ! */
+	for (i = 0; i < port_count; i++) {
+		/*
+		 * if we're an initiator and the remote node
+		 * is a target, then post the node missing event.
+		 * if we're target and we have enabled
+		 * target RSCN, then post the node missing event.
+		 */
+		if (active_nodes[i]) {
+			if ((node->sport->enable_ini &&
+			     active_nodes[i]->targ) ||
+			     (node->sport->enable_tgt &&
+			     enable_target_rscn(efc))) {
+				efc_node_post_event(active_nodes[i],
+						    EFC_EVT_NODE_MISSING,
+						     NULL);
+			} else {
+				node_printf(node,
+					    "GID_PT: skipping non-tgt port_id x%06x\n",
+					    active_nodes[i]->rnode.fc_id);
+			}
+		}
+	}
+	kfree(active_nodes);
+
+	for (i = 0; i < portlist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		/* Don't create node for ourselves */
+		if (port_id != node->rnode.sport->fc_id) {
+			newnode = efc_node_find(sport, port_id);
+			if (!newnode) {
+				if (node->sport->enable_ini) {
+					newnode = efc_node_alloc(sport,
+								 port_id,
+								  false,
+								  false);
+					if (!newnode) {
+						efc_log_err(efc,
+							    "efc_node_alloc() failed\n");
+						return -1;
+					}
+					/*
+					 * send PLOGI automatically
+					 * if initiator
+					 */
+					efc_node_init_device(newnode, true);
+				}
+				continue;
+			}
+
+			if (node->sport->enable_ini && newnode->targ) {
+				efc_node_post_event(newnode,
+						    EFC_EVT_NODE_REFOUND,
+						    NULL);
+			}
+			/*
+			 * original code sends ADISC,
+			 * has notion of "refound"
+			 */
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+	return 0;
+}
+
+/**
+ * @brief Set up the domain point-to-point parameters.
+ *
+ * @par Description
+ * The remote node service parameters are examined, and various point-to-point
+ * variables are set.
+ *
+ * @param sport Pointer to the sport object.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efc_p2p_setup(struct efc_sli_port_s *sport)
+{
+	struct efc_lport *efc = sport->efc;
+	int rnode_winner;
+
+	rnode_winner = efc_rnode_is_winner(sport);
+
+	/* set sport flags to indicate p2p "winner" */
+	if (rnode_winner == 1) {
+		sport->p2p_remote_port_id = 0;
+		sport->p2p_port_id = 0;
+		sport->p2p_winner = false;
+	} else if (rnode_winner == 0) {
+		sport->p2p_remote_port_id = 2;
+		sport->p2p_port_id = 1;
+		sport->p2p_winner = true;
+	} else {
+		/* no winner; only okay if external loopback enabled */
+		if (sport->efc->external_loopback) {
+			/*
+			 * External loopback mode enabled;
+			 * local sport and remote node
+			 * will be registered with an NPortID = 1;
+			 */
+			efc_log_debug(efc,
+				      "External loopback mode enabled\n");
+			sport->p2p_remote_port_id = 1;
+			sport->p2p_port_id = 1;
+			sport->p2p_winner = true;
+		} else {
+			efc_log_warn(efc,
+				     "failed to determine p2p winner\n");
+			return rnode_winner;
+		}
+	}
+	return 0;
+}
+
+/**
+ * @brief Process the FABCTL node RSCN.
+ *
+ * @par Description
+ * Processes the FABCTL node RSCN payload,
+ * simply passes the event to the name server.
+ *
+ * @param node Pointer to the node structure.
+ * @param cbdata Callback data to pass forward.
+ *
+ * @return None.
+ */
+
+static void
+efc_process_rscn(struct efc_node_s *node, struct efc_node_cb_s *cbdata)
+{
+	struct efc_lport *efc = node->efc;
+	struct efc_sli_port_s *sport = node->sport;
+	struct efc_node_s *ns;
+
+	/* Forward this event to the name-services node */
+	ns = efc_node_find(sport, FC_FID_DIR_SERV);
+	if (ns)
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, cbdata);
+	else
+		efc_log_warn(efc, "can't find name server node\n");
+}
diff --git a/drivers/scsi/elx/libefc/efc_fabric.h b/drivers/scsi/elx/libefc/efc_fabric.h
new file mode 100644
index 000000000000..7c5f5e0ea5ba
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declarations for the interface exported by efc_fabric
+ */
+
+#if !defined(__EFCT_FABRIC_H__)
+#define __EFCT_FABRIC_H__
+#include <scsi/fc/fc_els.h>
+#include <scsi/fc/fc_fs.h>
+#include <scsi/fc/fc_ns.h>
+
+void *
+__efc_fabric_init(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabric_domain_attach_wait(struct efc_sm_ctx_s *ctx,
+				enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx_s *ctx,
+				enum efc_sm_event_e evt, void *arg);
+
+void *
+__efc_vport_fabric_init(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabric_wait_sport_attach(struct efc_sm_ctx_s *ctx,
+			       enum efc_sm_event_e evt, void *arg);
+
+void *
+__efc_ns_init(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx_s *ctx,
+				      enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_logo_wait_rsp(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx_s *ctx,
+			enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_idle(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt, void *arg);
+void *
+__efc_ns_gidpt_delay(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabctl_init(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabctl_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabctl_ready(struct efc_sm_ctx_s *ctx,
+		   enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg);
+void *
+__efc_fabric_idle(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg);
+
+void *
+__efc_p2p_rnode_init(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg);
+void *
+__efc_p2p_domain_attach_wait(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg);
+void *
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			      enum efc_sm_event_e evt, void *arg);
+void *
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx_s *ctx,
+			 enum efc_sm_event_e evt, void *arg);
+void *
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx_s *ctx,
+				    enum efc_sm_event_e evt, void *arg);
+void *
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg);
+void *
+__efc_p2p_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg);
+
+int
+efc_p2p_setup(struct efc_sli_port_s *sport);
+void
+efc_fabric_set_topology(struct efc_node_s *node,
+			enum efc_sport_topology_e topology);
+void efc_fabric_notify_topology(struct efc_node_s *node);
+
+#endif /* __EFCT_FABRIC_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 14/32] elx: libefc: FC node ELS and state handling
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (12 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 13/32] elx: libefc: Fabric " James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 15/32] elx: efct: Data structures and defines for hw operations James Smart
                   ` (18 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC node PRLI handling and state management

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
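Reviewer note (not part of the diff below): the PRLI handling added in
efc_device.c is driven entirely through the node state machine. A rough
sketch of how an unsolicited PRLI is expected to reach these handlers;
the hook name below is hypothetical and only for illustration:

    /* hypothetical unsolicited-frame hook, illustration only */
    static void demo_prli_rcvd(struct efc_node_s *node,
                               struct efc_node_cb_s *cbdata)
    {
            /*
             * The node's current state (e.g. __efc_d_port_logged_in or
             * __efc_d_wait_plogi_rsp) decides whether to respond to the
             * PRLI immediately or to defer the LS_ACC via
             * efc_send_ls_acc_after_attach() until the node attach
             * completes.
             */
            efc_node_post_event(node, EFC_EVT_PRLI_RCVD, cbdata);
    }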
 drivers/scsi/elx/libefc/efc_device.c | 1977 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_device.h |   72 ++
 2 files changed, 2049 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h

diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
new file mode 100644
index 000000000000..f7c1428ced6a
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.c
@@ -0,0 +1,1977 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * device_sm Node State Machine: Remote Device States
+ */
+
+#include "efc.h"
+#include "efc_device.h"
+#include "efc_fabric.h"
+
+static void *__efc_d_common(const char *funcname, struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg);
+static void *__efc_d_wait_del_node(struct efc_sm_ctx_s *ctx,
+				   enum efc_sm_event_e evt, void *arg);
+static void *__efc_d_wait_del_ini_tgt(struct efc_sm_ctx_s *ctx,
+				      enum efc_sm_event_e evt, void *arg);
+
+/**
+ * @ingroup device_sm
+ * @brief Send response to PRLI.
+ *
+ * <h3 class="desc">Description</h3>
+ * For device nodes, this function sends a PRLI response.
+ *
+ * @param node Pointer to the node object.
+ * @param ox_id OX_ID of the PRLI.
+ *
+ * @return None.
+ */
+
+void
+efc_d_send_prli_rsp(struct efc_node_s *node, u16 ox_id)
+{
+	struct efc_lport *efc = node->efc;
+	/* If the back-end doesn't want to talk to this initiator,
+	 * we send an LS_RJT
+	 */
+	if (node->sport->enable_tgt &&
+	    (efc->tt.scsi_validate_node(efc, node) == 0)) {
+		node_printf(node, "PRLI rejected by target-server\n");
+
+		efc->tt.send_ls_rjt(efc, node, ox_id,
+				    ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+	} else {
+		/*
+		 * sm: / process PRLI payload, send PRLI acc
+		 */
+		efc->tt.els_send_resp(efc, node, ELS_PRLI, ox_id);
+
+		/* Immediately go to ready state to avoid window where we're
+		 * waiting for the PRLI LS_ACC to complete while holding
+		 * FCP_CMNDs
+		 */
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+	}
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Initiate node shutdown
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_initiate_shutdown(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		/* assume no wait needed */
+		int rc = EFC_SCSI_CALL_COMPLETE;
+
+		efc->tt.scsi_io_alloc_disable(efc, node);
+
+		/* make necessary delete upcall(s) */
+		if (node->init && !node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (initiator) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->sport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+
+		} else if (node->targ && !node->init) {
+			efc_log_info(node->efc,
+				     "[%s] delete (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->sport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+
+		} else if (node->init && node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (I+T) WWPN %s WWNN %s\n",
+				node->display_name, node->wwpn, node->wwnn);
+			efc_node_transition(node, __efc_d_wait_del_ini_tgt,
+					    NULL);
+			if (node->sport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+			/* assume no wait needed */
+			rc = EFC_SCSI_CALL_COMPLETE;
+			if (node->sport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+		}
+
+		/* we've initiated the upcalls as needed, now kick off the node
+		 * detach to precipitate the aborting of outstanding exchanges
+		 * associated with said node
+		 *
+		 * Beware: if we've made upcall(s), we've already transitioned
+		 * to a new state by the time we execute this.
+		 * consider doing this before the upcalls?
+		 */
+		if (node->attached) {
+			/* issue hw node free; don't care if succeeds right
+			 * away or sometime later, will check node->attached
+			 * later in shutdown process
+			 */
+			rc = efc->tt.hw_node_detach(efc, &node->rnode);
+			if (rc != EFC_HW_RTN_SUCCESS &&
+			    rc != EFC_HW_RTN_SUCCESS_SYNC)
+				node_printf(node,
+					    "Failed freeing HW node, rc=%d\n",
+					rc);
+		}
+
+		/* if neither initiator nor target, proceed to cleanup */
+		if (!node->init && !node->targ) {
+			/*
+			 * node has either been detached or is in
+			 * the process of being detached,
+			 * call common node's initiate cleanup function
+			 */
+			efc_node_initiate_cleanup(node);
+		}
+		break;
+	}
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* Ignore, this can happen if an ELS is
+		 * aborted while in a delay/retry state
+		 */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Common device event handler.
+ *
+ * <h3 class="desc">Description</h3>
+ * For device nodes, this event handler manages default and common events.
+ *
+ * @param funcname Function name text.
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+static void *
+__efc_d_common(const char *funcname, struct efc_sm_ctx_s *ctx,
+	       enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = NULL;
+	struct efc_lport *efc = NULL;
+
+	efc_assert(ctx, NULL);
+	node = ctx->app;
+	efc_assert(node, NULL);
+	efc = node->efc;
+	efc_assert(efc, NULL);
+
+	switch (evt) {
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n",
+			      node->display_name, funcname,
+				efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine:
+ * Wait for a domain-attach completion in loop topology.
+ *
+ * <h3 class="desc">Description</h3>
+ * State waits for a domain-attach completion while in loop topology.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_loop(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		/* send PLOGI automatically if initiator */
+		efc_node_init_device(node, true);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief state: Wait for node resume event.
+ *
+ * State is entered when a node is in I+T mode and sends a
+ * delete initiator/target
+ * call to the target-server/initiator-client and needs to
+ * wait for that work to complete.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+static void *
+__efc_d_wait_del_ini_tgt(struct efc_sm_ctx_s *ctx,
+			 enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		/* Fall through */
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		efc_node_transition(node, __efc_d_wait_del_node, NULL);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief state: Wait for node resume event.
+ *
+ * State is entered when a node sends a delete initiator/target call to the
+ * target-server/initiator-client and needs to wait for that work to complete.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+static void *
+__efc_d_wait_del_node(struct efc_sm_ctx_s *ctx,
+		      enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		/* Fall through */
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		/*
+		 * node has either been detached or is in the process
+		 * of being detached,
+		 * call common node's initiate cleanup function
+		 */
+		efc_node_initiate_cleanup(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @brief Save the OX_ID for sending LS_ACC sometime later.
+ *
+ * <h3 class="desc">Description</h3>
+ * When deferring the response to an ELS request, the OX_ID of the request
+ * is saved using this function.
+ *
+ * @param node Pointer to the node object.
+ * @param hdr Pointer to the FC header.
+ * @param ls Defines which deferred LS_ACC to send: LS_ACC for PLOGI
+ * or LS_ACC for PRLI.
+ *
+ * @return None.
+ */
+
+void
+efc_send_ls_acc_after_attach(struct efc_node_s *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc_e ls)
+{
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+
+	efc_assert(node->send_ls_acc == EFC_NODE_SEND_LS_ACC_NONE);
+
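+	/*
+	 * The saved OX_ID is consumed later, for example by
+	 * __efc_d_wait_node_attach() or __efc_p2p_wait_node_attach(),
+	 * once EFC_EVT_NODE_ATTACH_OK arrives and the deferred LS_ACC
+	 * (PLOGI or PRLI accept) is finally sent.
+	 */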
+	node->ls_acc_oxid = ox_id;
+	node->send_ls_acc = ls;
+	node->ls_acc_did = ntoh24(hdr->fh_d_id);
+}
+
+/**
+ * @brief Process the PRLI payload.
+ *
+ * <h3 class="desc">Description</h3>
+ * The PRLI payload is processed; the initiator/target
+ * capabilities of the
+ * remote node are extracted and saved in the node object.
+ *
+ * @param node Pointer to the node object.
+ * @param prli Pointer to the PRLI payload.
+ *
+ * @return None.
+ */
+
+void
+efc_process_prli_payload(struct efc_node_s *node, void *prli)
+{
+	struct fc_els_spp *sp = prli + sizeof(struct fc_els_prli);
+
+	node->init = (sp->spp_flags & FCP_SPPF_INIT_FCN) != 0;
+	node->targ = (sp->spp_flags & FCP_SPPF_TARG_FCN) != 0;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for the PLOGI accept to complete.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:	/* PLOGI ACC completions */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_d_port_logged_in, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for the LOGO response.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_logo_rsp(struct efc_sm_ctx_s *ctx,
+		      enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* LOGO response received, now shut down the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_LOGO,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		node_printf(node,
+			    "LOGO sent (evt=%s), shutdown node\n",
+			efc_sm_event_name(evt));
+		/* sm: / post explicit logout */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				    NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @brief Initialize device node.
+ *
+ * Initialize a device node. Records whether a PLOGI should be sent on
+ * entry to __efc_d_init, then transitions the node to __efc_d_init
+ * (or pauses it if EFC_NODEDB_PAUSE_NEW_NODES is set).
+ *
+ * @param node Pointer to the node object.
+ * @param send_plogi Boolean indicating to send PLOGI command or not.
+ *
+ * @return None.
+ */
+
+void
+efc_node_init_device(struct efc_node_s *node, bool send_plogi)
+{
+	node->send_plogi = send_plogi;
+	if ((node->efc->nodedb_mask & EFC_NODEDB_PAUSE_NEW_NODES) &&
+	    (node->rnode.fc_id != FC_FID_DOM_MGR)) {
+		node->nodedb_state = __efc_d_init;
+		efc_node_transition(node, __efc_node_paused, NULL);
+	} else {
+		efc_node_transition(node, __efc_d_init, NULL);
+	}
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Initial node state for an initiator or
+ * a target.
+ *
+ * <h3 class="desc">Description</h3>
+ * This state is entered when a node is instantiated, either having been
+ * discovered from a name services query, or having received a PLOGI/FLOGI.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ * - EFC_EVT_ENTER: (uint8_t *) - 1 to send a PLOGI on entry
+ * (initiator only); 0 to not send a PLOGI on entry (initiator only).
+ * Not applicable for a target.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_init(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* check if we need to send PLOGI */
+		if (node->send_plogi) {
+			/* only send if we have initiator capability,
+			 * and domain is attached
+			 */
+			if (node->sport->enable_ini &&
+			    node->sport->domain->attached) {
+				efc->tt.els_send(efc, node, ELS_PLOGI,
+					EFC_FC_FLOGI_TIMEOUT_SEC,
+					EFC_FC_ELS_DEFAULT_RETRIES);
+
+				efc_node_transition(node,
+						    __efc_d_wait_plogi_rsp,
+						    NULL);
+			} else {
+				node_printf(node,
+					    "not sending plogi sport.ini=%d,",
+						node->sport->enable_ini);
+				node_printf(node, "domain attached=%d\n",
+					    node->sport->domain->attached);
+			}
+		}
+		break;
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		u32 d_id = ntoh24(hdr->fh_d_id);
+
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* domain already attached */
+		if (node->sport->domain->attached) {
+			rc = efc_node_attach(node);
+			efc_node_transition(node,
+					    __efc_d_wait_node_attach, NULL);
+			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
+				efc_node_post_event(node,
+						    EFC_EVT_NODE_ATTACH_OK,
+						    NULL);
+			}
+			break;
+		}
+
+		/* domain not attached; several possibilities: */
+		switch (node->sport->topology) {
+		case EFC_SPORT_TOPOLOGY_P2P:
+			/* we're not attached and sport is p2p,
+			 * need to attach
+			 */
+			efc_domain_attach(node->sport->domain, d_id);
+			efc_node_transition(node,
+					    __efc_d_wait_domain_attach,
+					    NULL);
+			break;
+		case EFC_SPORT_TOPOLOGY_FABRIC:
+			/* we're not attached and sport is fabric, domain
+			 * attach should have already been requested as part
+			 * of the fabric state machine, wait for it
+			 */
+			efc_node_transition(node, __efc_d_wait_domain_attach,
+					    NULL);
+			break;
+		case EFC_SPORT_TOPOLOGY_UNKNOWN:
+			/* Two possibilities:
+			 * 1. received a PLOGI before our FLOGI has completed
+			 *    (possible since completion comes in on another
+			 *    CQ), thus we don't know what we're connected to
+			 *    yet; transition to a state to wait for the
+			 *    fabric node to tell us;
+			 * 2. PLOGI received before link went down and we
+			 * haven't performed domain attach yet.
+			 * Note: we cannot distinguish between 1. and 2.
+			 * so have to assume PLOGI
+			 * was received after link back up.
+			 */
+			node_printf(node,
+				    "received PLOGI, unknown topology did=0x%x\n",
+				d_id);
+			efc_node_transition(node,
+					    __efc_d_wait_topology_notify,
+					    NULL);
+			break;
+		default:
+			node_printf(node,
+				    "received PLOGI, with unexpected topology %d\n",
+				node->sport->topology);
+			efc_assert(false, NULL);
+			break;
+		}
+		break;
+	}
+
+	case EFC_EVT_FDISC_RCVD: {
+		__efc_d_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	case EFC_EVT_FLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		u32 d_id = ntoh24(hdr->fh_d_id);
+
+		/* sm: / save sparams, send FLOGI acc */
+		memcpy(node->sport->domain->flogi_service_params,
+		       cbdata->payload->dma.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* send FC LS_ACC response, override s_id */
+		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
+		efc->tt.send_flogi_p2p_acc(efc, node,
+				be16_to_cpu(hdr->fh_ox_id), d_id);
+		if (efc_p2p_setup(node->sport)) {
+			node_printf(node,
+				    "p2p setup failed, shutting down node\n");
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		} else {
+			efc_node_transition(node,
+					    __efc_p2p_wait_flogi_acc_cmpl,
+					    NULL);
+		}
+
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and shut node down w/ "explicit logout"
+			 * so pending frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* note: problem, we're now expecting an ELS REQ completion
+		 * from both the LOGO and PLOGI
+		 */
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a
+			 * link down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+
+		/* Send LOGO */
+		node_printf(node, "FCP_CMND received, send LOGO\n");
+		if (efc->tt.els_send(efc, node, ELS_LOGO,
+				     EFC_FC_FLOGI_TIMEOUT_SEC,
+			EFC_FC_ELS_DEFAULT_RETRIES) == NULL) {
+			/*
+			 * failed to send LOGO, go ahead and cleanup node
+			 * anyways
+			 */
+			node_printf(node, "Failed to send LOGO\n");
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+		} else {
+			/* sent LOGO, wait for response */
+			efc_node_transition(node,
+					    __efc_d_wait_logo_rsp, NULL);
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait on a response for a sent PLOGI.
+ *
+ * <h3 class="desc">Description</h3>
+ * State is entered when an initiator-capable node has sent
+ * a PLOGI and is waiting for a response.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		/* Received a PLOGI with service parameters; go ahead and
+		 * attach the node. When the PLOGI that we sent ultimately
+		 * completes, it'll be a no-op.
+		 *
+		 * If there is an outstanding PLOGI sent, can we set a flag
+		 * to indicate that we don't want to retry it if it times out?
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+		/* sm: domain->attached / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* We sent a PLOGI and, before its completion was seen,
+		 * received a PRLI from the remote node (WCQEs and RCQEs come
+		 * in on different queues, so the order of processing cannot
+		 * be assumed). Save the OX_ID so the PRLI response can be
+		 * sent after the attach, and continue to wait for the PLOGI
+		 * response.
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_d_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: /* why don't we do a shutdown here?? */
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* Our PLOGI was rejected, this is ok in some cases */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* not logged in yet and outstanding PLOGI so don't send LOGO,
+		 * just drop
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Waiting on a response for a
+ *        sent PLOGI.
+ *
+ * <h3 class="desc">Description</h3>
+ * State is entered when an initiator-capable node has sent
+ * a PLOGI and is waiting for a response. Before receiving the
+ * response, a PRLI was received, implying that the PLOGI was
+ * successful.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx_s *ctx,
+				  enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login and only
+		 * need to wait for the PLOGI response to do the node attach;
+		 * then we can send the LS_ACC for the PRLI. During this time
+		 * we may receive FCP_CMNDs (possible since we've already sent
+		 * a PRLI and our peer may have accepted it), but we are not
+		 * waiting on any other unsolicited frames to continue with
+		 * the login process, so it does not hurt to hold frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for a domain attach.
+ *
+ * <h3 class="desc">Description</h3>
+ * Waits for a domain-attach complete ok event.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_domain_attach(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		efc_assert(node->sport->domain->attached, NULL);
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for topology
+ *        notification
+ *
+ * <h3 class="desc">Description</h3>
+ * Waits for topology notification from fabric node, then
+ * attaches domain and node.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_topology_notify(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg)
+{
+	int rc;
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY: {
+		enum efc_sport_topology_e topology =
+				(enum efc_sport_topology_e)(unsigned long)arg;
+
+		efc_assert(!node->sport->domain->attached, NULL);
+
+		efc_assert(node->send_ls_acc == EFC_NODE_SEND_LS_ACC_PLOGI,
+			   NULL);
+		node_printf(node, "topology notification, topology=%d\n",
+			    topology);
+
+		/* At the time the PLOGI was received, the topology was unknown,
+		 * so we didn't know which node would perform the domain attach:
+		 * 1. The node from which the PLOGI was sent (p2p) or
+		 * 2. The node to which the FLOGI was sent (fabric).
+		 */
+		if (topology == EFC_SPORT_TOPOLOGY_P2P) {
+			/* if this is p2p, need to attach to the domain using
+			 * the d_id from the PLOGI received
+			 */
+			efc_domain_attach(node->sport->domain,
+					  node->ls_acc_did);
+		}
+		/* else, if this is fabric, the domain attach
+		 * should be performed by the fabric node (node sending FLOGI);
+		 * just wait for attach to complete
+		 */
+
+		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		efc_assert(node->sport->domain->attached, NULL);
+		node_printf(node, "domain attach ok\n");
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for a node attach
+ * when found by a remote node.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			 enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PLOGI: {
+			/* sm: send_plogi_acc is set / send PLOGI acc */
+			/* Normal case for T, or I+T */
+			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
+							node->ls_acc_oxid);
+			efc_node_transition(node,
+					    __efc_d_wait_plogi_acc_cmpl,
+					     NULL);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node,
+					    node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node,
+					    __efc_d_port_logged_in, NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node,
+				    __efc_d_wait_attach_evt_shutdown, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for a node/domain
+ * attach then shutdown node.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx_s *ctx,
+				 enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Port is logged in.
+ *
+ * <h3 class="desc">Description</h3>
+ * This state is entered when a remote port has completed port login (PLOGI).
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process
+ * @param arg Per event optional argument
+ *
+ * @return Returns NULL.
+ */
+void *
+__efc_d_port_logged_in(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* Normal case for I or I+T */
+		if (node->sport->enable_ini &&
+		    !(node->rnode.fc_id != FC_FID_DOM_MGR)) {
+			/* sm: if enable_ini / send PRLI */
+			efc->tt.els_send(efc, node, ELS_PRLI,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+			/* can now expect ELS_REQ_OK/FAIL/RJT */
+		}
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/* Normal for T or I+T */
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_d_send_prli_rsp(node, be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_OK: {	/* PRLI response */
+		/* Normal case for I or I+T */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / process PRLI payload */
+		efc_process_prli_payload(node, cbdata->els_rsp.virt);
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {	/* PRLI response failed */
+		/* I, I+T, assume some link failure, shutdown node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT: {
+		/* PRLI rejected by remote
+		 * Normal for I, I+T (connected to an I)
+		 * Node doesn't want to be a target, stay here and wait for a
+		 * PRLI from the remote node
+		 * if it really wants to connect to us as target
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK: {
+		/* Normal T, I+T, target-server rejected the process login */
+		/* This would be received only in the case where we sent
+		 * LS_RJT for the PRLI, so do nothing.
+		 * (note: as T only we could shut down the node)
+		 */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc,
+		 * post implicit logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt),
+					node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for a LOGO accept.
+ *
+ * <h3 class="desc">Description</h3>
+ * Waits for a LOGO accept completion.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process
+ * @param arg Per event optional argument
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_s *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* sm: / post explicit logout */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Device is ready.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_device_ready(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	if (evt != EFC_EVT_FCP_CMD_RCVD)
+		node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node->fcp_enabled = true;
+		if (node->init) {
+			efc_log_info(efc,
+				     "[%s] found (initiator) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			if (node->sport->enable_tgt)
+				efc->tt.scsi_new_node(efc, node);
+		}
+		if (node->targ) {
+			efc_log_info(efc,
+				     "[%s] found (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			if (node->sport->enable_ini)
+				efc->tt.scsi_new_node(efc, node);
+		}
+		break;
+
+	case EFC_EVT_EXIT:
+		node->fcp_enabled = false;
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_IMPLICIT_LOGO, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		/* T, I+T: remote initiator is slow to get started */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+
+		/* sm: / send PRLI acc */
+
+		efc->tt.els_send_resp(efc, node, ELS_PRLI,
+					be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_PRLO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send PRLO acc */
+		efc->tt.els_send_resp(efc, node, ELS_PRLO,
+					be16_to_cpu(hdr->fh_ox_id));
+		/* need implicit logout? */
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_ADISC_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send ADISC acc */
+		efc->tt.els_send_resp(efc, node, ELS_ADISC,
+					be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: / process ABTS */
+		/* This should not happen */
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		break;
+
+	case EFC_EVT_NODE_MISSING:
+		if (node->sport->enable_rscn)
+			efc_node_transition(node, __efc_d_device_gone, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		/* T, or I+T, PRLI accept completed ok */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* T, or I+T, PRLI accept failed to complete */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		node_printf(node, "Failed to send PRLI LS_ACC\n");
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Node is gone (absent from GID_PT).
+ *
+ * <h3 class="desc">Description</h3>
+ * State entered when a node is detected as being gone (absent from GID_PT).
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process
+ * @param arg Per event optional argument
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_device_gone(struct efc_sm_ctx_s *ctx,
+		    enum efc_sm_event_e evt, void *arg)
+{
+	int rc = EFC_SCSI_CALL_COMPLETE;
+	int rc_2 = EFC_SCSI_CALL_COMPLETE;
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		static char const *labels[] = {"none", "initiator", "target",
+							"initiator+target"};
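+		/* labels[] is indexed by (targ << 1) | init:
+		 * bit 1 = target, bit 0 = initiator
+		 */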
+
+		efc_log_info(efc, "[%s] missing (%s)    WWPN %s WWNN %s\n",
+			     node->display_name,
+				labels[(node->targ << 1) | (node->init)],
+						node->wwpn, node->wwnn);
+
+		switch (efc_node_get_enable(node)) {
+		case EFC_NODE_ENABLE_T_TO_T:
+		case EFC_NODE_ENABLE_I_TO_T:
+		case EFC_NODE_ENABLE_IT_TO_T:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_I:
+		case EFC_NODE_ENABLE_I_TO_I:
+		case EFC_NODE_ENABLE_IT_TO_I:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_I_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+						  EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_IT_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			rc_2 = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		default:
+			rc = EFC_SCSI_CALL_COMPLETE;
+			break;
+		}
+
+		if (rc == EFC_SCSI_CALL_COMPLETE &&
+		    rc_2 == EFC_SCSI_CALL_COMPLETE)
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+
+		break;
+	}
+	case EFC_EVT_NODE_REFOUND:
+		/* two approaches, reauthenticate with PLOGI/PRLI, or ADISC */
+
+		/* reauthenticate with PLOGI/PRLI */
+		/* efc_node_transition(node, __efc_d_discovered, NULL); */
+
+		/* reauthenticate with ADISC */
+		/* sm: / send ADISC */
+		efc->tt.els_send(efc, node, ELS_ADISC,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_d_wait_adisc_rsp, NULL);
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters, and send
+		 * ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* most likely a stale frame (received prior to link down);
+		 * if we attempt to send a LOGO it will probably time out
+		 * and eat up 20s; thus, drop the FCP_CMND
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup device_sm
+ * @brief Device node state machine: Wait for the ADISC response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Waits for the ADISC response from the remote node.
+ *
+ * @param ctx Remote node state machine context.
+ * @param evt Event to process.
+ * @param arg Per event optional argument.
+ *
+ * @return Returns NULL.
+ */
+
+void *
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg)
+{
+	struct efc_node_cb_s *cbdata = arg;
+	struct efc_node_s *node = ctx->app;
+	struct efc_lport *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* received an LS_RJT, in this case, send shutdown
+		 * (explicit logo) event which will unregister the node,
+		 * and start over with PLOGI
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / post explicit logout */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				     NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* In this case, we have the equivalent of an LS_RJT for
+		 * the ADISC, so we need to abort the ADISC, and re-login
+		 * with PLOGI
+		 */
+		/* sm: / request abort, send LOGO acc */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
diff --git a/drivers/scsi/elx/libefc/efc_device.h b/drivers/scsi/elx/libefc/efc_device.h
new file mode 100644
index 000000000000..434308b87826
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Node state machine functions for remote device node sm
+ */
+
+#if !defined(__EFCT_DEVICE_H__)
+#define __EFCT_DEVICE_H__
+extern void
+efc_node_init_device(struct efc_node_s *node, bool send_plogi);
+extern void
+efc_process_prli_payload(struct efc_node_s *node,
+			 void *prli);
+extern void
+efc_d_send_prli_rsp(struct efc_node_s *node, uint16_t ox_id);
+extern void
+efc_send_ls_acc_after_attach(struct efc_node_s *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc_e ls);
+extern void *
+__efc_d_wait_loop(struct efc_sm_ctx_s *ctx,
+		  enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			    enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_init(struct efc_sm_ctx_s *ctx, enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx_s *ctx,
+				  enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_domain_attach(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_topology_notify(struct efc_sm_ctx_s *ctx,
+			     enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_node_attach(struct efc_sm_ctx_s *ctx,
+			 enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx_s *ctx,
+				 enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_initiate_shutdown(struct efc_sm_ctx_s *ctx,
+			  enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_port_logged_in(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx_s *ctx,
+			   enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_device_ready(struct efc_sm_ctx_s *ctx,
+		     enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_device_gone(struct efc_sm_ctx_s *ctx,
+		    enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx_s *ctx,
+		       enum efc_sm_event_e evt, void *arg);
+extern void *
+__efc_d_wait_logo_rsp(struct efc_sm_ctx_s *ctx,
+		      enum efc_sm_event_e evt, void *arg);
+
+#endif /* __EFCT_DEVICE_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 15/32] elx: efct: Data structures and defines for hw operations
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (13 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 14/32] elx: libefc: FC node ELS and state handling James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 16/32] elx: efct: Driver initialization routines James Smart
                   ` (17 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch starts the population of the efct target mode
driver.  The driver is contained in the drivers/scsi/elx/efct
subdirectory.

This patch creates the efct directory and starts population of
the driver by adding SLI-4 configuration parameters, data structures
for configuring SLI-4 queues, converting from OS to SLI-4 IO requests,
and handling async events.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.h | 1011 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 1011 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h

diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
new file mode 100644
index 000000000000..60e377b2e7e5
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -0,0 +1,1011 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef _EFCT_HW_H
+#define _EFCT_HW_H
+
+#include "../libefc_sli/sli4.h"
+#include "efct_utils.h"
+
+/*
+ * EFCT PCI IDs
+ */
+#define EFCT_VENDOR_ID			0x10df  /* Emulex */
+#define EFCT_DEVICE_ID_LPE31004		0xe307  /* LightPulse 16Gb x 4
+						 * FC (lancer-g6)
+						 */
+#define PCI_PRODUCT_EMULEX_LPE32002	0xe307  /* LightPulse 32Gb x 2
+						 * FC (lancer-g6)
+						 */
+#define EFCT_DEVICE_ID_G7		0xf407	/* LightPulse 32Gb x 4
+						 * FC (lancer-g7)
+						 */
+
+/*Define rq_threads seq cbuf size to 4K (equal to SLI RQ max length)*/
+#define EFCT_RQTHREADS_MAX_SEQ_CBUF     4096
+
+/*Default RQ entries len used by driver*/
+#define EFCT_HW_RQ_ENTRIES_MIN		512
+#define EFCT_HW_RQ_ENTRIES_DEF		1024
+#define EFCT_HW_RQ_ENTRIES_MAX		4096
+
+/*Defines the size of the RQ buffers used for each RQ*/
+#define EFCT_HW_RQ_SIZE_HDR             128
+#define EFCT_HW_RQ_SIZE_PAYLOAD         1024
+
+/*Define the maximum number of multi-receive queues*/
+#define EFCT_HW_MAX_MRQS		8
+
+/*
+ * Define how often the WQEC bit is set in a submitted
+ * WQE, causing a consumed/released completion to be posted.
+ */
+#define EFCT_HW_WQEC_SET_COUNT		32
+
+/*Send frame timeout in seconds*/
+#define EFCT_HW_SEND_FRAME_TIMEOUT	10
+
+/*
+ * FDT transfer hint value: reads greater than this value
+ * will be segmented to implement fairness. A value of zero disables
+ * the feature.
+ */
+#define EFCT_HW_FDT_XFER_HINT                   8192
+
+/*
+ * HW completion loop control parameters.
+ *
+ * The HW completion loop must terminate periodically
+ * to keep the OS happy.  The loop terminates when a predefined
+ * time has elapsed, but to keep the overhead of
+ * computing time down, the time is only checked after a
+ * number of loop iterations has completed.
+ *
+ * EFCT_HW_TIMECHECK_ITERATIONS	 number of loop iterations
+ * between time checks
+ *
+ */
+
+#define EFCT_HW_TIMECHECK_ITERATIONS	100
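+/*
+ * For illustration only (the names below are not part of this interface),
+ * a completion loop honoring these parameters might look roughly like:
+ *
+ *	n = 0;
+ *	while (work remains) {
+ *		process one completion entry;
+ *		if ((++n % EFCT_HW_TIMECHECK_ITERATIONS) == 0 &&
+ *		    time_after(jiffies, deadline))
+ *			break;
+ *	}
+ */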
+#define EFCT_HW_MAX_NUM_MQ 1
+#define EFCT_HW_MAX_NUM_RQ 32
+#define EFCT_HW_MAX_NUM_EQ 16
+#define EFCT_HW_MAX_NUM_WQ 32
+
+#define OCE_HW_MAX_NUM_MRQ_PAIRS 16
+
+#define EFCT_HW_MAX_WQ_CLASS	4
+#define EFCT_HW_MAX_WQ_CPU	128
+
+/*
+ * A CQ will be assigned to each WQ
+ * (CQ must have 2X entries of the WQ for abort
+ * processing), plus a separate one for each RQ PAIR and one for MQ
+ */
+#define EFCT_HW_MAX_NUM_CQ \
+	((EFCT_HW_MAX_NUM_WQ * 2) + 1 + (OCE_HW_MAX_NUM_MRQ_PAIRS * 2))
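+/* With the sizes above this evaluates to (32 * 2) + 1 + (16 * 2) = 97 CQs. */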
+
+#define EFCT_HW_Q_HASH_SIZE	128
+#define EFCT_HW_RQ_HEADER_SIZE	128
+#define EFCT_HW_RQ_HEADER_INDEX	0
+
+/**
+ * @brief Options for efct_hw_command().
+ */
+enum {
+	/**< command executes synchronously and busy-waits for completion */
+	EFCT_CMD_POLL,
+	/**< command executes asynchronously. Uses callback */
+	EFCT_CMD_NOWAIT,
+};
+
+enum efct_hw_rtn_e {
+	EFCT_HW_RTN_SUCCESS = 0,
+	EFCT_HW_RTN_SUCCESS_SYNC = 1,
+	EFCT_HW_RTN_ERROR = -1,
+	EFCT_HW_RTN_NO_RESOURCES = -2,
+	EFCT_HW_RTN_NO_MEMORY = -3,
+	EFCT_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFCT_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFCT_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFCT_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFCT_HW_RTN_IS_ERROR(e)	((e) < 0)
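+/*
+ * Typical (illustrative) usage of the return codes above:
+ *
+ *	enum efct_hw_rtn_e rc = <some efct_hw_* call>;
+ *
+ *	if (EFCT_HW_RTN_IS_ERROR(rc))
+ *		... handle the failure ...
+ *	else if (rc == EFCT_HW_RTN_SUCCESS_SYNC)
+ *		... the operation already completed synchronously ...
+ */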
+
+enum efct_hw_reset_e {
+	EFCT_HW_RESET_FUNCTION,
+	EFCT_HW_RESET_FIRMWARE,
+	EFCT_HW_RESET_MAX
+};
+
+enum efct_hw_property_e {
+	EFCT_HW_N_IO,
+	EFCT_HW_N_SGL,
+	EFCT_HW_MAX_IO,
+	EFCT_HW_MAX_SGE,
+	EFCT_HW_MAX_SGL,
+	EFCT_HW_MAX_NODES,
+	EFCT_HW_MAX_RQ_ENTRIES,
+	EFCT_HW_TOPOLOGY,	/**< auto, nport, loop */
+	EFCT_HW_WWN_NODE,
+	EFCT_HW_WWN_PORT,
+	EFCT_HW_FW_REV,
+	EFCT_HW_FW_REV2,
+	EFCT_HW_IPL,
+	EFCT_HW_VPD,
+	EFCT_HW_VPD_LEN,
+	EFCT_HW_MODE,		/**< initiator, target, both */
+	EFCT_HW_LINK_SPEED,
+	EFCT_HW_IF_TYPE,
+	EFCT_HW_SLI_REV,
+	EFCT_HW_SLI_FAMILY,
+	EFCT_HW_RQ_PROCESS_LIMIT,
+	EFCT_HW_RQ_DEFAULT_BUFFER_SIZE,
+	EFCT_HW_AUTO_XFER_RDY_CAPABLE,
+	EFCT_HW_AUTO_XFER_RDY_XRI_CNT,
+	EFCT_HW_AUTO_XFER_RDY_SIZE,
+	EFCT_HW_AUTO_XFER_RDY_BLK_SIZE,
+	EFCT_HW_AUTO_XFER_RDY_T10_ENABLE,
+	EFCT_HW_AUTO_XFER_RDY_P_TYPE,
+	EFCT_HW_AUTO_XFER_RDY_REF_TAG_IS_LBA,
+	EFCT_HW_AUTO_XFER_RDY_APP_TAG_VALID,
+	EFCT_HW_AUTO_XFER_RDY_APP_TAG_VALUE,
+	EFCT_HW_DIF_CAPABLE,
+	EFCT_HW_DIF_SEED,
+	EFCT_HW_DIF_MODE,
+	EFCT_HW_DIF_MULTI_SEPARATE,
+	EFCT_HW_DUMP_MAX_SIZE,
+	EFCT_HW_DUMP_READY,
+	EFCT_HW_DUMP_PRESENT,
+	EFCT_HW_RESET_REQUIRED,
+	EFCT_HW_FW_ERROR,
+	EFCT_HW_FW_READY,
+	EFCT_HW_HIGH_LOGIN_MODE,
+	EFCT_HW_PREREGISTER_SGL,
+	EFCT_HW_HW_REV1,
+	EFCT_HW_HW_REV2,
+	EFCT_HW_HW_REV3,
+	EFCT_HW_ETH_LICENSE,
+	EFCT_HW_LINK_MODULE_TYPE,
+	EFCT_HW_NUM_CHUTES,
+	EFCT_HW_WAR_VERSION,
+	/**< enable driver timeouts for target WQEs */
+	EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT,
+	EFCT_HW_LINK_CONFIG_SPEED,
+	EFCT_HW_CONFIG_TOPOLOGY,
+	EFCT_HW_BOUNCE,
+	EFCT_HW_PORTNUM,
+	EFCT_HW_BIOS_VERSION_STRING,
+	EFCT_HW_RQ_SELECT_POLICY,
+	EFCT_HW_SGL_CHAINING_CAPABLE,
+	EFCT_HW_SGL_CHAINING_ALLOWED,
+	EFCT_HW_SGL_CHAINING_HOST_ALLOCATED,
+	EFCT_HW_SEND_FRAME_CAPABLE,
+	EFCT_HW_RQ_SELECTION_POLICY,
+	EFCT_HW_RR_QUANTA,
+	EFCT_HW_FILTER_DEF,
+	EFCT_HW_MAX_VPORTS,
+	EFCT_ESOC,
+};
+
+enum {
+	EFCT_HW_TOPOLOGY_AUTO,
+	EFCT_HW_TOPOLOGY_NPORT,
+	EFCT_HW_TOPOLOGY_LOOP,
+	EFCT_HW_TOPOLOGY_NONE,
+	EFCT_HW_TOPOLOGY_MAX
+};
+
+enum {
+	EFCT_HW_MODE_INITIATOR,
+	EFCT_HW_MODE_TARGET,
+	EFCT_HW_MODE_BOTH,
+	EFCT_HW_MODE_MAX
+};
+
+/**
+ * @brief Port protocols
+ */
+
+enum efct_hw_port_protocol_e {
+	EFCT_HW_PORT_PROTOCOL_FCOE,
+	EFCT_HW_PORT_PROTOCOL_FC,
+	EFCT_HW_PORT_PROTOCOL_OTHER,
+};
+
+/**
+ * @brief pack fw revision values into a single uint64_t
+ */
+
+/* Two levels of macro needed due to expansion */
+#define HW_FWREV(a, b, c, d) (((uint64_t)(a) << 48) | ((uint64_t)(b) << 32)\
+			| ((uint64_t)(c) << 16) | ((uint64_t)(d)))
+
+#define EFCT_FW_VER_STR(a, b, c, d) (#a "." #b "." #c "." #d)
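+/*
+ * For example, HW_FWREV(11, 4, 142, 23) packs to 0x000b0004008e0017, and
+ * EFCT_FW_VER_STR(11, 4, 142, 23) expands to the string "11.4.142.23".
+ */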
+
+/**
+ * @brief Defines DIF operation modes
+ */
+enum {
+	EFCT_HW_DIF_MODE_INLINE,
+	EFCT_HW_DIF_MODE_SEPARATE,
+};
+
+/**
+ * @brief T10 DIF operations.
+ */
+enum efct_hw_dif_oper_e {
+	EFCT_HW_DIF_OPER_DISABLED,
+	EFCT_HW_SGE_DIFOP_INNODIFOUTCRC,
+	EFCT_HW_SGE_DIFOP_INCRCOUTNODIF,
+	EFCT_HW_SGE_DIFOP_INNODIFOUTCHKSUM,
+	EFCT_HW_SGE_DIFOP_INCHKSUMOUTNODIF,
+	EFCT_HW_SGE_DIFOP_INCRCOUTCRC,
+	EFCT_HW_SGE_DIFOP_INCHKSUMOUTCHKSUM,
+	EFCT_HW_SGE_DIFOP_INCRCOUTCHKSUM,
+	EFCT_HW_SGE_DIFOP_INCHKSUMOUTCRC,
+	EFCT_HW_SGE_DIFOP_INRAWOUTRAW,
+};
+
+#define EFCT_HW_DIF_OPER_PASS_THRU	EFCT_HW_SGE_DIFOP_INCRCOUTCRC
+#define EFCT_HW_DIF_OPER_STRIP		EFCT_HW_SGE_DIFOP_INCRCOUTNODIF
+#define EFCT_HW_DIF_OPER_INSERT		EFCT_HW_SGE_DIFOP_INNODIFOUTCRC
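+/* i.e. CRC in/CRC out (pass through), CRC in/no DIF out (strip),
+ * and no DIF in/CRC out (insert), respectively.
+ */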
+
+/**
+ * @brief T10 DIF block sizes.
+ */
+enum efct_hw_dif_blk_size_e {
+	EFCT_HW_DIF_BK_SIZE_512,
+	EFCT_HW_DIF_BK_SIZE_1024,
+	EFCT_HW_DIF_BK_SIZE_2048,
+	EFCT_HW_DIF_BK_SIZE_4096,
+	EFCT_HW_DIF_BK_SIZE_520,
+	EFCT_HW_DIF_BK_SIZE_4104,
+	EFCT_HW_DIF_BK_SIZE_NA = 0
+};
+
+/**
+ * @brief link module types
+ *
+ * (note: these just happen to match SLI4 values)
+ */
+
+enum {
+	EFCT_HW_LINK_MODULE_TYPE_1GB = 0x0004,
+	EFCT_HW_LINK_MODULE_TYPE_2GB = 0x0008,
+	EFCT_HW_LINK_MODULE_TYPE_4GB = 0x0040,
+	EFCT_HW_LINK_MODULE_TYPE_8GB = 0x0080,
+	EFCT_HW_LINK_MODULE_TYPE_10GB = 0x0100,
+	EFCT_HW_LINK_MODULE_TYPE_16GB = 0x0200,
+	EFCT_HW_LINK_MODULE_TYPE_32GB = 0x0400,
+};
+
+/**
+ * @brief T10 DIF information passed to the transport.
+ */
+
+struct efct_hw_dif_info_s {
+	enum efct_hw_dif_oper_e dif_oper;
+	enum efct_hw_dif_blk_size_e blk_size;
+	u32 ref_tag_cmp;
+	u32 ref_tag_repl;
+	u16 app_tag_cmp;
+	u16 app_tag_repl;
+	bool check_ref_tag;
+	bool check_app_tag;
+	bool check_guard;
+	bool auto_incr_ref_tag;
+	bool repl_app_tag;
+	bool repl_ref_tag;
+	bool dif_separate;
+
+	/* If the APP TAG is 0xFFFF, disable REF TAG and CRC field chk */
+	bool disable_app_ffff;
+
+	/* if the APP TAG is 0xFFFF and REF TAG is 0xFFFF_FFFF,
+	 * disable checking the received CRC field.
+	 */
+	bool disable_app_ref_ffff;
+	u16 dif_seed;
+	u8 dif;
+};
+
+enum efct_hw_io_type_e {
+	EFCT_HW_ELS_REQ,	/**< ELS request */
+	EFCT_HW_ELS_RSP,	/**< ELS response */
+	EFCT_HW_ELS_RSP_SID,	/**< ELS response, override the S_ID */
+	EFCT_HW_FC_CT,		/**< FC Common Transport */
+	EFCT_HW_FC_CT_RSP,	/**< FC Common Transport Response */
+	EFCT_HW_BLS_ACC,	/**< BLS accept (BA_ACC) */
+	EFCT_HW_BLS_ACC_SID,	/**< BLS accept (BA_ACC), override the S_ID */
+	EFCT_HW_BLS_RJT,	/**< BLS reject (BA_RJT) */
+	EFCT_HW_IO_TARGET_READ,
+	EFCT_HW_IO_TARGET_WRITE,
+	EFCT_HW_IO_TARGET_RSP,
+	EFCT_HW_IO_DNRX_REQUEUE,
+	EFCT_HW_IO_MAX,
+};
+
+enum efct_hw_io_state_e {
+	EFCT_HW_IO_STATE_FREE,
+	EFCT_HW_IO_STATE_INUSE,
+	EFCT_HW_IO_STATE_WAIT_FREE,
+	EFCT_HW_IO_STATE_WAIT_SEC_HIO,
+};
+
+/* Descriptive strings for the HW IO request types (note: these must always
+ * match up with the enum efct_hw_io_type_e declaration)
+ */
+#define EFCT_HW_IO_TYPE_STRINGS \
+	"ELS request", \
+	"ELS response", \
+	"ELS response(set SID)", \
+	"FC CT request", \
+	"BLS accept", \
+	"BLS accept(set SID)", \
+	"BLS reject", \
+	"target read", \
+	"target write", \
+	"target response",
+
+struct efct_hw_s;
+/**
+ * @brief HW command context.
+ *
+ * Stores the state for the asynchronous commands sent to the hardware.
+ */
+struct efct_command_ctx_s {
+	struct list_head	list_entry;
+	/**< Callback function */
+	int	(*cb)(struct efct_hw_s *hw, int status, u8 *mqe, void *arg);
+	void	*arg;	/**< Argument for callback */
+	u8	*buf;	/**< buffer holding command / results */
+	void	*ctx;	/**< upper layer context */
+};
+
+struct efct_hw_sgl_s {
+	uintptr_t	addr;
+	size_t		len;
+};
+
+union efct_hw_io_param_u {
+	struct {
+		__be16	 ox_id;
+		__be16	 rx_id;
+		u8  payload[12];	/**< big enough for ABTS BA_ACC */
+	} bls;
+	struct {
+		u32 s_id;
+		u16 ox_id;
+		u16 rx_id;
+		u8  payload[12];	/**< big enough for ABTS BA_ACC */
+	} bls_sid;
+	struct {
+		u8	r_ctl;
+		u8	type;
+		u8	df_ctl;
+		u8 timeout;
+	} bcast;
+	struct {
+		u16 ox_id;
+		u8 timeout;
+	} els;
+	struct {
+		u32 s_id;
+		u16 ox_id;
+		u8 timeout;
+	} els_sid;
+	struct {
+		u8	r_ctl;
+		u8	type;
+		u8	df_ctl;
+		u8 timeout;
+	} fc_ct;
+	struct {
+		u8	r_ctl;
+		u8	type;
+		u8	df_ctl;
+		u8 timeout;
+		u16 ox_id;
+	} fc_ct_rsp;
+	struct {
+		u32 offset;
+		u16 ox_id;
+		u16 flags;
+		u8	cs_ctl;
+		enum efct_hw_dif_oper_e dif_oper;
+		enum efct_hw_dif_blk_size_e blk_size;
+		u8	timeout;
+		u32 app_id;
+	} fcp_tgt;
+	struct {
+		struct efc_dma_s	*cmnd;
+		struct efc_dma_s	*rsp;
+		enum efct_hw_dif_oper_e dif_oper;
+		enum efct_hw_dif_blk_size_e blk_size;
+		u32	cmnd_size;
+		u16	flags;
+		u8		timeout;
+		u32	first_burst;
+	} fcp_ini;
+};
+
+/**
+ * @brief WQ steering mode
+ */
+enum efct_hw_wq_steering_e {
+	EFCT_HW_WQ_STEERING_CLASS,
+	EFCT_HW_WQ_STEERING_REQUEST,
+	EFCT_HW_WQ_STEERING_CPU,
+};
+
+/**
+ * @brief HW wqe object
+ */
+struct efct_hw_wqe_s {
+	struct list_head	list_entry;
+	/**< set if abort wqe needs to be submitted */
+	bool		abort_wqe_submit_needed;
+	/**< set to 1 to have hardware to automatically send ABTS */
+	bool		send_abts;
+	u32	id;
+	u32	abort_reqtag;
+	/**< work queue entry buffer */
+	u8		*wqebuf;
+};
+
+/**
+ * @brief HW IO object.
+ *
+ * Stores the per-IO information necessary
+ * for both the lower (SLI) and upper
+ * layers (efct).
+ */
+struct efct_hw_io_s {
+	/* Owned by HW */
+
+	/* reference counter and callback function */
+	struct kref ref;
+	void (*release)(struct kref *arg);
+	/**< used for busy, wait_free, free lists */
+	struct list_head	list_entry;
+	/**< used for timed_wqe list */
+	struct list_head	wqe_link;
+	/**< used for io posted dnrx list */
+	struct list_head	dnrx_link;
+	/**< state of IO: free, busy, wait_free */
+	enum efct_hw_io_state_e state;
+	/**< Work queue object, with link for pending */
+	struct efct_hw_wqe_s	wqe;
+	/**< pointer back to hardware context */
+	struct efct_hw_s	*hw;
+	struct efc_remote_node_s	*rnode;
+	struct efc_dma_s	xfer_rdy;
+	u16	type;
+	/**< WQ assigned to the exchange */
+	struct hw_wq_s	*wq;
+	 /**< Exchange is active in FW */
+	bool		xbusy;
+	/**< Function called on IO completion */
+	int
+	(*done)(struct efct_hw_io_s *hio,
+		struct efc_remote_node_s *rnode,
+			u32 len, int status,
+			u32 ext, void *ul_arg);
+	/**< argument passed to "IO done" callback */
+	void		*arg;
+	/**< Function called on abort completion */
+	int
+	(*abort_done)(struct efct_hw_io_s *hio,
+		      struct efc_remote_node_s *rnode,
+			u32 len, int status,
+			u32 ext, void *ul_arg);
+	/**< argument passed to "abort done" callback */
+	void		*abort_arg;
+	/**< needed for bug O127585: length of IO */
+	size_t		length;
+	/**< timeout value for target WQEs */
+	u8		tgt_wqe_timeout;
+	/**< timestamp when current WQE was submitted */
+	u64	submit_ticks;
+
+	/**< if TRUE, latched status should be returned */
+	bool		status_saved;
+	/**< if TRUE, abort is in progress */
+	bool		abort_in_progress;
+	u32	saved_status;	/**< latched status */
+	u32	saved_len;	/**< latched length */
+	u32	saved_ext;	/**< latched extended status */
+
+	/**< EQ that this HIO came up on */
+	struct hw_eq_s	*eq;
+	/**< WQ steering mode request */
+	enum efct_hw_wq_steering_e	wq_steering;
+	/**< WQ class if steering mode is Class */
+	u8		wq_class;
+
+	/*  Owned by SLI layer */
+	u16	reqtag;		/**< request tag for this HW IO */
+	/**< request tag for an abort of this HW IO
+	 * (note: this is a 32 bit value
+	 * to allow us to use UINT32_MAX as an uninitialized value)
+	 **/
+	u32	abort_reqtag;
+	u32	indicator;	/**< XRI */
+	struct efc_dma_s	def_sgl;/**< default scatter gather list */
+	u32	def_sgl_count;	/**< count of SGEs in default SGL */
+	struct efc_dma_s	*sgl;	/**< pointer to current active SGL */
+	u32	sgl_count;	/**< count of SGEs in io->sgl */
+	u32	first_data_sge;	/**< index of first data SGE */
+	struct efc_dma_s	*ovfl_sgl;	/**< overflow SGL */
+	u32	ovfl_sgl_count;	/**< count of SGEs in default SGL */
+	 /**< pointer to overflow segment len */
+	struct sli4_lsp_sge_s	*ovfl_lsp;
+	u32	n_sge;		/**< number of active SGEs */
+	u32	sge_offset;
+
+	/* Owned by upper layer */
+	/**< where upper layer can store ref to its IO */
+	void		*ul_io;
+};
+
+/**
+ * @brief HW callback type
+ *
+ * Typedef for HW "done" callback.
+ */
+typedef int (*efct_hw_done_t)(struct efct_hw_io_s *, struct efc_remote_node_s *,
+			      u32 len, int status, u32 ext, void *ul_arg);
+
+enum efct_hw_port_e {
+	EFCT_HW_PORT_INIT,
+	EFCT_HW_PORT_SHUTDOWN,
+};
+
+/**
+ * @brief Node group rpi reference
+ */
+struct efct_hw_rpi_ref_s {
+	atomic_t rpi_count;
+	atomic_t rpi_attached;
+};
+
+/**
+ * @brief HW link stat types
+ */
+enum efct_hw_link_stat_e {
+	EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT,
+	EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT,
+	EFCT_HW_LINK_STAT_CRC_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT,
+	EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_RCV_EOFA_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_SOFF_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT,
+	EFCT_HW_LINK_STAT_MAX,		/**< must be last */
+};
+
+enum efct_hw_host_stat_e {
+	EFCT_HW_HOST_STAT_TX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_RX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_TX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_RX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_TX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_RX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP,
+	EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT,
+	EFCT_HW_HOST_STAT_RX_F_BSY_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT,
+	EFCT_HW_HOST_STAT_MAX /* MUST BE LAST */
+};
+
+enum efct_hw_state_e {
+	EFCT_HW_STATE_UNINITIALIZED,	/* power-on, no alloc, no init */
+	EFCT_HW_STATE_QUEUES_ALLOCATED,	/* chip is reset, allocs are complete
+					 * (queues not registered)
+					 */
+	EFCT_HW_STATE_ACTIVE,		/* chip is up and running */
+	EFCT_HW_STATE_RESET_IN_PROGRESS,/* chip is being reset */
+	EFCT_HW_STATE_TEARDOWN_IN_PROGRESS,/* teardown has been started */
+};
+
+/**
+ * @brief Structure to track optimized write buffers posted
+ * to chip owned XRIs.
+ *
+ * Note:
+ *	The rqindex will be set to the following "fake" indexes.
+ *	This will be used when the buffer is returned via
+ *	efct_seq_free() to make the buffer available
+ *	for re-use on another XRI.
+ *
+ *	The dma->alloc pointer on the dummy header will be used to
+ *	get back to this structure when the buffer is freed.
+ *
+ *	More of these objects may be allocated on the fly if more XRIs
+ *	are pushed to the chip.
+ */
+#define EFCT_HW_RQ_INDEX_DUMMY_HDR	0xFF00
+#define EFCT_HW_RQ_INDEX_DUMMY_DATA	0xFF01
+
+/**
+ * @brief Node group rpi reference
+ */
+struct efct_hw_link_stat_counts_s {
+	u8 overflow;
+	u32 counter;
+};
+
+/**
+ * @brief HW object describing fc host stats
+ */
+struct efct_hw_host_stat_counts_s {
+	u32 counter;
+};
+
+#define TID_HASH_BITS	8
+#define TID_HASH_LEN	BIT(TID_HASH_BITS)
+
+enum hw_cq_handler_e {
+	HW_CQ_HANDLER_LOCAL,
+	HW_CQ_HANDLER_THREAD,
+};
+
+#include "efct_hw_queues.h"
+
+/**
+ * @brief Structure used for the hash lookup of queue IDs
+ */
+struct efct_queue_hash_s {
+	bool in_use;
+	u16 id;
+	u16 index;
+};
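+/*
+ * The cq_hash/rq_hash/wq_hash tables in struct efct_hw_s below are arrays of
+ * this structure, mapping a hardware queue id to the driver's index for that
+ * queue.  The actual lookup code is not in this header; for illustration, a
+ * power-of-two table of this shape would typically be probed linearly:
+ *
+ *	i = id & (EFCT_HW_Q_HASH_SIZE - 1);
+ *	while (hash[i].in_use && hash[i].id != id)
+ *		i = (i + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+ */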
+
+/**
+ * @brief Define the WQ callback object
+ */
+struct hw_wq_callback_s {
+	u16 instance_index;	/**< use for request tag */
+	void (*callback)(void *arg, u8 *cqe, int status);
+	void *arg;
+};
+
+struct efct_hw_config {
+	u32	n_eq; /**< number of event queues */
+	u32	n_cq; /**< number of completion queues */
+	u32	n_mq; /**< number of mailbox queues */
+	u32	n_rq; /**< number of receive queues */
+	u32	n_wq; /**< number of work queues */
+	u32	n_io; /**< total number of IO objects */
+	u32	n_sgl;/**< length of SGL */
+	u32	speed;	/** requested link speed in Mbps */
+	u32	topology;  /** requested link topology */
+	/** size of the buffers for first burst */
+	u32	rq_default_buffer_size;
+	u8         esoc;
+	/** The seed for the DIF CRC calculation */
+	u16	dif_seed;
+	u8		dif_mode; /**< DIF mode to use */
+	/** Enable driver target wqe timeouts */
+	u8		emulate_tgt_wqe_timeout;
+	bool		bounce;
+	/**< Queue topology string */
+	const char	*queue_topology;
+	/** MRQ RQ selection policy */
+	u8		rq_selection_policy;
+	 /** RQ quanta if rq_selection_policy == 2 */
+	u8		rr_quanta;
+	u32	filter_def[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+};
+
+/**
+ * @brief HW object
+ */
+struct efct_hw_s {
+	struct efct_s		*os;
+	struct sli4_s		sli;
+	u16	ulp_start;
+	u16	ulp_max;
+	u32	dump_size;
+	enum efct_hw_state_e state;
+	bool		hw_setup_called;
+	u8		sliport_healthcheck;
+	u16        watchdog_timeout;
+
+	/** HW configuration, subject to efct_hw_set()  */
+	struct efct_hw_config config;
+
+	/* calculated queue sizes for each type */
+	u32	num_qentries[SLI_QTYPE_MAX];
+
+	/* Storage for SLI queue objects */
+	struct sli4_queue_s	wq[EFCT_HW_MAX_NUM_WQ];
+	struct sli4_queue_s	rq[EFCT_HW_MAX_NUM_RQ];
+	u16	hw_rq_lookup[EFCT_HW_MAX_NUM_RQ];
+	struct sli4_queue_s	mq[EFCT_HW_MAX_NUM_MQ];
+	struct sli4_queue_s	cq[EFCT_HW_MAX_NUM_CQ];
+	struct sli4_queue_s	eq[EFCT_HW_MAX_NUM_EQ];
+
+	/* HW queue */
+	u32	eq_count;
+	u32	cq_count;
+	u32	mq_count;
+	u32	wq_count;
+	u32	rq_count;			/**< count of SLI RQs */
+	struct list_head	eq_list;
+
+	struct efct_queue_hash_s cq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash_s rq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash_s wq_hash[EFCT_HW_Q_HASH_SIZE];
+
+	/* Storage for HW queue objects */
+	struct hw_wq_s	*hw_wq[EFCT_HW_MAX_NUM_WQ];
+	struct hw_rq_s	*hw_rq[EFCT_HW_MAX_NUM_RQ];
+	struct hw_mq_s	*hw_mq[EFCT_HW_MAX_NUM_MQ];
+	struct hw_cq_s	*hw_cq[EFCT_HW_MAX_NUM_CQ];
+	struct hw_eq_s	*hw_eq[EFCT_HW_MAX_NUM_EQ];
+	/**< count of hw_rq[] entries */
+	u32	hw_rq_count;
+	/**< count of multirq RQs */
+	u32	hw_mrq_count;
+
+	 /**< pool per class WQs */
+	struct efct_varray_s	*wq_class_array[EFCT_HW_MAX_WQ_CLASS];
+	/**< pool per CPU WQs */
+	struct efct_varray_s	*wq_cpu_array[EFCT_HW_MAX_WQ_CPU];
+
+	/* Sequence objects used in incoming frame processing */
+	struct efct_array_s	*seq_pool;
+
+	/** Maintain an ordered, linked list of outstanding HW commands. */
+	spinlock_t	cmd_lock;
+	struct list_head	cmd_head;
+	struct list_head	cmd_pending;
+	u32	cmd_head_count;
+
+	struct sli4_link_event_s link;
+	struct efc_domain_s *domain;
+
+	u16	fcf_indicator;
+
+	/**< pointer array of IO objects */
+	struct efct_hw_io_s	**io;
+	/**< array of WQE buffs mapped to IO objects */
+	u8		*wqe_buffs;
+
+	/**< IO lock to synchronize list access */
+	spinlock_t	io_lock;
+	/**< IO lock to synchronize IO aborting */
+	spinlock_t	io_abort_lock;
+	/**< List of IO objects in use */
+	struct list_head	io_inuse;
+	/**< List of IO objects with a timed target WQE */
+	struct list_head	io_timed_wqe;
+	/**< List of IO objects waiting to be freed */
+	struct list_head	io_wait_free;
+	/**< List of IO objects available for allocation */
+	struct list_head	io_free;
+
+	struct efc_dma_s	loop_map;
+
+	struct efc_dma_s	xfer_rdy;
+
+	struct efc_dma_s	dump_sges;
+
+	struct efc_dma_s	rnode_mem;
+
+	struct efct_hw_rpi_ref_s *rpi_ref;
+
+	char		*hw_war_version;
+
+	atomic_t io_alloc_failed_count;
+
+#define HW_MAX_TCMD_THREADS		16
+	struct efct_hw_qtop_s	*qtop;		/* pointer to queue topology */
+
+	 /**< stat: wq submit count */
+	u32	tcmd_wq_submit[EFCT_HW_MAX_NUM_WQ];
+	/**< stat: wq complete count */
+	u32	tcmd_wq_complete[EFCT_HW_MAX_NUM_WQ];
+
+	struct timer_list	wqe_timer;	/**< Timer to periodically
+						  *check for WQE timeouts
+						  **/
+	struct timer_list	watchdog_timer;	/**< Timer for heartbeat */
+	bool	in_active_wqe_timer;		/* < TRUE if currently in
+						 * active wqe timer handler
+						 */
+	bool	active_wqe_timer_shutdown;	/* TRUE if wqe
+						 * timer is to be shutdown
+						 */
+
+	struct list_head	iopc_list;	/*< list of IO
+						 *processing contexts
+						 */
+	spinlock_t	iopc_list_lock;		/**< lock for iopc_list */
+
+	struct efct_pool_s	*wq_reqtag_pool; /* < pool of
+						  * struct hw_wq_callback_s obj
+						  */
+
+	atomic_t	send_frame_seq_id;	/* < send frame
+						 * sequence ID
+						 */
+};
+
+enum efct_hw_io_count_type_e {
+	EFCT_HW_IO_INUSE_COUNT,
+	EFCT_HW_IO_FREE_COUNT,
+	EFCT_HW_IO_WAIT_FREE_COUNT,
+	EFCT_HW_IO_N_TOTAL_IO_COUNT,
+};
+
+/*
+ * HW queue data structures
+ */
+
+struct hw_eq_s {
+	struct list_head	list_entry;
+	enum sli4_qtype_e type;		/**< must be second */
+	u32 instance;
+	u32 entry_count;
+	u32 entry_size;
+	struct efct_hw_s *hw;
+	struct sli4_queue_s *queue;
+	struct list_head cq_list;
+	u32 use_count;
+	struct efct_varray_s *wq_array;		/*<< array of WQs */
+};
+
+struct hw_cq_s {
+	struct list_head list_entry;
+	enum sli4_qtype_e type;		/**< must be second */
+	u32 instance;		/*<< CQ instance (cq_idx) */
+	u32 entry_count;		/*<< Number of entries */
+	u32 entry_size;		/*<< entry size */
+	struct hw_eq_s *eq;			/*<< parent EQ */
+	struct sli4_queue_s *queue;		/**< pointer to SLI4 queue */
+	struct list_head q_list;	/**< list of children queues */
+	u32 use_count;
+};
+
+void hw_thread_cq_handler(struct efct_hw_s *hw, struct hw_cq_s *cq);
+
+struct hw_q_s {
+	struct list_head list_entry;
+	enum sli4_qtype_e type;		/*<< must be second */
+};
+
+struct hw_mq_s {
+	struct list_head list_entry;
+	enum sli4_qtype_e type;		/*<< must be second */
+	u32 instance;
+
+	u32 entry_count;
+	u32 entry_size;
+	struct hw_cq_s *cq;
+	struct sli4_queue_s *queue;
+
+	u32 use_count;
+};
+
+struct hw_wq_s {
+	struct list_head list_entry;
+	enum sli4_qtype_e type;		/*<< must be second */
+	u32 instance;
+	struct efct_hw_s *hw;
+
+	u32 entry_count;
+	u32 entry_size;
+	struct hw_cq_s *cq;
+	struct sli4_queue_s *queue;
+	u32 class;
+	u8 ulp;
+
+	/* WQ consumed */
+	u32 wqec_set_count;	/* how often IOs are
+				 * submitted with wqce set
+				 */
+	u32 wqec_count;		/* current wqce counter */
+	u32 free_count;		/* free count */
+	u32 total_submit_count;	/* total submit count */
+	struct list_head pending_list;	/* list of IOs pending for this WQ */
+
+	/*
+	 * HW IO allocated for use with Send Frame
+	 */
+	struct efct_hw_io_s *send_frame_io;
+
+	/* Stats */
+	u32 use_count;		/*<< use count */
+	u32 wq_pending_count;	/* count of HW IOs that
+				 * were queued on the WQ pending list
+				 */
+};
+
+struct hw_rq_s {
+	struct list_head list_entry;
+	enum sli4_qtype_e type;			/*<< must be second */
+	u32 instance;
+
+	u32 entry_count;
+	u32 use_count;
+	u32 hdr_entry_size;
+	u32 first_burst_entry_size;
+	u32 data_entry_size;
+	u8 ulp;
+	bool is_mrq;
+	u32 base_mrq_id;
+
+	struct hw_cq_s *cq;
+
+	u8 filter_mask;		/* Filter mask value */
+	struct sli4_queue_s *hdr;
+	struct sli4_queue_s *first_burst;
+	struct sli4_queue_s *data;
+
+	struct efc_hw_rq_buffer_s *hdr_buf;
+	struct efc_hw_rq_buffer_s *fb_buf;
+	struct efc_hw_rq_buffer_s *payload_buf;
+
+	struct efc_hw_sequence_s **rq_tracker; /* RQ tracker for this RQ */
+};
+
+struct efct_hw_global_s {
+	const char	*queue_topology_string;	/**< queue topo str */
+};
+
+extern struct efct_hw_global_s hw_global;
+
+struct efct_hw_send_frame_context_s {
+	/* structure elements used by HW */
+	struct efct_hw_s *hw;			/**> pointer to HW */
+	struct hw_wq_callback_s *wqcb;	/**> WQ callback object, request tag */
+	struct efct_hw_wqe_s wqe;	/* > WQE buf obj(may be queued
+					 * on WQ pending list)
+					 */
+	void (*callback)(int status, void *arg);	/* > final
+							 * callback function
+							 */
+	void *arg;			/**> final callback argument */
+
+	/* General purpose elements */
+	struct efc_hw_sequence_s *seq;
+	struct efc_dma_s payload;	/**> a payload DMA buffer */
+};
+
+#define EFCT_HW_OBJECT_G5              0xfeaa0001
+#define EFCT_HW_OBJECT_G6              0xfeaa0003
+#define EFCT_FILE_TYPE_GROUP            0xf7
+#define EFCT_FILE_ID_GROUP              0xa2
+struct efct_hw_grp_hdr {
+	u32 size;
+	__be32	magic_number;
+	u32 word2;
+	u8 rev_name[128];
+	u8 date[12];
+	u8 revision[32];
+};
+
+#endif /* _EFCT_HW_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 16/32] elx: efct: Driver initialization routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (14 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 15/32] elx: efct: Data structures and defines for hw operations James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 17/32] elx: efct: Hardware queues creation and deletion James Smart
                   ` (16 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Emulex FC Target driver init, attach and hardware setup routines.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_driver.c | 1243 +++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_driver.h |  154 +++++
 drivers/scsi/elx/efct/efct_hw.c     | 1298 +++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h     |   15 +
 drivers/scsi/elx/efct/efct_xport.c  |  665 ++++++++++++++++++
 drivers/scsi/elx/efct/efct_xport.h  |  216 ++++++
 6 files changed, 3591 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h

diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
new file mode 100644
index 000000000000..4928e5753d88
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.c
@@ -0,0 +1,1243 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_utils.h"
+
+#include "efct_els.h"
+#include "efct_hw.h"
+#include "efct_unsol.h"
+#include "efct_scsi.h"
+
+static int efct_proc_open(struct inode *inode, struct file *file);
+static int efct_proc_get(struct seq_file *m, void *v);
+
+static void efct_device_interrupt_handler(struct efct_s *efct, u32 vector);
+static void efct_teardown_msix(struct efct_s *efct);
+static int efct_fw_reset(struct efct_s *efct);
+static int
+efct_firmware_write(struct efct_s *efct, const u8 *buf, size_t buf_len,
+		    u8 *change_status);
+static int
+efct_efclib_config(struct efct_s *efct, struct libefc_function_template *tt);
+
+struct efct_s *efct_devices[MAX_EFCT_DEVICES];
+
+static const struct file_operations efct_proc_fops = {
+	.owner = THIS_MODULE,
+	.open = efct_proc_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+static int logmask;
+module_param(logmask, int, 0444);
+MODULE_PARM_DESC(logmask, "logging bitmask (default 0)");
+
+#define FW_WRITE_BUFSIZE (64 * 1024)
+struct efct_fw_write_result {
+	struct completion done;
+	int status;
+	u32 actual_xfer;
+	u32 change_status;
+};
+
+struct libefc_function_template efct_libefc_templ = {
+	.hw_domain_alloc = efct_hw_domain_alloc,
+	.hw_domain_attach = efct_hw_domain_attach,
+	.hw_domain_free = efct_hw_domain_free,
+	.hw_domain_force_free = efct_hw_domain_force_free,
+	.domain_hold_frames = efct_domain_hold_frames,
+	.domain_accept_frames = efct_domain_accept_frames,
+
+	.hw_port_alloc = efct_hw_port_alloc,
+	.hw_port_attach = efct_hw_port_attach,
+	.hw_port_free = efct_hw_port_free,
+
+	.hw_node_alloc = efct_hw_node_alloc,
+	.hw_node_attach = efct_hw_node_attach,
+	.hw_node_detach = efct_hw_node_detach,
+	.hw_node_free_resources = efct_hw_node_free_resources,
+	.node_purge_pending = efct_node_purge_pending,
+
+	.scsi_io_alloc_disable = efct_scsi_io_alloc_disable,
+	.scsi_io_alloc_enable = efct_scsi_io_alloc_enable,
+	.scsi_validate_node = efct_scsi_validate_initiator,
+	.new_domain = efct_scsi_tgt_new_domain,
+	.del_domain = efct_scsi_tgt_del_domain,
+	.new_sport = efct_scsi_tgt_new_sport,
+	.del_sport = efct_scsi_tgt_del_sport,
+	.scsi_new_node = efct_scsi_new_initiator,
+	.scsi_del_node = efct_scsi_del_initiator,
+
+	.els_send = efct_els_req_send,
+	.els_send_ct = efct_els_send_ct,
+	.els_send_resp = efct_els_resp_send,
+	.bls_send_acc_hdr = efct_bls_send_acc_hdr,
+	.send_flogi_p2p_acc = efct_send_flogi_p2p_acc,
+	.send_ct_rsp = efct_send_ct_rsp,
+	.send_ls_rjt = efct_send_ls_rjt,
+
+	.node_io_cleanup = efct_node_io_cleanup,
+	.node_els_cleanup = efct_node_els_cleanup,
+	.node_abort_all_els = efct_node_abort_all_els,
+
+	.dispatch_fcp_cmd = efct_dispatch_fcp_cmd,
+	.recv_abts_frame = efct_node_recv_abts_frame,
+};
+
+static char *queue_topology =
+	"eq cq rq cq mq $nulp($nwq(cq wq:ulp=$rpt1)) cq wq:len=256:class=1";
+/**
+ * @brief Perform driver wide initialization
+ *
+ * This function is called prior to enumerating PCI devices, with subsequent
+ * calls to efct_device_attach. For EFCT, this function invokes the back-end
+ * functions efct_scsi_tgt_driver_init() and efct_scsi_reg_fc_transport().
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+int
+efct_device_init(void)
+{
+	int rc;
+
+	hw_global.queue_topology_string = queue_topology;
+
+	/* driver-wide init for target-server */
+	rc = efct_scsi_tgt_driver_init();
+	if (rc) {
+		pr_err("efct_scsi_tgt_driver_init failed rc=%d\n",
+		       rc);
+		return -1;
+	}
+
+	rc = efct_scsi_reg_fc_transport();
+	if (rc) {
+		pr_err("failed to register to FC host\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/**
+ * @brief Perform driver wide shutdown complete actions
+ *
+ * This function is called shutdown for all devices has completed
+ *
+ * @return none
+ */
+void
+efct_device_shutdown(void)
+{
+	efct_scsi_release_fc_transport();
+
+	efct_scsi_tgt_driver_exit();
+}
+
+/*
+ * @brief allocate efct device
+ *
+ * @param nid Numa node ID
+ *
+ * @return pointer to EFCT structure
+ */
+
+void *efct_device_alloc(u32 nid)
+{
+	struct efct_s *efct = NULL;
+	u32 i;
+
+	efct = kmalloc_node(sizeof(*efct), GFP_ATOMIC, nid);
+
+	if (efct) {
+		memset(efct, 0, sizeof(*efct));
+		for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
+			if (!efct_devices[i]) {
+				efct->instance_index = i;
+				efct_devices[i] = efct;
+				break;
+			}
+		}
+
+		if (i == ARRAY_SIZE(efct_devices)) {
+			pr_err("Exceeded max supported devices.\n");
+			kfree(efct);
+			efct = NULL;
+		} else {
+			efct->attached = false;
+		}
+	}
+	return efct;
+}
+
+static int
+efct_fw_reset(struct efct_s *efct)
+{
+	int rc = 0;
+	int index = 0;
+	u8 bus, dev;
+	struct efct_s *other_efct;
+
+	bus = efct->pcidev->bus->number;
+	dev = PCI_SLOT(efct->pcidev->devfn);
+
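+	/*
+	 * Walk every efct instance on the same PCI bus/device (all functions
+	 * of this adapter) and stop any pending link-stats timer before the
+	 * shared firmware is reset.
+	 */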
+	while ((other_efct = efct_get_instance(index++)) != NULL) {
+		u8 other_bus, other_dev;
+
+		other_bus = other_efct->pcidev->bus->number;
+		other_dev = PCI_SLOT(other_efct->pcidev->devfn);
+
+		if (bus == other_bus && dev == other_dev &&
+		    timer_pending(&other_efct->xport->stats_timer)) {
+			efc_log_debug(other_efct,
+				       "removing link stats timer\n");
+			del_timer(&other_efct->xport->stats_timer);
+		}
+	}
+
+	if (efct_hw_reset(&efct->hw, EFCT_HW_RESET_FIRMWARE)) {
+		efc_log_test(efct, "failed to reset firmware\n");
+		rc = -1;
+	} else {
+		efc_log_debug(efct,
+			       "successfully reset firmware. Now resetting port\n");
+		/* now flag all functions on the same device
+		 * as this port as uninitialized
+		 */
+		index = 0;
+
+		while ((other_efct = efct_get_instance(index++)) != NULL) {
+			u8 other_bus, other_dev;
+
+			other_bus = other_efct->pcidev->bus->number;
+			other_dev = PCI_SLOT(other_efct->pcidev->devfn);
+
+			if (bus == other_bus && dev == other_dev) {
+				if (other_efct->hw.state !=
+						EFCT_HW_STATE_UNINITIALIZED) {
+					other_efct->hw.state =
+						EFCT_HW_STATE_QUEUES_ALLOCATED;
+				}
+				efct_device_detach(efct);
+				rc = efct_device_attach(efct);
+
+				efc_log_debug(other_efct,
+					       "re-start driver with new firmware\n");
+			}
+		}
+	}
+	return rc;
+}
+
+static void
+efct_fw_write_cb(int status, u32 actual_write_length,
+		 u32 change_status, void *arg)
+{
+	struct efct_fw_write_result *result = arg;
+
+	result->status = status;
+	result->actual_xfer = actual_write_length;
+	result->change_status = change_status;
+
+	complete(&result->done);
+}
+
+static int
+efct_firmware_write(struct efct_s *efct, const u8 *buf, size_t buf_len,
+		    u8 *change_status)
+{
+	int rc = 0;
+	u32 bytes_left;
+	u32 xfer_size;
+	u32 offset;
+	struct efc_dma_s dma;
+	int last = 0;
+	struct efct_fw_write_result result;
+
+	init_completion(&result.done);
+
+	bytes_left = buf_len;
+	offset = 0;
+
+	dma.size = FW_WRITE_BUFSIZE;
+	dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+				      dma.size, &dma.phys, GFP_DMA);
+	if (!dma.virt)
+		return -ENOMEM;
+
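+	/*
+	 * Write the image in FW_WRITE_BUFSIZE (64K) chunks: each chunk is
+	 * copied into the DMA buffer, handed to efct_hw_firmware_write() and
+	 * waited on; the "last" flag marks the final chunk so the completion
+	 * reports the overall change status.
+	 */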
+	while (bytes_left > 0) {
+		if (bytes_left > FW_WRITE_BUFSIZE)
+			xfer_size = FW_WRITE_BUFSIZE;
+		else
+			xfer_size = bytes_left;
+
+		memcpy(dma.virt, buf + offset, xfer_size);
+
+		if (bytes_left == xfer_size)
+			last = 1;
+
+		efct_hw_firmware_write(&efct->hw, &dma, xfer_size, offset,
+				       last, efct_fw_write_cb, &result);
+
+		if (wait_for_completion_interruptible(&result.done) != 0) {
+			rc = -ENXIO;
+			break;
+		}
+
+		if (result.actual_xfer == 0 || result.status != 0) {
+			rc = -EFAULT;
+			break;
+		}
+
+		if (last)
+			*change_status = result.change_status;
+
+		bytes_left -= result.actual_xfer;
+		offset += result.actual_xfer;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev, dma.size, dma.virt, dma.phys);
+	return rc;
+}
+
+int
+efct_request_firmware_update(struct efct_s *efct)
+{
+	int rc = 0;
+	u8 file_name[256], fw_change_status;
+	const struct firmware *fw;
+	struct efct_hw_grp_hdr *fw_image;
+
+	snprintf(file_name, 256, "%s.grp", efct->model);
+	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
+	if (rc) {
+		efc_log_err(efct, "Firmware file(%s) not found.\n", file_name);
+		return rc;
+	}
+	fw_image = (struct efct_hw_grp_hdr *)fw->data;
+
+	/* Check whether the firmware provided is compatible with this
+	 * particular adapter or not
+	 */
+	if ((be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G5) &&
+	    (be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G6)) {
+		efc_log_warn(efct,
+			      "Invalid FW image found Magic: 0x%x Size: %ld\n",
+			be32_to_cpu(fw_image->magic_number), fw->size);
+		rc = -1;
+		goto exit;
+	}
+
+	if (!strncmp(efct->fw_version, fw_image->revision,
+		     strnlen(fw_image->revision, 16))) {
+		efc_log_debug(efct,
+			       "No update req. Firmware is already up to date.\n");
+		rc = 0;
+		goto exit;
+	}
+	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
+	if (rc) {
+		efc_log_err(efct,
+			     "Firmware update failed. Return code = %d\n", rc);
+	} else {
+		efc_log_info(efct, "Firmware updated successfully\n");
+		switch (fw_change_status) {
+		case 0x00:
+			efc_log_debug(efct,
+				       "No reset needed, new firmware is active.\n");
+			break;
+		case 0x01:
+			efc_log_warn(efct,
+				      "A physical device reset (host reboot) is needed to activate the new firmware\n");
+			break;
+		case 0x02:
+		case 0x03:
+			efc_log_debug(efct,
+				       "firmware is resetting to activate the new firmware\n");
+			efct_fw_reset(efct);
+			break;
+		default:
+			efc_log_debug(efct,
+				       "Unexected value change_status: %d\n",
+				fw_change_status);
+			break;
+		}
+	}
+
+exit:
+	release_firmware(fw);
+
+	return rc;
+}
+
+/**
+ * @brief free efct device
+ *
+ * @param efct pointer to efct structure
+ *
+ * @return none
+ */
+
+void efct_device_free(struct efct_s *efct)
+{
+	if (efct) {
+		efct_devices[efct->instance_index] = NULL;
+
+		kfree(efct);
+	}
+}
+
+/**
+ * @brief return the number of interrupts required per HBA
+ *
+ * @param efct pointer to efct structure
+ *
+ * @return the number of interrupts or a negative value on error.
+ */
+int
+efct_device_interrupts_required(struct efct_s *efct)
+{
+	if (efct_hw_setup(&efct->hw, efct, efct->pcidev)
+				!= EFCT_HW_RTN_SUCCESS) {
+		return -1;
+	}
+	return efct_hw_qtop_eq_count(&efct->hw);
+}
+
+static int
+efct_efclib_config(struct efct_s *efct, struct libefc_function_template *tt)
+{
+	struct efc_lport *efc;
+	struct sli4_s	*sli;
+
+	efc = kmalloc(sizeof(*efc), GFP_KERNEL);
+	if (!efc)
+		return -1;
+
+	memset(efc, 0, sizeof(struct efc_lport));
+	efct->efcport = efc;
+
+	memcpy(&efc->tt, tt, sizeof(*tt));
+	efc->base = efct;
+	efc->pcidev = efct->pcidev;
+
+	efc->def_wwnn = efct_get_wwn(&efct->hw, EFCT_HW_WWN_NODE);
+	efc->def_wwpn = efct_get_wwn(&efct->hw, EFCT_HW_WWN_PORT);
+	efc->enable_tgt = 1;
+	efc->log_level = EFC_LOG_LIB;
+
+	sli = &efct->hw.sli;
+	efc->max_xfer_size = sli->sge_supported_length *
+			     sli_get_max_sgl(&efct->hw.sli);
+
+	efcport_init(efc);
+
+	return 0;
+}
+
+/**
+ * @brief Initialize resources when pci devices attach
+ *
+ * @param efct pointer to efct structure
+ *
+ * @return 0 for success, a negative error code value for failure.
+ */
+
+int
+efct_device_attach(struct efct_s *efct)
+{
+	u32 rc = 0, i = 0;
+
+	if (efct->attached) {
+		efc_log_warn(efct, "Device is already attached\n");
+		rc = -1;
+	} else {
+		snprintf(efct->display_name, sizeof(efct->display_name),
+			 "[%s%d] ", "fc",  efct->instance_index);
+
+		efct->logmask = logmask;
+		efct->enable_numa_support = 1;
+		efct->filter_def = "0,0,0,0";
+		efct->max_isr_time_msec = EFCT_OS_MAX_ISR_TIME_MSEC;
+		efct->model =
+			(efct->pcidev->device == EFCT_DEVICE_ID_LPE31004) ?
+			"LPE31004" : "unknown";
+		efct->fw_version = (const char *)efct_hw_get_ptr(&efct->hw,
+							EFCT_HW_FW_REV);
+		efct->driver_version = EFCT_DRIVER_VERSION;
+
+		efct->efct_req_fw_upgrade = true;
+
+		/* Allocate transport object and bring online */
+		efct->xport = efct_xport_alloc(efct);
+		if (!efct->xport) {
+			efc_log_err(efct, "failed to allocate transport object\n");
+			rc = -1;
+		} else if (efct_xport_attach(efct->xport) != 0) {
+			efc_log_err(efct, "failed to attach transport object\n");
+			rc = -1;
+		} else if (efct_xport_initialize(efct->xport) != 0) {
+			efc_log_err(efct, "failed to initialize transport object\n");
+			rc = -1;
+		} else if (efct_efclib_config(efct, &efct_libefc_templ)) {
+			efc_log_err(efct, "failed to init efclib\n");
+			rc = -1;
+		} else if (efct_start_event_processing(efct)) {
+			efc_log_err(efct, "failed to start event processing\n");
+			rc = -1;
+		} else {
+			for (i = 0; i < efct->n_msix_vec; i++) {
+				efc_log_debug(efct, "irq %d enabled\n",
+					efct->msix_vec[i].vector);
+				enable_irq(efct->msix_vec[i].vector);
+			}
+		}
+
+		efct->desc = efct->hw.sli.modeldesc;
+		efc_log_info(efct, "adapter model description: %s\n",
+			      efct->desc);
+
+		if (rc == 0) {
+			efct->attached = true;
+		} else {
+			efct_teardown_msix(efct);
+			if (efct->xport) {
+				efct_xport_free(efct->xport);
+				efct->xport = NULL;
+			}
+		}
+
+		if (efct->efct_req_fw_upgrade) {
+			efc_log_debug(efct, "firmware update is in progress\n");
+			efct_request_firmware_update(efct);
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * @brief interrupt handler
+ *
+ * Interrupt handler
+ *
+ * @param efct pointer to efct structure
+ * @param vector Zero-based interrupt vector number.
+ *
+ * @return none
+ */
+
+static void
+efct_device_interrupt_handler(struct efct_s *efct, u32 vector)
+{
+	efct_hw_process(&efct->hw, vector, efct->max_isr_time_msec);
+}
+
+/**
+ * @brief free resources when pci device detach
+ *
+ * @param efct pointer to efct structure
+ *
+ * @return 0 for success, a negative error code value for failure.
+ */
+
+int
+efct_device_detach(struct efct_s *efct)
+{
+	int rc = 0;
+
+	if (efct) {
+		if (!efct->attached) {
+			efc_log_warn(efct, "Device is not attached\n");
+			return -1;
+		}
+
+		rc = efct_xport_control(efct->xport, EFCT_XPORT_SHUTDOWN);
+		if (rc)
+			efc_log_err(efct, "Transport Shutdown timed out\n");
+
+		efct_stop_event_processing(efct);
+
+		if (efct_xport_detach(efct->xport) != 0)
+			efc_log_err(efct, "Transport detach failed\n");
+
+		efct_xport_free(efct->xport);
+		efct->xport = NULL;
+
+		efcport_destroy(efct->efcport);
+		kfree(efct->efcport);
+
+		efct->attached = false;
+	}
+
+	return 0;
+}
+
+/**
+ * @brief handle MSIX interrupts
+ *
+ * Interrupt entry point for MSIX interrupts.
+ * Simply schedule the interrupt tasklet
+ *
+ * @param irq interrupt request number
+ * @param handle pointer to interrupt context structure
+ *
+ * @return IRQ_HANDLED (always handled)
+ */
+
+static irqreturn_t
+efct_intr_msix(int irq, void *handle)
+{
+	struct efct_os_intr_context_s *intr_context = handle;
+
+	complete(&intr_context->done);
+	return IRQ_HANDLED;
+}
+
+/**
+ * @brief Process interrupt events
+ *
+ * Process events in a kernel thread context. A completion is used: this
+ * function waits on the completion, which the interrupt handler signals.
+ *
+ * @param intr_context pointer to the interrupt context structure
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+static int
+efct_intr_thread(struct efct_os_intr_context_s *intr_context)
+{
+	struct efct_s *efct = intr_context->efct;
+	int rc;
+	u32 tstart, tnow;
+
+	tstart = jiffies_to_msecs(jiffies);
+
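+	/*
+	 * Wait for the MSI-X handler to signal a completion; time out
+	 * periodically so kthread_should_stop() is re-evaluated.
+	 */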
+	while (!kthread_should_stop()) {
+		rc = wait_for_completion_timeout(&intr_context->done,
+				  usecs_to_jiffies(100000));
+		if (!rc)
+			continue;
+
+		efct_device_interrupt_handler(efct, intr_context->index);
+
+		/* If we've been running for too long, then yield */
+		tnow = jiffies_to_msecs(jiffies);
+		if ((tnow - tstart) > 5000) {
+			cond_resched();
+			tstart = tnow;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * @brief setup MSIX interrupts
+ *
+ * Sets up MSI-X interrupts, requesting and using num_interrupts vectors.
+ *
+ * @param efct pointer to efct structure
+ * @param num_interrupts The number of MSI-X interrupts to acquire
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+static int
+efct_setup_msix(struct efct_s *efct, u32 num_interrupts)
+{
+	int	rc = 0;
+	u32 i;
+
+	if (!pci_find_capability(efct->pcidev, PCI_CAP_ID_MSIX)) {
+		dev_err(&efct->pcidev->dev,
+			"%s : MSI-X not available\n", __func__);
+		return -EINVAL;
+	}
+
+	if (num_interrupts > ARRAY_SIZE(efct->msix_vec)) {
+		dev_err(&efct->pcidev->dev,
+			"%s : num_interrupts: %d greater than vectors\n",
+			__func__, num_interrupts);
+		return -1;
+	}
+
+	efct->n_msix_vec = num_interrupts;
+	for (i = 0; i < num_interrupts; i++)
+		efct->msix_vec[i].entry = i;
+
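+	/* Enable exactly n_msix_vec vectors, then request an IRQ for each */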
+	rc = pci_enable_msix_exact(efct->pcidev,
+				   efct->msix_vec, efct->n_msix_vec);
+	if (!rc) {
+		for (i = 0; i < num_interrupts; i++) {
+			rc = request_irq(efct->msix_vec[i].vector,
+					 efct_intr_msix,
+					 0, EFCT_DRIVER_NAME,
+					 &efct->intr_context[i]);
+			if (rc)
+				break;
+		}
+	} else {
+		dev_err(&efct->pcidev->dev,
+			"%s : rc % d returned, IRQ allocation failed\n",
+			   __func__, rc);
+	}
+
+	return rc;
+}
+
+/**
+ * @brief tear down MSIX interrupts
+ *
+ * Previously set up MSI-X interrupts are torn down.
+ *
+ * @param efct pointer to efct structure
+ *
+ * @return none
+ */
+
+static void
+efct_teardown_msix(struct efct_s *efct)
+{
+	u32 i;
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		synchronize_irq(efct->msix_vec[i].vector);
+		free_irq(efct->msix_vec[i].vector,
+			 &efct->intr_context[i]);
+	}
+	pci_disable_msix(efct->pcidev);
+}
+
+static struct pci_device_id efct_pci_table[] = {
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_ID_LPE31004), 0},
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_ID_G7), 0},
+	{}	/* terminate list */
+};
+
+/**
+ * @brief return pointer to efct structure given instance index
+ *
+ * A pointer to an efct structure is returned given an instance index.
+ *
+ * @param index index to efct_devices array
+ *
+ * @return efct pointer
+ */
+
+struct efct_s *efct_get_instance(u32 index)
+{
+	if (index < ARRAY_SIZE(efct_devices))
+		return efct_devices[index];
+
+	return NULL;
+}
+
+/**
+ * @brief instantiate PCI device
+ *
+ * This is the PCI device probe entry point,
+ * called by the Linux PCI subsystem once for
+ * each matching device/function.
+ *
+ * The efct structure is allocated and initialized.
+ *
+ * @param pdev pointer to PCI device structure
+ * @param ent pointer to PCI device ID structure
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+static int
+efct_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+	struct efct_s		*efct = NULL;
+	int		rc;
+	u32	i, r;
+	int		num_interrupts = 0;
+	int		nid;				/* Numa node id */
+	struct task_struct	*thread = NULL;
+
+	dev_info(&pdev->dev, "%s\n", EFCT_DRIVER_NAME);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc)
+		goto efct_pci_probe_err_enable;
+
+	pci_set_master(pdev);
+
+	rc = pci_set_mwi(pdev);
+	if (rc) {
+		dev_info(&pdev->dev,
+			 "pci_set_mwi returned %d\n", rc);
+		goto efct_pci_probe_err_set_mwi;
+	}
+
+	rc = pci_request_regions(pdev, EFCT_DRIVER_NAME);
+	if (rc) {
+		dev_err(&pdev->dev, "pci_request_regions failed\n");
+		goto efct_pci_probe_err_request_regions;
+	}
+
+	/* Fetch the Numa node id for this device */
+	nid = dev_to_node(&pdev->dev);
+	if (nid < 0) {
+		dev_err(&pdev->dev,
+			"Warning Numa node ID is %d\n", nid);
+		nid = 0;
+	}
+
+	/* Allocate efct */
+	efct = efct_device_alloc(nid);
+	if (!efct) {
+		dev_err(&pdev->dev, "Failed to allocate efct_t\n");
+		rc = -ENOMEM;
+		goto efct_pci_probe_err_efct_device_alloc;
+	}
+
+	efct->pcidev = pdev;
+
+	if (efct->enable_numa_support)
+		efct->numa_node = nid;
+
+	/* Map all memory BARs */
+	for (i = 0, r = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
+			efct->reg[r] = ioremap(pci_resource_start(pdev, i),
+						  pci_resource_len(pdev, i));
+			r++;
+		}
+
+		/*
+		 * If the 64-bit attribute is set, both this BAR and the
+		 * next form the complete address. Skip processing the
+		 * next BAR.
+		 */
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM_64)
+			i++;
+	}
+
+	pci_set_drvdata(pdev, efct);
+
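+	/* Prefer a 64-bit DMA mask; fall back to 32-bit if that fails */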
+	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 ||
+	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {
+		dev_warn(&pdev->dev,
+			 "trying DMA_BIT_MASK(32)\n");
+		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 ||
+		    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
+			dev_err(&pdev->dev,
+				"setting DMA_BIT_MASK failed\n");
+			rc = -1;
+			goto efct_pci_probe_err_setup_thread;
+		}
+	}
+
+	num_interrupts = efct_device_interrupts_required(efct);
+	if (num_interrupts < 0) {
+		efc_log_err(efct, "efct_device_interrupts_required failed\n");
+		rc = -1;
+		goto efct_pci_probe_err_setup_thread;
+	}
+
+	/*
+	 * Initialize MSIX interrupts, note,
+	 * efct_setup_msix() enables the interrupt
+	 */
+	rc = efct_setup_msix(efct, num_interrupts);
+	if (rc) {
+		dev_err(&pdev->dev, "Can't setup msix\n");
+		goto efct_pci_probe_err_setup_msix;
+	}
+	/* Disable interrupt for now */
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		efc_log_debug(efct, "irq %d disabled\n",
+			       efct->msix_vec[i].vector);
+		disable_irq(efct->msix_vec[i].vector);
+	}
+
+	rc = efct_device_attach((struct efct_s *)efct);
+	if (rc)
+		goto efct_pci_probe_err_setup_msix;
+
+	return 0;
+
+efct_pci_probe_err_setup_msix:
+	for (i = 0; i < (u32)num_interrupts; i++) {
+		thread = efct->intr_context[i].thread;
+		if (!thread)
+			continue;
+
+		/* Call stop */
+		kthread_stop(thread);
+	}
+
+efct_pci_probe_err_setup_thread:
+	pci_set_drvdata(pdev, NULL);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+	efct_device_free(efct);
+efct_pci_probe_err_efct_device_alloc:
+	pci_release_regions(pdev);
+efct_pci_probe_err_request_regions:
+	pci_clear_mwi(pdev);
+efct_pci_probe_err_set_mwi:
+	pci_disable_device(pdev);
+efct_pci_probe_err_enable:
+	return rc;
+}
+
+/**
+ * @brief remove PCI device instance
+ *
+ * Called when driver is unloaded, once for each PCI device/function instance.
+ *
+ * @param pdev pointer to PCI device structure
+ *
+ * @return none
+ */
+
+static void
+efct_pci_remove(struct pci_dev *pdev)
+{
+	struct efct_s *efct = pci_get_drvdata(pdev);
+	u32	i;
+
+	if (!efct)
+		return;
+
+	efct_device_detach(efct);
+
+	efct_teardown_msix(efct);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+
+	pci_set_drvdata(pdev, NULL);
+
+	efct_devices[efct->instance_index] = NULL;
+
+	efct_device_free(efct);
+
+	pci_release_regions(pdev);
+
+	pci_disable_device(pdev);
+}
+
+/**
+ * efct_device_prep_for_reset - Prepare EFCT device for PCI slot reset
+ * @efct: pointer to EFCT data structure.
+ * @pdev: pointer to PCI device structure.
+ *
+ * This routine is called to prepare the EFCT device for PCI slot reset. It
+ * disables the device interrupt and pci device, and aborts the internal FCP
+ * pending I/Os
+ */
+static void
+efct_device_prep_for_reset(struct efct_s *efct, struct pci_dev *pdev)
+{
+	if (efct) {
+		efc_log_debug(efct,
+			       "PCI channel disable preparing for reset\n");
+		efct_device_detach(efct);
+		/* Disable interrupt and pci device */
+		efct_teardown_msix(efct);
+	}
+	pci_disable_device(pdev);
+}
+
+/**
+ * efct_device_prep_for_recover - Prepare EFCT device for PCI slot recovery
+ * @efct: pointer to EFCT hba data structure.
+ *
+ * This routine is called to prepare the SLI4 device for PCI slot recovery. It
+ * aborts all the outstanding SCSI I/Os to the PCI device.
+ */
+static void
+efct_device_prep_for_recover(struct efct_s *efct)
+{
+	if (efct) {
+		efc_log_debug(efct, "PCI channel preparing for recovery\n");
+		efct_hw_io_abort_all(&efct->hw);
+	}
+}
+
+/**
+ * efct_pci_io_error_detected - method for handling PCI I/O error
+ * @pdev: pointer to PCI device.
+ * @state: the current PCI connection state.
+ *
+ * This routine is registered to the PCI subsystem for error handling. This
+ * function is called by the PCI subsystem after a PCI bus error affecting
+ * this device has been detected. When this routine is invoked, it dispatches
+ * device error detected handling routine, which will perform the proper
+ * error detected operation.
+ *
+ * Return codes
+ * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
+ * PCI_ERS_RESULT_DISCONNECT - device could not be recovered
+ */
+static pci_ers_result_t
+efct_pci_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+{
+	struct efct_s *efct = pci_get_drvdata(pdev);
+	pci_ers_result_t rc;
+
+	switch (state) {
+	case pci_channel_io_normal:
+		efct_device_prep_for_recover(efct);
+		rc = PCI_ERS_RESULT_CAN_RECOVER;
+		break;
+	case pci_channel_io_frozen:
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	case pci_channel_io_perm_failure:
+		efct_device_detach(efct);
+		rc = PCI_ERS_RESULT_DISCONNECT;
+		break;
+	default:
+		efc_log_debug(efct, "Unknown PCI error state:0x%x\n",
+			       state);
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	}
+
+	return rc;
+}
+
+static pci_ers_result_t
+efct_pci_io_slot_reset(struct pci_dev *pdev)
+{
+	int rc;
+	struct efct_s *efct = pci_get_drvdata(pdev);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc) {
+		efc_log_err(efct,
+			     "failed to re-enable PCI device after reset.\n");
+		return PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	/*
+	 * pci_restore_state() clears the device's saved_state flag, so the
+	 * restored state must be saved again.
+	 */
+
+	pci_save_state(pdev);
+
+	pci_set_master(pdev);
+
+	rc = efct_setup_msix(efct, efct->n_msix_vec);
+	if (rc)
+		efc_log_err(efct, "rc %d returned, IRQ allocation failed\n",
+			    rc);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void
+efct_pci_io_resume(struct pci_dev *pdev)
+{
+	struct efct_s *efct = pci_get_drvdata(pdev);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+}
+
+/**
+ * @brief Start event processing
+ *
+ * Start up the threads for event processing
+ *
+ * @param efct pointer to EFCT structure
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+int
+efct_start_event_processing(struct efct_s *efct)
+{
+	u32 i;
+
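+	/* Spawn one interrupt-processing kthread per MSI-X vector */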
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		char label[32];
+		struct efct_os_intr_context_s *intr_ctx = NULL;
+
+		intr_ctx = &efct->intr_context[i];
+
+		intr_ctx->efct = efct;
+		intr_ctx->index = i;
+
+		init_completion(&intr_ctx->done);
+
+		snprintf(label, sizeof(label),
+			 "efct:%d:%d", efct->instance_index, i);
+
+		intr_ctx->thread =
+			kthread_create((int(*)(void *)) efct_intr_thread,
+				       intr_ctx, label);
+
+		if (IS_ERR(intr_ctx->thread)) {
+			efc_log_err(efct, "kthread_create failed: %ld\n",
+				     PTR_ERR(intr_ctx->thread));
+			intr_ctx->thread = NULL;
+
+			return -1;
+		}
+
+		wake_up_process(intr_ctx->thread);
+	}
+
+	return 0;
+}
+
+/**
+ * @brief Stop event processing
+ *
+ * Interrupts are disabled, and any asynchronous thread (or tasklet) is stopped.
+ *
+ * @param efct pointer to EFCT structure
+ *
+ * @return none
+ */
+
+void
+efct_stop_event_processing(struct efct_s *efct)
+{
+	u32 i;
+	struct task_struct *thread = NULL;
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		disable_irq(efct->msix_vec[i].vector);
+
+		thread = efct->intr_context[i].thread;
+		if (!thread)
+			continue;
+
+		/* Call stop */
+		kthread_stop(thread);
+	}
+}
+
+MODULE_DEVICE_TABLE(pci, efct_pci_table);
+
+static struct pci_error_handlers efct_pci_err_handler = {
+	.error_detected = efct_pci_io_error_detected,
+	.slot_reset = efct_pci_io_slot_reset,
+	.resume = efct_pci_io_resume,
+};
+
+static struct pci_driver efct_pci_driver = {
+	.name		= EFCT_DRIVER_NAME,
+	.id_table	= efct_pci_table,
+	.probe		= efct_pci_probe,
+	.remove		= efct_pci_remove,
+	.err_handler	= &efct_pci_err_handler,
+};
+
+static int efct_proc_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, efct_proc_get, NULL);
+}
+
+static int efct_proc_get(struct seq_file *m, void *v)
+{
+	u32 i;
+	u32 j;
+	u32 device_count = 0;
+
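+	/*
+	 * Output format: the first line is the number of attached devices,
+	 * followed by one "<instance>,<vector>,-1" line per MSI-X vector.
+	 */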
+	for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
+		if (efct_devices[i])
+			device_count++;
+	}
+
+	seq_printf(m, "%d\n", device_count);
+
+	for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
+		if (efct_devices[i]) {
+			struct efct_s *efct = efct_devices[i];
+
+			for (j = 0; j < efct->n_msix_vec; j++) {
+				seq_printf(m, "%d,%d,%d\n", i,
+					   efct->msix_vec[j].vector,
+					-1);
+			}
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * @brief driver load entry point
+ *
+ * Called when the driver is loaded: the PCI driver is registered, the
+ * EFCT_DRIVER_NAME proc entry is created, and PCI devices are enumerated.
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+static
+int __init efct_init(void)
+{
+	int	rc = -ENODEV;
+
+	rc = efct_device_init();
+	if (rc) {
+		pr_err("efct_device_init failed rc=%d\n", rc);
+		return -ENOMEM;
+	}
+
+	rc = pci_register_driver(&efct_pci_driver);
+	if (rc)
+		goto l1;
+
+	proc_create(EFCT_DRIVER_NAME, 0444, NULL, &efct_proc_fops);
+	return rc;
+
+l1:
+	efct_device_shutdown();
+	return rc;
+}
+
+/**
+ * @brief driver unload entry point
+ *
+ * Called when the driver is unloaded: the PCI driver is unregistered, the
+ * proc entry is removed, and driver-wide shutdown is performed.
+ *
+ * @return none
+ */
+
+static void __exit efct_exit(void)
+{
+	pci_unregister_driver(&efct_pci_driver);
+	remove_proc_entry(EFCT_DRIVER_NAME, NULL);
+	efct_device_shutdown();
+}
+
+module_init(efct_init);
+module_exit(efct_exit);
+MODULE_VERSION(EFCT_DRIVER_VERSION);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Broadcom");
diff --git a/drivers/scsi/elx/efct/efct_driver.h b/drivers/scsi/elx/efct/efct_driver.h
new file mode 100644
index 000000000000..75b4e6fa18a9
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_DRIVER_H__)
+#define __EFCT_DRIVER_H__
+
+/***************************************************************************
+ * OS specific includes
+ */
+#include <stdarg.h>
+#include <linux/version.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <asm-generic/ioctl.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <asm/byteorder.h>
+#include <linux/timer.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/sched.h>
+#include <asm/current.h>
+#include <asm/cacheflush.h>
+#include <linux/pagemap.h>
+#include <linux/kthread.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/random.h>
+#include <linux/sched.h>
+#include <linux/jiffies.h>
+#include <linux/ctype.h>
+#include <linux/debugfs.h>
+#include <linux/firmware.h>
+#include <linux/sched/signal.h>
+#include "../include/efc_common.h"
+
+#define EFCT_DRIVER_NAME	"efct"
+#define EFCT_DRIVER_VERSION	"1.0.0.0"
+
+/* EFCT_OS_MAX_ISR_TIME_MSEC -
+ * maximum time driver code should spend in an interrupt
+ * or kernel thread context without yielding
+ */
+#define EFCT_OS_MAX_ISR_TIME_MSEC		1000
+
+#define EFCT_FC_RQ_SIZE_DEFAULT			1024
+#define EFCT_FC_MAX_SGL				64
+#define EFCT_FC_DIF_SEED			0
+
+/* Timeouts */
+#define EFCT_FC_ELS_SEND_DEFAULT_TIMEOUT	0
+#define EFCT_FC_ELS_DEFAULT_RETRIES		3
+#define EFCT_FC_FLOGI_TIMEOUT_SEC		5
+#define EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC    30000000 /* 30 seconds */
+
+/* Watermark */
+#define EFCT_WATERMARK_HIGH_PCT			90
+#define EFCT_WATERMARK_LOW_PCT			80
+#define EFCT_IO_WATERMARK_PER_INITIATOR		8
+
+#include "efct_utils.h"
+#include "../libefc/efclib.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+#include "efct_xport.h"
+
+#define EFCT_PCI_MAX_REGS   6
+#define MAX_PCI_INTERRUPTS 16
+struct efct_s {
+	struct pci_dev	*pcidev;
+	void __iomem *reg[EFCT_PCI_MAX_REGS];
+
+	struct msix_entry msix_vec[MAX_PCI_INTERRUPTS];
+	u32 n_msix_vec;
+	struct efct_os_intr_context_s intr_context[MAX_PCI_INTERRUPTS];
+	u32 numa_node;
+
+	char display_name[EFC_DISPLAY_NAME_LENGTH];
+	bool attached;
+	struct efct_scsi_tgt_s tgt_efct;
+	struct efct_xport_s *xport;	/* Pointer to transport object */
+	struct efc_lport *efcport;	/* Discovery library object */
+	struct Scsi_Host *shost;	/* Scsi host for fc_host entries*/
+	int ctrlmask;
+	int logmask;
+	u32 max_isr_time_msec;		/* Maximum ISR time */
+
+	const char *desc;
+	u32 instance_index;
+
+	const char *model;
+	const char *driver_version;
+	const char *fw_version;
+
+	struct efct_hw_s hw;
+
+	u32 num_vports;
+	u32 rq_selection_policy;
+	char *filter_def;
+
+	bool soft_wwn_enable;
+
+	/*
+	 * Target IO timer value:
+	 * Zero: target command timeout disabled.
+	 * Non-zero: Timeout value, in seconds, for target commands
+	 */
+	u32 target_io_timer_sec;
+
+	int speed;
+	int topology;
+
+	bool enable_numa_support;	/* NUMA support enabled */
+	u8 efct_req_fw_upgrade;
+	u16 sw_feature_cap;
+	struct dentry *sess_debugfs_dir;
+};
+
+#define MAX_EFCT_DEVICES  64
+extern struct efct_s *efct_devices[MAX_EFCT_DEVICES];
+
+#define efct_is_fc_initiator_enabled()	(efct->enable_ini)
+#define efct_is_fc_target_enabled()	(efct->enable_tgt)
+
+struct efct_s *efct_get_instance(u32 index);
+void efct_stop_event_processing(struct efct_s *efct_os);
+int efct_start_event_processing(struct efct_s *efct_os);
+
+void *efct_device_alloc(u32 nid);
+int efct_device_interrupts_required(struct efct_s *efct);
+int efct_device_attach(struct efct_s *efct);
+int efct_device_detach(struct efct_s *efct);
+void efct_device_free(struct efct_s *efct);
+int efct_device_ioctl(struct efct_s *efct, unsigned int cmd,
+		      unsigned long arg);
+int efct_device_init(void);
+void efct_device_init_complete(void);
+void efct_device_shutdown(void);
+void efct_device_shutdown_complete(void);
+int efct_request_firmware_update(struct efct_s *efct);
+
+#endif /* __EFCT_DRIVER_H__ */
diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
new file mode 100644
index 000000000000..ecb3ccbf7c4c
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -0,0 +1,1298 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+
+#define EFCT_HW_MQ_DEPTH		128
+#define EFCT_HW_WQ_TIMER_PERIOD_MS	500
+
+#define EFCT_HW_REQUE_XRI_REGTAG	65534
+
+/* HW global data */
+struct efct_hw_global_s hw_global;
+static void
+efct_hw_adjust_wqs(struct efct_hw_s *hw);
+
+/* WQE timeouts */
+static void
+target_wqe_timer_cb(struct timer_list *);
+static void
+shutdown_target_wqe_timer(struct efct_hw_s *hw);
+
+static enum efct_hw_rtn_e
+efct_hw_link_event_init(struct efct_hw_s *hw)
+{
+	hw->link.status = SLI_LINK_STATUS_MAX;
+	hw->link.topology = SLI_LINK_TOPO_NONE;
+	hw->link.medium = SLI_LINK_MEDIUM_MAX;
+	hw->link.speed = 0;
+	hw->link.loop_map = NULL;
+	hw->link.fc_id = U32_MAX;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @brief Adjust the number of WQs and CQs within the HW.
+ *
+ * @par Description
+ * Calculates the number of WQs and associated CQs needed in the HW based on
+ * the number of IOs. Calculates the starting CQ index for each WQ, RQ and
+ * MQ.
+ *
+ * @param hw Hardware context allocated by the caller.
+ */
+static void
+efct_hw_adjust_wqs(struct efct_hw_s *hw)
+{
+	u32 max_wq_num = hw->sli.qinfo.max_qcount[SLI_QTYPE_WQ];
+	u32 max_wq_entries = hw->num_qentries[SLI_QTYPE_WQ];
+	u32 max_cq_entries = hw->num_qentries[SLI_QTYPE_CQ];
+
+	/*
+	 * possibly adjust the size of the WQs so that the CQ is twice as
+	 * big as the WQ to allow for 2 completions per IO. This allows us to
+	 * handle multi-phase as well as aborts.
+	 */
+	if (max_cq_entries < max_wq_entries * 2) {
+		hw->num_qentries[SLI_QTYPE_WQ] = max_cq_entries / 2;
+		max_wq_entries =  hw->num_qentries[SLI_QTYPE_WQ];
+	}
+
+	/*
+	 * Calculate the number of WQs to use based on the number of IOs.
+	 *
+	 * Note: We need to reserve room for aborts which must be sent down
+	 *       the same WQ as the IO. So we allocate enough WQ space to
+	 *       handle 2 times the number of IOs. Half of the space will be
+	 *       used for normal IOs and the other half is reserved for aborts.
+	 */
+	hw->config.n_wq = ((hw->config.n_io * 2) + (max_wq_entries - 1))
+			    / max_wq_entries;
+
+	/* make sure we haven't exceeded the max supported in the HW */
+	if (hw->config.n_wq > EFCT_HW_MAX_NUM_WQ)
+		hw->config.n_wq = EFCT_HW_MAX_NUM_WQ;
+
+	/* make sure we haven't exceeded the chip maximum */
+	if (hw->config.n_wq > max_wq_num)
+		hw->config.n_wq = max_wq_num;
+
+}
+
+static inline void
+efct_hw_add_io_timed_wqe(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	unsigned long flags = 0;
+
+	if (hw->config.emulate_tgt_wqe_timeout && io->tgt_wqe_timeout) {
+		/*
+		 * Active WQE list currently only used for
+		 * target WQE timeouts.
+		 */
+		spin_lock_irqsave(&hw->io_lock, flags);
+		INIT_LIST_HEAD(&io->wqe_link);
+		list_add_tail(&io->wqe_link, &hw->io_timed_wqe);
+		io->submit_ticks = jiffies_64;
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+	}
+}
+
+static inline void
+efct_hw_remove_io_timed_wqe(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	unsigned long flags = 0;
+
+	if (hw->config.emulate_tgt_wqe_timeout) {
+		/*
+		 * If target wqe timeouts are enabled,
+		 * remove from active wqe list.
+		 */
+		spin_lock_irqsave(&hw->io_lock, flags);
+		if (io->wqe_link.next)
+			list_del(&io->wqe_link);
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+	}
+}
+
+/**
+ * @ingroup devInitShutdown
+ * @brief If this is physical port 0, then read the max dump size.
+ *
+ * @par Description
+ * Queries the FW for the maximum dump size
+ *
+ * @param hw Hardware context allocated by the caller.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static enum efct_hw_rtn_e
+efct_hw_read_max_dump_size(struct efct_hw_s *hw)
+{
+	u8	buf[SLI4_BMBX_SIZE];
+	u8 func;
+	struct efct_s *efct = hw->os;
+	int	rc = 0;
+
+	/* attempt to determine the dump size for function 0 only. */
+	func = PCI_FUNC(efct->pcidev->devfn);
+	if (func == 0) {
+		if (!sli_cmd_common_set_dump_location(&hw->sli, buf,
+						     SLI4_BMBX_SIZE, 1, 0,
+						     NULL, 0)) {
+			struct sli4_rsp_cmn_set_dump_location_s *rsp =
+				(struct sli4_rsp_cmn_set_dump_location_s *)
+				(buf + offsetof(struct sli4_cmd_sli_config_s,
+						payload.embed));
+
+			rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL,
+					     NULL);
+			if (rc != EFCT_HW_RTN_SUCCESS) {
+				efc_log_test(hw->os,
+					      "set dump location cmd failed\n");
+				return rc;
+			}
+			hw->dump_size =
+				(le32_to_cpu(rsp->buffer_length_dword) &
+				 RSP_SET_DUMP_BUFFER_LEN);
+			efc_log_debug(hw->os, "Dump size %x\n",
+				       hw->dump_size);
+		}
+	}
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup devInitShutdown
+ * @brief Set up the Hardware Abstraction Layer module.
+ *
+ * @par Description
+ * Calls set up to configure the hardware.
+ *
+ * @param hw Hardware context allocated by the caller.
+ * @param os Device abstraction.
+ * @param pdev Pointer to the PCI device.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_setup(struct efct_hw_s *hw, void *os, struct pci_dev *pdev)
+{
+	u32 i;
+	struct sli4_s *sli = &hw->sli;
+
+	if (!hw) {
+		pr_err("bad parameter(s) hw=%p\n", hw);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->hw_setup_called)
+		return EFCT_HW_RTN_SUCCESS;
+
+	/*
+	 * efct_hw_init() relies on NULL pointers indicating that a structure
+	 * needs allocation. If a structure is non-NULL, efct_hw_init() won't
+	 * free/realloc that memory
+	 */
+	memset(hw, 0, sizeof(struct efct_hw_s));
+
+	hw->hw_setup_called = true;
+
+	hw->os = os;
+
+	spin_lock_init(&hw->cmd_lock);
+	INIT_LIST_HEAD(&hw->cmd_head);
+	INIT_LIST_HEAD(&hw->cmd_pending);
+	hw->cmd_head_count = 0;
+
+	spin_lock_init(&hw->io_lock);
+	spin_lock_init(&hw->io_abort_lock);
+
+	atomic_set(&hw->io_alloc_failed_count, 0);
+
+	hw->config.speed = FC_LINK_SPEED_AUTO_16_8_4;
+	hw->config.dif_seed = 0;
+	if (sli_setup(&hw->sli, hw->os, pdev, ((struct efct_s *)os)->reg)) {
+		efc_log_err(hw->os, "SLI setup failed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_link_event_init(hw);
+
+	sli_callback(&hw->sli, SLI4_CB_LINK, efct_hw_cb_link, hw);
+
+	/*
+	 * Set all the queue sizes to the maximum allowed.
+	 */
+	for (i = 0; i < ARRAY_SIZE(hw->num_qentries); i++)
+		hw->num_qentries[i] = hw->sli.qinfo.max_qentries[i];
+
+	/*
+	 * The RQ assignment for RQ pair mode.
+	 */
+	hw->config.rq_default_buffer_size = EFCT_HW_RQ_SIZE_PAYLOAD;
+	hw->config.n_io = hw->sli.extent[SLI_RSRC_XRI].size;
+
+	(void)efct_hw_read_max_dump_size(hw);
+
+	/* calculate the number of WQs required. */
+	efct_hw_adjust_wqs(hw);
+
+	/* Set the default dif mode */
+	if (!(sli->features & SLI4_REQFEAT_DIF &&
+	      sli->t10_dif_inline_capable)) {
+		efc_log_test(hw->os,
+			      "not inline capable, setting mode to separate\n");
+		hw->config.dif_mode = EFCT_HW_DIF_MODE_SEPARATE;
+	}
+
+	hw->config.queue_topology = hw_global.queue_topology_string;
+
+	hw->qtop = efct_hw_qtop_parse(hw, hw->config.queue_topology);
+	if (!hw->qtop) {
+		efc_log_crit(hw->os, "Queue topology string is invalid\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	hw->config.n_eq = hw->qtop->entry_counts[QTOP_EQ];
+	hw->config.n_cq = hw->qtop->entry_counts[QTOP_CQ];
+	hw->config.n_rq = hw->qtop->entry_counts[QTOP_RQ];
+	hw->config.n_wq = hw->qtop->entry_counts[QTOP_WQ];
+	hw->config.n_mq = hw->qtop->entry_counts[QTOP_MQ];
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static void
+efct_logfcfi(struct efct_hw_s *hw, u32 j, u32 i, u32 id)
+{
+	efc_log_info(hw->os,
+		      "REG_FCFI: filter[%d] %08X -> RQ[%d] id=%d\n",
+		     j, hw->config.filter_def[j], i, id);
+}
+
+/**
+ * @ingroup devInitShutdown
+ * @brief Allocate memory structures to prepare for the device operation.
+ *
+ * @par Description
+ * Allocates memory structures needed by the device and prepares the device
+ * for operation.
+ * @n @n @b Note: This function may be called more than once (for example, at
+ * initialization and then after a reset), but the size of the internal
+ * resources may not be changed without tearing down the HW
+ * (efct_hw_teardown()).
+ *
+ * @param hw Hardware context allocated by the caller.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_init(struct efct_hw_s *hw)
+{
+	enum efct_hw_rtn_e	rc;
+	u32	i = 0;
+	u8		buf[SLI4_BMBX_SIZE];
+	u32	max_rpi;
+	int		rem_count;
+	u32	count;
+	unsigned long flags = 0;
+	struct efct_hw_io_s *temp;
+	struct sli4_cmd_rq_cfg_s rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	struct sli4_s *sli = &hw->sli;
+	struct efct_s *efct = hw->os;
+
+	/*
+	 * Make sure the command lists are empty. If this is start-of-day,
+	 * they'll be empty since they were just initialized in efct_hw_setup.
+	 * If we've just gone through a reset, the command and command pending
+	 * lists should have been cleaned up as part of the reset
+	 * (efct_hw_reset()).
+	 */
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		efc_log_test(hw->os, "command found on cmd list\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (!list_empty(&hw->cmd_pending)) {
+		efc_log_test(hw->os,
+			      "command found on pending list\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	/* Free RQ buffers if previously allocated */
+	efct_hw_rx_free(hw);
+
+	/*
+	 * The IO queues must be initialized here for the reset case. The
+	 * efct_hw_init_io() function will re-add the IOs to the free list.
+	 * The cmd_head list should be OK since we free all entries in
+	 * efct_hw_command_cancel() that is called in the efct_hw_reset().
+	 */
+
+	/* If we are in this function due to a reset, there may be stale items
+	 * on lists that need to be removed.  Clean them up.
+	 */
+	rem_count = 0;
+	if (hw->io_wait_free.next) {
+		while ((!list_empty(&hw->io_wait_free))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_wait_free,
+						struct efct_hw_io_s,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0) {
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_wait_free list\n",
+				rem_count);
+		}
+	}
+	rem_count = 0;
+	if (hw->io_inuse.next) {
+		while ((!list_empty(&hw->io_inuse))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_inuse,
+						struct efct_hw_io_s,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0)
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_inuse list\n",
+				       rem_count);
+	}
+	rem_count = 0;
+	if (hw->io_free.next) {
+		while ((!list_empty(&hw->io_free))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_free,
+						struct efct_hw_io_s,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0)
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_free list\n",
+				       rem_count);
+	}
+
+	INIT_LIST_HEAD(&hw->io_inuse);
+	INIT_LIST_HEAD(&hw->io_free);
+	INIT_LIST_HEAD(&hw->io_wait_free);
+	INIT_LIST_HEAD(&hw->io_timed_wqe);
+
+	/* If MRQ is not required, make sure we don't request the feature. */
+	if (hw->config.n_rq == 1)
+		hw->sli.features &= (~SLI4_REQFEAT_MRQP);
+
+	if (sli_init(&hw->sli)) {
+		efc_log_err(hw->os, "SLI failed to initialize\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->sliport_healthcheck) {
+		rc = efct_hw_config_sli_port_health_check(hw, 0, 1);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "Enable port Health check fail\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * Set FDT transfer hint, only works on Lancer
+	 */
+	if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2 &&
+	    EFCT_HW_FDT_XFER_HINT != 0) {
+		/*
+		 * Non-fatal error. In particular, we can disregard failure to
+		 * set EFCT_HW_FDT_XFER_HINT on devices with legacy firmware
+		 * that do not support EFCT_HW_FDT_XFER_HINT feature.
+		 */
+		efct_hw_config_set_fdt_xfer_hint(hw, EFCT_HW_FDT_XFER_HINT);
+	}
+
+	/*
+	 * Verify that we have not exceeded any queue sizes
+	 */
+	if (hw->config.n_eq > sli->qinfo.max_qcount[SLI_QTYPE_EQ]) {
+		efc_log_err(hw->os, "requested %d EQ but %d allowed\n",
+			     hw->config.n_eq,
+			sli->qinfo.max_qcount[SLI_QTYPE_EQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_cq > sli->qinfo.max_qcount[SLI_QTYPE_CQ]) {
+		efc_log_err(hw->os, "requested %d CQ but %d allowed\n",
+			     hw->config.n_cq,
+			sli->qinfo.max_qcount[SLI_QTYPE_CQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_mq > sli->qinfo.max_qcount[SLI_QTYPE_MQ]) {
+		efc_log_err(hw->os, "requested %d MQ but %d allowed\n",
+			     hw->config.n_mq,
+			sli->qinfo.max_qcount[SLI_QTYPE_MQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_rq > sli->qinfo.max_qcount[SLI_QTYPE_RQ]) {
+		efc_log_err(hw->os, "requested %d RQ but %d allowed\n",
+			     hw->config.n_rq,
+			sli->qinfo.max_qcount[SLI_QTYPE_RQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_wq > sli->qinfo.max_qcount[SLI_QTYPE_WQ]) {
+		efc_log_err(hw->os, "requested %d WQ but %d allowed\n",
+			     hw->config.n_wq,
+			sli->qinfo.max_qcount[SLI_QTYPE_WQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* zero the hashes */
+	memset(hw->cq_hash, 0, sizeof(hw->cq_hash));
+	efc_log_debug(hw->os, "Max CQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_CQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->rq_hash, 0, sizeof(hw->rq_hash));
+	efc_log_debug(hw->os, "Max RQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_RQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->wq_hash, 0, sizeof(hw->wq_hash));
+	efc_log_debug(hw->os, "Max WQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_WQ, EFCT_HW_Q_HASH_SIZE);
+
+	rc = efct_hw_init_queues(hw, hw->qtop);
+	if (rc != EFCT_HW_RTN_SUCCESS)
+		return rc;
+
+	max_rpi = sli->extent[SLI_RSRC_RPI].size;
+	i = sli_fc_get_rpi_requirements(&hw->sli, max_rpi);
+	if (i) {
+		struct efc_dma_s payload_memory;
+
+		rc = EFCT_HW_RTN_ERROR;
+
+		if (hw->rnode_mem.size) {
+			dma_free_coherent(&efct->pcidev->dev,
+					  hw->rnode_mem.size,
+					  hw->rnode_mem.virt,
+					  hw->rnode_mem.phys);
+			memset(&hw->rnode_mem, 0, sizeof(struct efc_dma_s));
+		}
+
+		hw->rnode_mem.size = i;
+		hw->rnode_mem.virt = dma_alloc_coherent(&efct->pcidev->dev,
+							hw->rnode_mem.size,
+							&hw->rnode_mem.phys,
+							GFP_DMA);
+		if (!hw->rnode_mem.virt) {
+			efc_log_err(hw->os,
+				     "remote node memory allocation fail\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+
+		payload_memory.size = 0;
+		if (!sli_cmd_post_hdr_templates(&hw->sli, buf,
+					       SLI4_BMBX_SIZE,
+						    &hw->rnode_mem,
+						    U16_MAX,
+						    &payload_memory)) {
+			rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL,
+					     NULL);
+
+			if (payload_memory.size != 0) {
+				/*
+				 * The command was non-embedded - need to
+				 * free the dma buffer
+				 */
+				dma_free_coherent(&efct->pcidev->dev,
+						  payload_memory.size,
+						  payload_memory.virt,
+						  payload_memory.phys);
+				memset(&payload_memory, 0,
+				       sizeof(struct efc_dma_s));
+			}
+		}
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "header template registration failed\n");
+			return rc;
+		}
+	}
+
+	/* Allocate and post RQ buffers */
+	rc = efct_hw_rx_allocate(hw);
+	if (rc) {
+		efc_log_err(hw->os, "rx_allocate failed\n");
+		return rc;
+	}
+
+	/* Populate hw->seq_free_list */
+	if (!hw->seq_pool) {
+		u32 count = 0;
+		u32 i;
+
+		/*
+		 * Sum up the total number of RQ entries, to use to allocate
+		 * the sequence object pool
+		 */
+		for (i = 0; i < hw->hw_rq_count; i++)
+			count += hw->hw_rq[i]->entry_count;
+
+		hw->seq_pool = efct_array_alloc(hw->os,
+					sizeof(struct efc_hw_sequence_s),
+						count);
+		if (!hw->seq_pool) {
+			efc_log_err(hw->os, "malloc seq_pool failed\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	if (efct_hw_rx_post(hw))
+		efc_log_err(hw->os, "WARNING - error posting RQ buffers\n");
+
+	/* Allocate rpi_ref if not previously allocated */
+	if (!hw->rpi_ref) {
+		hw->rpi_ref = kmalloc_array(max_rpi, sizeof(*hw->rpi_ref),
+				      GFP_KERNEL);
+		if (!hw->rpi_ref)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		memset(hw->rpi_ref, 0, max_rpi * sizeof(*hw->rpi_ref));
+	}
+
+	for (i = 0; i < max_rpi; i++) {
+		atomic_set(&hw->rpi_ref[i].rpi_count, 0);
+		atomic_set(&hw->rpi_ref[i].rpi_attached, 0);
+	}
+
+	/*
+	 * Register a FCFI to allow unsolicited frames to be routed to the
+	 * driver
+	 */
+	if (hw->hw_mrq_count) {
+		efc_log_info(hw->os, "using REG_FCFI MRQ\n");
+
+		rc = efct_hw_config_mrq(hw,
+					SLI4_CMD_REG_FCFI_SET_FCFI_MODE,
+				0);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "REG_FCFI_MRQ FCFI reg failed\n");
+			return rc;
+		}
+
+		rc = efct_hw_config_mrq(hw,
+					SLI4_CMD_REG_FCFI_SET_MRQ_MODE,
+					0);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "REG_FCFI_MRQ MRQ reg failed\n");
+			return rc;
+		}
+	} else {
+		u32 min_rq_count;
+
+		efc_log_info(hw->os, "using REG_FCFI standard\n");
+
+		/*
+		 * Set the filter match/mask values from hw's
+		 * filter_def values
+		 */
+		for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+			rq_cfg[i].rq_id = 0xffff;
+			rq_cfg[i].r_ctl_mask = (u8)
+					hw->config.filter_def[i];
+			rq_cfg[i].r_ctl_match = (u8)
+					(hw->config.filter_def[i] >>
+					 8);
+			rq_cfg[i].type_mask = (u8)
+					 (hw->config.filter_def[i] >>
+					  16);
+			rq_cfg[i].type_match = (u8)
+					 (hw->config.filter_def[i] >>
+					  24);
+		}
+
+		/*
+		 * Update the rq_id's of the FCF configuration
+		 * (don't update more than the number of rq_cfg
+		 * elements)
+		 */
+		min_rq_count = (hw->hw_rq_count <
+				SLI4_CMD_REG_FCFI_NUM_RQ_CFG)
+				? hw->hw_rq_count :
+				SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
+		for (i = 0; i < min_rq_count; i++) {
+			struct hw_rq_s *rq = hw->hw_rq[i];
+			u32 j;
+
+			for (j = 0; j < SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
+			     j++) {
+				u32 mask = (rq->filter_mask != 0) ?
+						 rq->filter_mask : 1;
+
+				if (mask & (1U << j)) {
+					rq_cfg[j].rq_id = rq->hdr->id;
+					efct_logfcfi(hw, j, i,
+						     rq->hdr->id);
+				}
+			}
+		}
+
+		rc = EFCT_HW_RTN_ERROR;
+		if (!sli_cmd_reg_fcfi(&hw->sli, buf,
+				     SLI4_BMBX_SIZE, 0,
+					  rq_cfg)) {
+			rc = efct_hw_command(hw, buf, EFCT_CMD_POLL,
+					     NULL, NULL);
+		}
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "FCFI registration failed\n");
+			return rc;
+		}
+		hw->fcf_indicator =
+		le16_to_cpu(((struct sli4_cmd_reg_fcfi_s *)buf)->fcfi);
+	}
+
+	/*
+	 * Allocate the WQ request tag pool, if not previously allocated
+	 * (the request tag value is 16 bits, thus the pool allocation size
+	 * of 64k)
+	 */
+	rc = efct_hw_reqtag_init(hw);
+	if (rc) {
+		efc_log_err(hw->os, "efct_hw_reqtag_init failed %d\n", rc);
+		return rc;
+	}
+
+	rc = efct_hw_setup_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO allocation failure\n");
+		return rc;
+	}
+
+	rc = efct_hw_init_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO initialization failure\n");
+		return rc;
+	}
+
+	/* Set the DIF seed - only for lancer right now */
+	if (sli->if_type == SLI4_INTF_IF_TYPE_2 &&
+	    efct_hw_set_dif_seed(hw) != EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(hw->os, "Failed to set DIF seed value\n");
+		return rc;
+	}
+
+	/*
+	 * Arming the EQ allows (e.g.) interrupts when CQ completions write EQ
+	 * entries
+	 */
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_arm(&hw->sli, &hw->eq[i], true);
+
+	/*
+	 * Initialize RQ hash
+	 */
+	for (i = 0; i < hw->rq_count; i++)
+		efct_hw_queue_hash_add(hw->rq_hash, hw->rq[i].id, i);
+
+	/*
+	 * Initialize WQ hash
+	 */
+	for (i = 0; i < hw->wq_count; i++)
+		efct_hw_queue_hash_add(hw->wq_hash, hw->wq[i].id, i);
+
+	/*
+	 * Arming the CQ allows (e.g.) MQ completions to write CQ entries
+	 */
+	for (i = 0; i < hw->cq_count; i++) {
+		efct_hw_queue_hash_add(hw->cq_hash, hw->cq[i].id, i);
+		sli_queue_arm(&hw->sli, &hw->cq[i], true);
+	}
+
+	/* record the fact that the queues are functional */
+	hw->state = EFCT_HW_STATE_ACTIVE;
+
+	/* finally kick off periodic timer to check for timed out target WQEs */
+	if (hw->config.emulate_tgt_wqe_timeout) {
+		timer_setup(&hw->wqe_timer, &target_wqe_timer_cb, 0);
+
+		mod_timer(&hw->wqe_timer, jiffies +
+			  msecs_to_jiffies(EFCT_HW_WQ_TIMER_PERIOD_MS));
+	}
+	/*
+	 * Allocate HW IOs for send frame.  Allocate one for each Class 1
+	 * WQ, or if there are none of those, allocate one for WQ[0]
+	 */
+	count = efct_varray_get_count(hw->wq_class_array[1]);
+	if (count > 0) {
+		struct hw_wq_s *wq;
+
+		for (i = 0; i < count; i++) {
+			wq = efct_varray_iter_next(hw->wq_class_array[1]);
+			wq->send_frame_io = efct_hw_io_alloc(hw);
+			if (!wq->send_frame_io)
+				efc_log_err(hw->os,
+					     "alloc for send_frame_io failed\n");
+		}
+	} else {
+		hw->hw_wq[0]->send_frame_io = efct_hw_io_alloc(hw);
+		if (!hw->hw_wq[0]->send_frame_io)
+			efc_log_err(hw->os,
+				     "alloc for send_frame_io failed\n");
+	}
+
+	/* Initialize the send frame sequence id */
+	atomic_set(&hw->send_frame_seq_id, 0);
+
+	/* Initialize watchdog timer if enabled by user */
+	if (hw->watchdog_timeout) {
+		if (hw->watchdog_timeout < 1 ||
+		    hw->watchdog_timeout > 65534)
+			efc_log_err(hw->os,
+				     "WDT out of range: range is 1 - 65534\n");
+		else if (!efct_hw_config_watchdog_timer(hw))
+			efc_log_info(hw->os,
+				      "WDT timer config with tmo = %d secs\n",
+				     hw->watchdog_timeout);
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @brief Configure Multi-RQ
+ *
+ * @param hw       Hardware context allocated by the caller.
+ * @param mode      1 to set MRQ filters and 0 to set FCFI index
+ * @param fcf_index valid in mode 0
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+efct_hw_config_mrq(struct efct_hw_s *hw, u8 mode, u16 fcf_index)
+{
+	u8 buf[SLI4_BMBX_SIZE], mrq_bitmask = 0;
+	struct hw_rq_s *rq;
+	struct sli4_cmd_reg_fcfi_mrq_s *rsp = NULL;
+	u32 i, j;
+	struct sli4_cmd_rq_cfg_s rq_filter[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	int rc;
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		goto issue_cmd;
+
+	/* Set the filter match/mask values from hw's filter_def values */
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		rq_filter[i].rq_id = 0xffff;
+		rq_filter[i].r_ctl_mask  = (u8)
+					    hw->config.filter_def[i];
+		rq_filter[i].r_ctl_match = (u8)
+					    (hw->config.filter_def[i] >> 8);
+		rq_filter[i].type_mask   = (u8)
+					    (hw->config.filter_def[i] >> 16);
+		rq_filter[i].type_match  = (u8)
+					    (hw->config.filter_def[i] >> 24);
+	}
+
+	/* Accumulate counts for each filter type used, build rq_ids[] list */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+		for (j = 0; j < SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG; j++) {
+			if (rq->filter_mask & (1U << j)) {
+				if (rq_filter[j].rq_id != 0xffff) {
+					/*
+					 * Already used. Bail out if it's not
+					 * the RQset case.
+					 */
+					if (!rq->is_mrq ||
+					    rq_filter[j].rq_id !=
+					     rq->base_mrq_id) {
+						efc_log_err(hw->os,
+							     "Wrong q top.\n");
+						return EFCT_HW_RTN_ERROR;
+					}
+					continue;
+				}
+
+				if (rq->is_mrq) {
+					rq_filter[j].rq_id = rq->base_mrq_id;
+					mrq_bitmask |= (1U << j);
+				} else {
+					rq_filter[j].rq_id = rq->hdr->id;
+				}
+			}
+		}
+	}
+
+issue_cmd:
+	/* Invoke REG_FCFI_MRQ */
+	rc = sli_cmd_reg_fcfi_mrq(&hw->sli,
+				  buf,	/* buf */
+				 SLI4_BMBX_SIZE, /* size */
+				 mode, /* mode 1 */
+				 fcf_index, /* fcf_index */
+				 /* RQ selection policy*/
+				 hw->config.rq_selection_policy,
+				 mrq_bitmask, /* MRQ bitmask */
+				 hw->hw_mrq_count, /* num_mrqs */
+				 rq_filter);/* RQ filter */
+	if (rc) {
+		efc_log_err(hw->os,
+			     "sli_cmd_reg_fcfi_mrq() failed: %d\n", rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+
+	rsp = (struct sli4_cmd_reg_fcfi_mrq_s *)buf;
+
+	if (rc != EFCT_HW_RTN_SUCCESS ||
+	    le16_to_cpu(rsp->hdr.status)) {
+		efc_log_err(hw->os,
+			     "FCFI MRQ reg failed. cmd = %x status = %x\n",
+			     rsp->hdr.command,
+			     le16_to_cpu(rsp->hdr.status));
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		hw->fcf_indicator = le16_to_cpu(rsp->fcfi);
+	return 0;
+}
+
+enum efct_hw_rtn_e
+efct_hw_set(struct efct_hw_s *hw, enum efct_hw_property_e prop, u32 value)
+{
+	enum efct_hw_rtn_e		rc = EFCT_HW_RTN_SUCCESS;
+	struct sli4_s *sli = &hw->sli;
+
+	switch (prop) {
+	case EFCT_HW_N_IO:
+		if (value > sli->extent[SLI_RSRC_XRI].size ||
+		    value == 0) {
+			efc_log_test(hw->os,
+				      "IO value out of range %d vs %d\n",
+				     value,
+				sli->extent[SLI_RSRC_XRI].size);
+			rc = EFCT_HW_RTN_ERROR;
+		} else {
+			hw->config.n_io = value;
+		}
+		break;
+	case EFCT_HW_N_SGL:
+		value += SLI4_SGE_MAX_RESERVED;
+		if (value > sli_get_max_sgl(&hw->sli)) {
+			efc_log_test(hw->os,
+				      "SGL value out of range %d vs %d\n",
+				     value, sli_get_max_sgl(&hw->sli));
+			rc = EFCT_HW_RTN_ERROR;
+		} else {
+			hw->config.n_sgl = value;
+		}
+		break;
+	case EFCT_HW_TOPOLOGY:
+		switch (value) {
+		case EFCT_HW_TOPOLOGY_AUTO:
+			sli_set_topology(&hw->sli,
+					 SLI4_READ_CFG_TOPO_FC);
+			break;
+		case EFCT_HW_TOPOLOGY_NPORT:
+			sli_set_topology(&hw->sli, SLI4_READ_CFG_TOPO_FC_DA);
+			break;
+		case EFCT_HW_TOPOLOGY_LOOP:
+			sli_set_topology(&hw->sli, SLI4_READ_CFG_TOPO_FC_AL);
+			break;
+		default:
+			efc_log_test(hw->os,
+				      "unsupported topology %#x\n", value);
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		hw->config.topology = value;
+		break;
+	case EFCT_HW_LINK_SPEED:
+
+		switch (value) {
+		case 0:		/* Auto-speed negotiation */
+			hw->config.speed = FC_LINK_SPEED_AUTO_16_8_4;
+			break;
+		case 2000:	/* FC speeds */
+			hw->config.speed = FC_LINK_SPEED_2G;
+			break;
+		case 4000:
+			hw->config.speed = FC_LINK_SPEED_4G;
+			break;
+		case 8000:
+			hw->config.speed = FC_LINK_SPEED_8G;
+			break;
+		case 16000:
+			hw->config.speed = FC_LINK_SPEED_16G;
+			break;
+		case 32000:
+			hw->config.speed = FC_LINK_SPEED_32G;
+			break;
+		default:
+			efc_log_test(hw->os, "unsupported speed %d\n", value);
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_RQ_PROCESS_LIMIT: {
+		struct hw_rq_s *rq;
+		u32 i;
+
+		/* For each hw_rq object, set its parent CQ limit value */
+		for (i = 0; i < hw->hw_rq_count; i++) {
+			rq = hw->hw_rq[i];
+			hw->cq[rq->cq->instance].proc_limit = value;
+		}
+		break;
+	}
+	case EFCT_HW_RQ_DEFAULT_BUFFER_SIZE:
+		hw->config.rq_default_buffer_size = value;
+		break;
+	case EFCT_ESOC:
+		hw->config.esoc = value;
+		break;
+	case EFCT_HW_HIGH_LOGIN_MODE:
+		rc = sli_set_hlm(&hw->sli, value);
+		break;
+	case EFCT_HW_PREREGISTER_SGL:
+		rc = sli_set_sgl_preregister(&hw->sli, value);
+		break;
+	case EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT:
+		hw->config.emulate_tgt_wqe_timeout = value;
+		break;
+	case EFCT_HW_BOUNCE:
+		hw->config.bounce = value;
+		break;
+	case EFCT_HW_RQ_SELECTION_POLICY:
+		hw->config.rq_selection_policy = value;
+		break;
+	case EFCT_HW_RR_QUANTA:
+		hw->config.rr_quanta = value;
+		break;
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn_e
+efct_hw_set_ptr(struct efct_hw_s *hw, enum efct_hw_property_e prop,
+		void *value)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+
+	switch (prop) {
+	case EFCT_HW_WAR_VERSION:
+		hw->hw_war_version = value;
+		break;
+	case EFCT_HW_FILTER_DEF: {
+		char *p = NULL;
+		char *token;
+		u32 idx = 0;
+
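+		/*
+		 * Parse the comma-separated filter_def string (for example
+		 * "0,0,0,0") into the per-RQ filter configuration values.
+		 */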
+		for (idx = 0; idx < ARRAY_SIZE(hw->config.filter_def); idx++)
+			hw->config.filter_def[idx] = 0;
+
+		p = kstrdup(value, GFP_KERNEL);
+		if (!p || !*p) {
+			efc_log_err(hw->os, "p is NULL\n");
+			break;
+		}
+
+		idx = 0;
+		while ((token = strsep(&p, ",")) && *token) {
+			if (kstrtou32(token, 0, &hw->config.filter_def[idx++]))
+				efc_log_err(hw->os, "kstrtoint failed\n");
+
+			if (!p || !*p)
+				break;
+
+			if (idx == ARRAY_SIZE(hw->config.filter_def))
+				break;
+		}
+		kfree(p);
+
+		break;
+	}
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+		rc = EFCT_HW_RTN_ERROR;
+		break;
+	}
+	return rc;
+}
+
+enum efct_hw_rtn_e
+efct_hw_get(struct efct_hw_s *hw, enum efct_hw_property_e prop,
+	    u32 *value)
+{
+	enum efct_hw_rtn_e		rc = EFCT_HW_RTN_SUCCESS;
+	int			tmp;
+	struct sli4_s *sli = &hw->sli;
+
+	if (!value)
+		return EFCT_HW_RTN_ERROR;
+
+	*value = 0;
+
+	switch (prop) {
+	case EFCT_HW_N_IO:
+		*value = hw->config.n_io;
+		break;
+	case EFCT_HW_N_SGL:
+		*value = (hw->config.n_sgl - SLI4_SGE_MAX_RESERVED);
+		break;
+	case EFCT_HW_MAX_IO:
+		*value = sli->extent[SLI_RSRC_XRI].size;
+		break;
+	case EFCT_HW_MAX_NODES:
+		*value = sli->extent[SLI_RSRC_RPI].size;
+		break;
+	case EFCT_HW_MAX_RQ_ENTRIES:
+		*value = hw->num_qentries[SLI_QTYPE_RQ];
+		break;
+	case EFCT_HW_RQ_DEFAULT_BUFFER_SIZE:
+		*value = hw->config.rq_default_buffer_size;
+		break;
+	case EFCT_HW_MAX_SGE:
+		*value = sli->sge_supported_length;
+		break;
+	case EFCT_HW_MAX_SGL:
+		*value = sli_get_max_sgl(&hw->sli);
+		break;
+	case EFCT_HW_TOPOLOGY:
+		/*
+		 * Infer link.status based on link.speed.
+		 * Report EFCT_HW_TOPOLOGY_NONE if the link is down.
+		 */
+		if (hw->link.speed == 0) {
+			*value = EFCT_HW_TOPOLOGY_NONE;
+			break;
+		}
+		switch (hw->link.topology) {
+		case SLI_LINK_TOPO_NPORT:
+			*value = EFCT_HW_TOPOLOGY_NPORT;
+			break;
+		case SLI_LINK_TOPO_LOOP:
+			*value = EFCT_HW_TOPOLOGY_LOOP;
+			break;
+		case SLI_LINK_TOPO_NONE:
+			*value = EFCT_HW_TOPOLOGY_NONE;
+			break;
+		default:
+			efc_log_test(hw->os,
+				      "unsupported topology %#x\n",
+				     hw->link.topology);
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		break;
+	case EFCT_HW_CONFIG_TOPOLOGY:
+		*value = hw->config.topology;
+		break;
+	case EFCT_HW_LINK_SPEED:
+		*value = hw->link.speed;
+		break;
+	case EFCT_HW_LINK_CONFIG_SPEED:
+		switch (hw->config.speed) {
+		case FC_LINK_SPEED_10G:
+			*value = 10000;
+			break;
+		case FC_LINK_SPEED_AUTO_16_8_4:
+			*value = 0;
+			break;
+		case FC_LINK_SPEED_2G:
+			*value = 2000;
+			break;
+		case FC_LINK_SPEED_4G:
+			*value = 4000;
+			break;
+		case FC_LINK_SPEED_8G:
+			*value = 8000;
+			break;
+		case FC_LINK_SPEED_16G:
+			*value = 16000;
+			break;
+		case FC_LINK_SPEED_32G:
+			*value = 32000;
+			break;
+		default:
+			efc_log_test(hw->os,
+				      "unsupported speed %#x\n",
+				     hw->config.speed);
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		break;
+	case EFCT_HW_IF_TYPE:
+		*value = sli->if_type;
+		break;
+	case EFCT_HW_SLI_REV:
+		*value = sli->sli_rev;
+		break;
+	case EFCT_HW_SLI_FAMILY:
+		*value = sli->sli_family;
+		break;
+	case EFCT_HW_DIF_CAPABLE:
+		*value = sli->features & SLI4_REQFEAT_DIF;
+		break;
+	case EFCT_HW_DIF_SEED:
+		*value = hw->config.dif_seed;
+		break;
+	case EFCT_HW_DIF_MODE:
+		*value = hw->config.dif_mode;
+		break;
+	case EFCT_HW_DIF_MULTI_SEPARATE:
+		/* Lancer supports multiple DIF separates */
+		if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2)
+			*value = true;
+		else
+			*value = false;
+		break;
+	case EFCT_HW_DUMP_MAX_SIZE:
+		*value = hw->dump_size;
+		break;
+	case EFCT_HW_DUMP_READY:
+		*value = sli_dump_is_ready(&hw->sli);
+		break;
+	case EFCT_HW_DUMP_PRESENT:
+		*value = sli_dump_is_present(&hw->sli);
+		break;
+	case EFCT_HW_RESET_REQUIRED:
+		tmp = sli_reset_required(&hw->sli);
+		if (tmp < 0)
+			rc = EFCT_HW_RTN_ERROR;
+		else
+			*value = tmp;
+		break;
+	case EFCT_HW_FW_ERROR:
+		*value = sli_fw_error_status(&hw->sli);
+		break;
+	case EFCT_HW_FW_READY:
+		*value = sli_fw_ready(&hw->sli);
+		break;
+	case EFCT_HW_HIGH_LOGIN_MODE:
+		*value = sli->features & SLI4_REQFEAT_HLM;
+		break;
+	case EFCT_HW_PREREGISTER_SGL:
+		*value = sli->sgl_pre_registration_required;
+		break;
+	case EFCT_HW_HW_REV1:
+		*value = sli->hw_rev[0];
+		break;
+	case EFCT_HW_HW_REV2:
+		*value = sli->hw_rev[1];
+		break;
+	case EFCT_HW_HW_REV3:
+		*value = sli->hw_rev[2];
+		break;
+	case EFCT_HW_LINK_MODULE_TYPE:
+		*value = sli->link_module_type;
+		break;
+	case EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT:
+		*value = hw->config.emulate_tgt_wqe_timeout;
+		break;
+	case EFCT_HW_VPD_LEN:
+		*value = sli->vpd_length;
+		break;
+	case EFCT_HW_SEND_FRAME_CAPABLE:
+		*value = 0;
+		break;
+	case EFCT_HW_RQ_SELECTION_POLICY:
+		*value = hw->config.rq_selection_policy;
+		break;
+	case EFCT_HW_RR_QUANTA:
+		*value = hw->config.rr_quanta;
+		break;
+	case EFCT_HW_MAX_VPORTS:
+		*value = sli->extent[SLI_RSRC_VPI].size;
+		break;
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+void *
+efct_hw_get_ptr(struct efct_hw_s *hw, enum efct_hw_property_e prop)
+{
+	void	*rc = NULL;
+	struct sli4_s *sli = &hw->sli;
+
+	switch (prop) {
+	case EFCT_HW_WWN_NODE:
+		rc = sli->wwnn;
+		break;
+	case EFCT_HW_WWN_PORT:
+		rc = sli->wwpn;
+		break;
+	case EFCT_HW_VPD:
+		/* make sure VPD length is non-zero */
+		if (sli->vpd_length)
+			rc = sli->vpd_data.virt;
+		break;
+	case EFCT_HW_FW_REV:
+		rc = sli->fw_name[0];
+		break;
+	case EFCT_HW_FW_REV2:
+		rc = sli->fw_name[1];
+		break;
+	case EFCT_HW_IPL:
+		rc = sli->ipl_name;
+		break;
+	case EFCT_HW_PORTNUM:
+		rc = sli->port_name;
+		break;
+	case EFCT_HW_BIOS_VERSION_STRING:
+		rc = sli->bios_version_string;
+		break;
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+	}
+
+	return rc;
+}
+
+/*
+ * @brief Return the WWN as a uint64_t.
+ *
+ * @par Description
+ * Calls the HW property function for the WWNN or WWPN, and returns the value
+ * as a uint64_t.
+ *
+ * @param hw Pointer to the HW object.
+ * @param prop HW property.
+ *
+ * @return Returns uint64_t request value.
+ */
+
+uint64_t
+efct_get_wwn(struct efct_hw_s *hw, enum efct_hw_property_e prop)
+{
+	u8 *p = efct_hw_get_ptr(hw, prop);
+	u64 value = 0;
+
+	if (p) {
+		u32 i;
+
+		for (i = 0; i < sizeof(value); i++)
+			value = (value << 8) | p[i];
+	}
+
+	return value;
+}
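
For reference, a minimal sketch of how a caller uses the property accessors
above, assuming an already set-up struct efct_hw_s (the local variable names
below are hypothetical):

	struct efct_hw_s *hw = &efct->hw;	/* caller's HW object */
	u32 max_io = 0;
	u64 wwpn;

	/* scalar properties are read through efct_hw_get() */
	if (efct_hw_get(hw, EFCT_HW_MAX_IO, &max_io) != EFCT_HW_RTN_SUCCESS)
		return -1;

	/* WWNs are byte arrays; efct_get_wwn() folds one into a u64 */
	wwpn = efct_get_wwn(hw, EFCT_HW_WWN_PORT);
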
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 60e377b2e7e5..9636e6dbe259 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1008,4 +1008,19 @@ struct efct_hw_grp_hdr {
 	u8 revision[32];
 };
 
+extern enum efct_hw_rtn_e
+efct_hw_setup(struct efct_hw_s *hw, void *os, struct pci_dev *pdev);
+enum efct_hw_rtn_e efct_hw_init(struct efct_hw_s *hw);
+extern enum efct_hw_rtn_e
+efct_hw_get(struct efct_hw_s *hw, enum efct_hw_property_e prop, u32 *value);
+extern void *
+efct_hw_get_ptr(struct efct_hw_s *hw, enum efct_hw_property_e prop);
+extern enum efct_hw_rtn_e
+efct_hw_set(struct efct_hw_s *hw, enum efct_hw_property_e prop, u32 value);
+extern enum efct_hw_rtn_e
+efct_hw_set_ptr(struct efct_hw_s *hw, enum efct_hw_property_e prop,
+		void *value);
+extern uint64_t
+efct_get_wwn(struct efct_hw_s *hw, enum efct_hw_property_e prop);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
new file mode 100644
index 000000000000..83782794225f
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -0,0 +1,665 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_unsol.h"
+
+/* Post node event callback argument. */
+struct efct_xport_post_node_event_s {
+	struct completion done;
+	atomic_t refcnt;
+	struct efc_node_s *node;
+	u32	evt;
+	void *context;
+};
+
+static struct dentry *efct_debugfs_root;
+static atomic_t efct_debugfs_count;
+
+static struct scsi_host_template efct_template = {
+	.module			= THIS_MODULE,
+	.name			= EFCT_DRIVER_NAME,
+	.supported_mode		= MODE_TARGET,
+};
+
+/* globals */
+static struct fc_function_template efct_xport_functions;
+static struct fc_function_template efct_vport_functions;
+
+static struct scsi_transport_template *efct_xport_fc_tt;
+static struct scsi_transport_template *efct_vport_fc_tt;
+
+static void efct_xport_link_stats_cb(int status, u32 num_counters,
+				     struct efct_hw_link_stat_counts_s *c,
+				     void *arg);
+static void efct_xport_host_stats_cb(int status, u32 num_counters,
+				     struct efct_hw_host_stat_counts_s *c,
+				     void *arg);
+static void efct_xport_async_link_stats_cb(int status,
+					   u32 num_counters,
+				struct efct_hw_link_stat_counts_s *counters,
+				void *arg);
+static void efct_xport_async_host_stats_cb(int status,
+					   u32 num_counters,
+				struct efct_hw_host_stat_counts_s *counters,
+				void *arg);
+static void efct_xport_stats_timer_cb(struct timer_list *t);
+static void efct_xport_config_stats_timer(struct efct_s *efct);
+
+/**
+ * @brief Allocate a transport object.
+ *
+ * @par Description
+ * A transport object is allocated, and associated with a device instance.
+ *
+ * @param efct Pointer to device instance.
+ *
+ * @return Returns the pointer to the allocated transport object,
+ * or NULL if failed.
+ */
+struct efct_xport_s *
+efct_xport_alloc(struct efct_s *efct)
+{
+	struct efct_xport_s *xport;
+
+	xport = kzalloc(sizeof(*xport), GFP_KERNEL);
+	if (!xport)
+		return NULL;
+
+	xport->efct = efct;
+	return xport;
+}
+
+static int
+efct_xport_init_debugfs(struct efct_s *efct)
+{
+	/* Setup efct debugfs root directory */
+	if (!efct_debugfs_root) {
+		efct_debugfs_root = debugfs_create_dir("efct", NULL);
+		atomic_set(&efct_debugfs_count, 0);
+		if (!efct_debugfs_root) {
+			efc_log_err(efct, "failed to create debugfs entry\n");
+			goto debugfs_fail;
+		}
+	}
+
+	/* Create a directory for sessions in root */
+	if (!efct->sess_debugfs_dir) {
+		efct->sess_debugfs_dir = debugfs_create_dir("sessions", NULL);
+		if (!efct->sess_debugfs_dir) {
+			efc_log_err(efct,
+				     "failed to create debugfs entry for sessions\n");
+			goto debugfs_fail;
+		}
+		atomic_inc(&efct_debugfs_count);
+	}
+
+	return 0;
+
+debugfs_fail:
+	return -1;
+}
+
+static void efct_xport_delete_debugfs(struct efct_s *efct)
+{
+	/* Remove session debugfs directory */
+	debugfs_remove(efct->sess_debugfs_dir);
+	efct->sess_debugfs_dir = NULL;
+	atomic_dec(&efct_debugfs_count);
+
+	if (atomic_read(&efct_debugfs_count) == 0) {
+		/* remove root debugfs directory */
+		debugfs_remove(efct_debugfs_root);
+		efct_debugfs_root = NULL;
+	}
+}
+
+/**
+ * @brief Do as much allocation as possible, but do not initialize
+ * the device.
+ *
+ * @par Description
+ * Performs the functions required to get a device ready to run.
+ *
+ * @param xport Pointer to transport object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+efct_xport_attach(struct efct_xport_s *xport)
+{
+	struct efct_s *efct = xport->efct;
+	int rc;
+	u32 max_sgl;
+	u32 n_sgl;
+	u32 value;
+
+	xport->fcfi.hold_frames = true;
+	spin_lock_init(&xport->fcfi.pend_frames_lock);
+	INIT_LIST_HEAD(&xport->fcfi.pend_frames);
+
+	rc = efct_hw_setup(&efct->hw, efct, efct->pcidev);
+	if (rc) {
+		efc_log_err(efct, "%s: Can't setup hardware\n", efct->desc);
+		return -1;
+	}
+
+	efct_hw_set(&efct->hw, EFCT_HW_RQ_SELECTION_POLICY,
+		    efct->rq_selection_policy);
+	efct_hw_get(&efct->hw, EFCT_HW_RQ_SELECTION_POLICY, &value);
+	efc_log_debug(efct, "RQ Selection Policy: %d\n", value);
+
+	efct_hw_set_ptr(&efct->hw, EFCT_HW_FILTER_DEF,
+			(void *)efct->filter_def);
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_SGL, &max_sgl);
+	max_sgl -= SLI4_SGE_MAX_RESERVED;
+	n_sgl = (max_sgl > EFCT_FC_MAX_SGL) ? EFCT_FC_MAX_SGL : max_sgl;
+
+	/* Note: number of SGLs must be set for efc_node_create_pool */
+	if (efct_hw_set(&efct->hw, EFCT_HW_N_SGL, n_sgl) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct,
+			     "%s: Can't set number of SGLs\n", efct->desc);
+		return -1;
+	}
+
+	efc_log_debug(efct, "%s: Configured for %d SGLs\n", efct->desc,
+		       n_sgl);
+
+	xport->io_pool = efct_io_pool_create(efct, EFCT_NUM_SCSI_IOS, n_sgl);
+	if (!xport->io_pool) {
+		efc_log_err(efct, "Can't allocate IO pool\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/**
+ * @brief Initializes the device.
+ *
+ * @par Description
+ * Performs the functions required to make a device functional.
+ *
+ * @param xport Pointer to transport object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+efct_xport_initialize(struct efct_xport_s *xport)
+{
+	struct efct_s *efct = xport->efct;
+	int rc;
+	u32 max_hw_io;
+	u32 max_sgl;
+	u32 rq_limit;
+
+	/* booleans used for cleanup if initialization fails */
+	bool ini_device_set = false;
+	bool tgt_device_set = false;
+	bool hw_initialized = false;
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_IO, &max_hw_io);
+	if (efct_hw_set(&efct->hw, EFCT_HW_N_IO, max_hw_io) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct, "%s: Can't set number of IOs\n",
+			     efct->desc);
+		return -1;
+	}
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_SGL, &max_sgl);
+	max_sgl -= SLI4_SGE_MAX_RESERVED;
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_IO, &max_hw_io);
+
+	if (efct_hw_set(&efct->hw, EFCT_HW_TOPOLOGY, efct->topology) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct, "%s: Can't set the topology\n", efct->desc);
+		return -1;
+	}
+	efct_hw_set(&efct->hw, EFCT_HW_RQ_DEFAULT_BUFFER_SIZE,
+		    EFCT_FC_RQ_SIZE_DEFAULT);
+
+	if (efct_hw_set(&efct->hw, EFCT_HW_LINK_SPEED, efct->speed) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct, "%s: Can't set the link speed\n",
+			     efct->desc);
+		return -1;
+	}
+
+	if (efct->target_io_timer_sec) {
+		efc_log_debug(efct, "setting target io timer=%d\n",
+			       efct->target_io_timer_sec);
+		efct_hw_set(&efct->hw, EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT,
+			    true);
+	}
+
+	/* Initialize vport list */
+	INIT_LIST_HEAD(&xport->vport_list);
+	spin_lock_init(&xport->io_pending_lock);
+	INIT_LIST_HEAD(&xport->io_pending_list);
+	atomic_set(&xport->io_active_count, 0);
+	atomic_set(&xport->io_pending_count, 0);
+	atomic_set(&xport->io_total_free, 0);
+	atomic_set(&xport->io_total_pending, 0);
+	atomic_set(&xport->io_alloc_failed_count, 0);
+	atomic_set(&xport->io_pending_recursing, 0);
+	rc = efct_hw_init(&efct->hw);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_init failure\n");
+		goto efct_xport_init_cleanup;
+	} else {
+		hw_initialized = true;
+	}
+
+	rq_limit = max_hw_io / 2;
+	if (efct_hw_set(&efct->hw, EFCT_HW_RQ_PROCESS_LIMIT, rq_limit) !=
+			EFCT_HW_RTN_SUCCESS)
+		efc_log_err(efct, "%s: Can't set the RQ process limit\n",
+			     efct->desc);
+
+	rc = efct_scsi_tgt_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize target\n");
+		goto efct_xport_init_cleanup;
+	} else {
+		tgt_device_set = true;
+	}
+
+	rc = efct_scsi_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize initiator\n");
+		goto efct_xport_init_cleanup;
+	} else {
+		ini_device_set = true;
+	}
+
+	/* Get FC link and host statistics periodically */
+	efct_xport_config_stats_timer(efct);
+
+	efct_xport_init_debugfs(efct);
+
+	return 0;
+
+efct_xport_init_cleanup:
+	if (tgt_device_set)
+		efct_scsi_tgt_del_device(efct);
+
+	if (hw_initialized) {
+		/* efct_hw_teardown can only execute after efct_hw_init */
+		efct_hw_teardown(&efct->hw);
+	}
+
+	return -1;
+}
+
+/**
+ * @brief Return status on a link.
+ *
+ * @par Description
+ * Returns status information about a link.
+ *
+ * @param xport Pointer to transport object.
+ * @param cmd Command to execute.
+ * @param result Pointer to result value.
+ *
+ * efct_xport_status(*xport, EFCT_XPORT_PORT_STATUS, *result)
+ *	returns 1 in *result if the link is up, 0 if it is down
+ * efct_xport_status(*xport, EFCT_XPORT_LINK_SPEED, *result)
+ *	returns link speed in Mb/s
+ * efct_xport_status(*xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED, *result)
+ *	[in] *result is the speed to check in Mb/s
+ *	returns 1 if supported, 0 if not
+ * efct_xport_status(*xport, EFCT_XPORT_LINK_STATISTICS, *result)
+ *	return link/host port stats
+ * efct_xport_status(*xport, EFCT_XPORT_LINK_STAT_RESET, *result)
+ *	resets link/host stats
+ *
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+int
+efct_xport_status(struct efct_xport_s *xport, enum efct_xport_status_e cmd,
+		  union efct_xport_stats_u *result)
+{
+	u32 rc = 0;
+	struct efct_s *efct = NULL;
+	union efct_xport_stats_u value;
+	enum efct_hw_rtn_e hw_rc;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_CONFIG_PORT_STATUS:
+		if (xport->configured_link_state == 0) {
+			/*
+			 * Initial state is offline. configured_link_state is
+			 * set to online explicitly when port is brought online
+			 */
+			xport->configured_link_state = EFCT_XPORT_PORT_OFFLINE;
+		}
+		result->value = xport->configured_link_state;
+		break;
+
+	case EFCT_XPORT_PORT_STATUS:
+		/* Determine port status based on link speed. */
+		hw_rc = efct_hw_get(&efct->hw, EFCT_HW_LINK_SPEED,
+				    &value.value);
+		if (hw_rc == EFCT_HW_RTN_SUCCESS) {
+			if (value.value == 0)
+				result->value = 0;
+			else
+				result->value = 1;
+			rc = 0;
+		} else {
+			rc = -1;
+		}
+		break;
+
+	case EFCT_XPORT_LINK_SPEED: {
+		u32 speed;
+
+		result->value = 0;
+
+		rc = efct_hw_get(&efct->hw, EFCT_HW_LINK_SPEED, &speed);
+		if (rc == 0)
+			result->value = speed;
+		break;
+	}
+
+	case EFCT_XPORT_IS_SUPPORTED_LINK_SPEED: {
+		u32 speed;
+		u32 link_module_type;
+
+		speed = result->value;
+
+		rc = efct_hw_get(&efct->hw, EFCT_HW_LINK_MODULE_TYPE,
+				 &link_module_type);
+		if (rc == 0) {
+			switch (speed) {
+			case 1000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_1GB) != 0;
+				break;
+			case 2000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_2GB) != 0;
+				break;
+			case 4000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_4GB) != 0;
+				break;
+			case 8000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_8GB) != 0;
+				break;
+			case 10000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_10GB) != 0;
+				break;
+			case 16000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_16GB) != 0;
+				break;
+			case 32000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_32GB) != 0;
+				break;
+			default:
+				rc = 0;
+				break;
+			}
+		} else {
+			rc = 0;
+		}
+		break;
+	}
+	case EFCT_XPORT_LINK_STATISTICS:
+		memcpy((void *)result, &efct->xport->fc_xport_stats,
+		       sizeof(union efct_xport_stats_u));
+		break;
+	case EFCT_XPORT_LINK_STAT_RESET: {
+		/* Create a completion to synchronize the stat reset process. */
+		init_completion(&result->stats.done);
+
+		/* First reset the link stats */
+		rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
+					    efct_xport_link_stats_cb, result);
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_test(efct, "wait for completion interrupted\n");
+			rc = -ENXIO;
+			break;
+		}
+
+		/* Next reset the host stats */
+		rc = efct_hw_get_host_stats(&efct->hw, 1,
+					    efct_xport_host_stats_cb, result);
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_test(efct, "wait for completion interrupted\n");
+			rc = -ENXIO;
+			break;
+		}
+		break;
+	}
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
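
A brief usage sketch (illustrative only; efct_scsi_new_device() below uses
the same pattern to build the supported-speeds mask):

	union efct_xport_stats_u result;
	u32 speed_mbps = 0;

	/* current link speed in Mb/s; 0 when the link is down */
	if (efct_xport_status(xport, EFCT_XPORT_LINK_SPEED, &result) == 0)
		speed_mbps = result.value;

	/* ask whether the installed link module supports 16G */
	result.value = 16000;
	if (efct_xport_status(xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
			      &result))
		efc_log_debug(xport->efct, "16G supported\n");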
+
+static void
+efct_xport_stats_timer_cb(struct timer_list *t)
+{
+	struct efct_xport_s *xport = from_timer(xport, t, stats_timer);
+	struct efct_s *efct = xport->efct;
+
+	efct_xport_config_stats_timer(efct);
+}
+
+/**
+ * @brief Get FC link and host statistics periodically
+ *
+ * @param efct Pointer to the efct device.
+ *
+ * @return NONE.
+ */
+static void
+efct_xport_config_stats_timer(struct efct_s *efct)
+{
+	u32 timeout = 3 * 1000;
+	struct efct_xport_s *xport = NULL;
+
+	if (!efct) {
+		pr_err("%s: failed to locate EFCT device\n", __func__);
+		return;
+	}
+
+	xport = efct->xport;
+	efct_hw_get_link_stats(&efct->hw, 0, 0, 0,
+			       efct_xport_async_link_stats_cb,
+			       &xport->fc_xport_stats);
+	efct_hw_get_host_stats(&efct->hw, 0, efct_xport_async_host_stats_cb,
+			       &xport->fc_xport_stats);
+
+	timer_setup(&xport->stats_timer,
+		    &efct_xport_stats_timer_cb, 0);
+	mod_timer(&xport->stats_timer,
+		  jiffies + msecs_to_jiffies(timeout));
+}
+
+int
+efct_scsi_new_device(struct efct_s *efct)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport_s *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return -1;
+	}
+
+	/* save shost to initiator-client context */
+	efct->shost = shost;
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport_s *)shost->hostdata;
+	vport->efct = efct;
+
+	/*
+	 * Set initial can_queue value to the max SCSI IOs. This is the maximum
+	 * global queue depth (as opposed to the per-LUN queue depth,
+	 * .cmd_per_lun). This may need to be adjusted for I+T mode.
+	 */
+	shost->can_queue = efct_scsi_get_property(efct, EFCT_SCSI_MAX_IOS);
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/*
+	 * can only accept (from mid-layer) as many SGEs as we've
+	 * pre-registered
+	 */
+	shost->sg_tablesize = efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGL);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_xport_fc_tt;
+	efc_log_debug(efct, "transport template=%p\n", efct_xport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, &efct->pcidev->dev,
+				       &efct->pcidev->dev);
+	if (error) {
+		efc_log_test(efct, "failed scsi_add_host_with_dma\n");
+		scsi_host_put(shost);
+		return -1;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		     "Emulex %s FV%s DV%s", efct->model,
+		     efct->fw_version, efct->driver_version);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+
+	fc_host_node_name(shost) = efct_get_wwn(&efct->hw, EFCT_HW_WWN_NODE);
+	fc_host_port_name(shost) = efct_get_wwn(&efct->hw, EFCT_HW_WWN_PORT);
+	fc_host_max_npiv_vports(shost) = 128;
+
+	return 0;
+}
+
+struct scsi_transport_template *
+efct_attach_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_xport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+struct scsi_transport_template *
+efct_attach_vport_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_vport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+int
+efct_scsi_reg_fc_transport(void)
+{
+	/* attach to the appropriate scsi_transport_* module */
+	efct_xport_fc_tt = efct_attach_fc_transport();
+	if (!efct_xport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		return -1;
+	}
+
+	efct_vport_fc_tt = efct_attach_vport_fc_transport();
+	if (!efct_vport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		efct_release_fc_transport(efct_xport_fc_tt);
+		efct_xport_fc_tt = NULL;
+		return -1;
+	}
+
+	return 0;
+}
+
+int
+efct_scsi_release_fc_transport(void)
+{
+	/* detach from scsi_transport_* */
+	efct_release_fc_transport(efct_xport_fc_tt);
+	efct_xport_fc_tt = NULL;
+	if (efct_vport_fc_tt)
+		efct_release_fc_transport(efct_vport_fc_tt);
+	efct_vport_fc_tt = NULL;
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_xport.h b/drivers/scsi/elx/efct/efct_xport.h
new file mode 100644
index 000000000000..ad6a6bfaf8fb
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_XPORT_H__)
+#define __EFCT_XPORT_H__
+
+/**
+ * @brief FCFI lookup/pending frames
+ */
+struct efct_xport_fcfi_s {
+	/* lock to protect pending frames access*/
+	spinlock_t	pend_frames_lock;
+	struct list_head	pend_frames;
+	/* hold pending frames */
+	bool hold_frames;
+	/* count of pending frames that were processed */
+	u32	pend_frames_processed;
+};
+
+enum efct_xport_ctrl_e {
+	EFCT_XPORT_PORT_ONLINE = 1,
+	EFCT_XPORT_PORT_OFFLINE,
+	EFCT_XPORT_SHUTDOWN,
+	EFCT_XPORT_POST_NODE_EVENT,
+	EFCT_XPORT_WWNN_SET,
+	EFCT_XPORT_WWPN_SET,
+};
+
+enum efct_xport_status_e {
+	EFCT_XPORT_PORT_STATUS,
+	EFCT_XPORT_CONFIG_PORT_STATUS,
+	EFCT_XPORT_LINK_SPEED,
+	EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+	EFCT_XPORT_LINK_STATISTICS,
+	EFCT_XPORT_LINK_STAT_RESET,
+	EFCT_XPORT_IS_QUIESCED
+};
+
+struct efct_xport_link_stats_s {
+	bool		rec;
+	bool		gec;
+	bool		w02of;
+	bool		w03of;
+	bool		w04of;
+	bool		w05of;
+	bool		w06of;
+	bool		w07of;
+	bool		w08of;
+	bool		w09of;
+	bool		w10of;
+	bool		w11of;
+	bool		w12of;
+	bool		w13of;
+	bool		w14of;
+	bool		w15of;
+	bool		w16of;
+	bool		w17of;
+	bool		w18of;
+	bool		w19of;
+	bool		w20of;
+	bool		w21of;
+	bool		clrc;
+	bool		clof1;
+	u32	link_failure_error_count;
+	u32	loss_of_sync_error_count;
+	u32	loss_of_signal_error_count;
+	u32	primitive_sequence_error_count;
+	u32	invalid_transmission_word_error_count;
+	u32	crc_error_count;
+	u32	primitive_sequence_event_timeout_count;
+	u32	elastic_buffer_overrun_error_count;
+	u32	arbitration_fc_al_timeout_count;
+	u32	advertised_receive_bufftor_to_buffer_credit;
+	u32	current_receive_buffer_to_buffer_credit;
+	u32	advertised_transmit_buffer_to_buffer_credit;
+	u32	current_transmit_buffer_to_buffer_credit;
+	u32	received_eofa_count;
+	u32	received_eofdti_count;
+	u32	received_eofni_count;
+	u32	received_soff_count;
+	u32	received_dropped_no_aer_count;
+	u32	received_dropped_no_available_rpi_resources_count;
+	u32	received_dropped_no_available_xri_resources_count;
+};
+
+struct efct_xport_host_stats_s {
+	bool		cc;
+	u32	transmit_kbyte_count;
+	u32	receive_kbyte_count;
+	u32	transmit_frame_count;
+	u32	receive_frame_count;
+	u32	transmit_sequence_count;
+	u32	receive_sequence_count;
+	u32	total_exchanges_originator;
+	u32	total_exchanges_responder;
+	u32	receive_p_bsy_count;
+	u32	receive_f_bsy_count;
+	u32	dropped_frames_due_to_no_rq_buffer_count;
+	u32	empty_rq_timeout_count;
+	u32	dropped_frames_due_to_no_xri_count;
+	u32	empty_xri_pool_count;
+};
+
+struct efct_xport_host_statistics_s {
+	struct completion done;
+	struct efct_xport_link_stats_s link_stats;
+	struct efct_xport_host_stats_s host_stats;
+};
+
+union efct_xport_stats_u {
+	u32 value;
+	struct efct_xport_host_statistics_s stats;
+};
+
+struct efct_xport_fcp_stats_s {
+	u64	input_bytes;
+	u64	output_bytes;
+	u64	input_requests;
+	u64	output_requests;
+	u64	control_requests;
+};
+
+/**
+ * @brief Transport private values
+ */
+struct efct_xport_s {
+	struct efct_s *efct;
+	/* wwpn requested by user for primary sport */
+	u64 req_wwpn;
+	/* wwnn requested by user for primary sport */
+	u64 req_wwnn;
+
+	struct efct_xport_fcfi_s fcfi;
+
+	/* Nodes */
+	/* number of allocated nodes */
+	u32 nodes_count;
+	/* array of pointers to nodes */
+	struct efc_node_s **nodes;
+	/* linked list of free nodes */
+	struct list_head nodes_free_list;
+
+	/* Io pool and counts */
+	/* pointer to IO pool */
+	struct efct_io_pool_s *io_pool;
+	/* used to track how often IO pool is empty */
+	atomic_t io_alloc_failed_count;
+	/* lock for io_pending_list */
+	spinlock_t io_pending_lock;
+	/* list of IOs waiting for HW resources
+	 *  lock: xport->io_pending_lock
+	 *  link: efct_io_s->io_pending_link
+	 */
+	struct list_head io_pending_list;
+	/* count of totals IOS allocated */
+	atomic_t io_total_alloc;
+	/* count of totals IOS free'd */
+	atomic_t io_total_free;
+	/* count of totals IOS that were pended */
+	atomic_t io_total_pending;
+	/* count of active IOS */
+	atomic_t io_active_count;
+	/* count of pending IOS */
+	atomic_t io_pending_count;
+	/* non-zero if efct_scsi_check_pending is executing */
+	atomic_t io_pending_recursing;
+
+	/* vport */
+	/* list of VPORTS (NPIV) */
+	struct list_head vport_list;
+
+	/* Port */
+	/* requested link state */
+	u32 configured_link_state;
+
+	/* Timer for Statistics */
+	struct timer_list     stats_timer;
+	union efct_xport_stats_u fc_xport_stats;
+	struct efct_xport_fcp_stats_s fcp_stats;
+};
+
+struct efct_rport_data_s {
+	struct efc_node_s *node;
+};
+
+extern struct efct_xport_s *
+efct_xport_alloc(struct efct_s *efct);
+extern int
+efct_xport_attach(struct efct_xport_s *xport);
+extern int
+efct_xport_initialize(struct efct_xport_s *xport);
+extern int
+efct_xport_detach(struct efct_xport_s *xport);
+extern int
+efct_xport_control(struct efct_xport_s *xport, enum efct_xport_ctrl_e cmd, ...);
+extern int
+efct_xport_status(struct efct_xport_s *xport, enum efct_xport_status_e cmd,
+		  union efct_xport_stats_u *result);
+extern void
+efct_xport_free(struct efct_xport_s *xport);
+
+int efct_lnx_xport_attach(void);
+struct scsi_transport_template *efct_attach_fc_transport(void);
+struct scsi_transport_template *efct_attach_vport_fc_transport(void);
+void efct_lnx_xport_detach(void);
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template);
+void efct_lnx_xport_remove_host(struct Scsi_Host *shost);
+int efct_lnx_xport_new_tgt(struct efc_node_s *node);
+int efct_lnx_xport_init_tgt(struct scsi_device *sdev);
+int efct_lnx_xport_del_tgt(struct efc_node_s *node,
+			   enum efct_scsi_del_target_reason_e reason);
+#endif /* __EFCT_XPORT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 17/32] elx: efct: Hardware queues creation and deletion
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (15 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 16/32] elx: efct: Driver initialization routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
                   ` (15 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for queue creation, deletion, and configuration, driven by
strings that describe the configuration topology, along with parsers
for those strings.
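
As an illustration, the topology specification parsed here is a plain string
of queue tokens, each with optional attributes. A hypothetical example of the
form the parser accepts (the actual default string is defined elsewhere in
the series):

  eq cq rq cq mq eq:len=1024 cq wq:class=1:ulp=0

where eq/cq/rq/mq/wq name queue types and ':len=', ':class=', ':ulp=' and
':filter=' set per-queue attributes.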

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw_queues.c | 1717 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw_queues.h |   66 ++
 2 files changed, 1783 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.h

diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
new file mode 100644
index 000000000000..5196aa75553c
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw_queues.c
@@ -0,0 +1,1717 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+#include "efct_hw_queues.h"
+#include "efct_unsol.h"
+
+static int
+efct_hw_rqpair_find(struct efct_hw_s *hw, u16 rq_id);
+static struct efc_hw_sequence_s *
+efct_hw_rqpair_get(struct efct_hw_s *hw, u16 rqindex, u16 bufindex);
+static int
+efct_hw_rqpair_put(struct efct_hw_s *hw, struct efc_hw_sequence_s *seq);
+/**
+ * @brief Initialize queues
+ *
+ * Given the parsed queue topology spec, the SLI queues are created and
+ * initialized
+ *
+ * @param hw pointer to HW object
+ * @param qtop pointer to queue topology
+ *
+ * @return returns 0 for success, an error code value for failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_init_queues(struct efct_hw_s *hw, struct efct_hw_qtop_s *qtop)
+{
+	u32 i, j, k;
+	u32 default_lengths[QTOP_LAST], len;
+	u32 rqset_len = 0, rqset_count = 0;
+	u8 rqset_filter_mask = 0;
+	struct hw_eq_s *eqs[EFCT_HW_MAX_MRQS];
+	struct hw_cq_s *cqs[EFCT_HW_MAX_MRQS];
+	struct hw_rq_s *rqs[EFCT_HW_MAX_MRQS];
+	struct efct_hw_qtop_entry_s *qt, *next_qt;
+	struct efct_hw_mrq_s mrq;
+	bool use_mrq = false;
+
+	struct hw_eq_s *eq = NULL;
+	struct hw_cq_s *cq = NULL;
+	struct hw_wq_s *wq = NULL;
+	struct hw_rq_s *rq = NULL;
+	struct hw_mq_s *mq = NULL;
+
+	mrq.num_pairs = 0;
+	default_lengths[QTOP_EQ] = 1024;
+	default_lengths[QTOP_CQ] = hw->num_qentries[SLI_QTYPE_CQ];
+	default_lengths[QTOP_WQ] = hw->num_qentries[SLI_QTYPE_WQ];
+	default_lengths[QTOP_RQ] = hw->num_qentries[SLI_QTYPE_RQ];
+	default_lengths[QTOP_MQ] = EFCT_HW_MQ_DEPTH;
+
+	hw->eq_count = 0;
+	hw->cq_count = 0;
+	hw->mq_count = 0;
+	hw->wq_count = 0;
+	hw->rq_count = 0;
+	hw->hw_rq_count = 0;
+	INIT_LIST_HEAD(&hw->eq_list);
+
+	/* If MRQ is requested, Check if it is supported by SLI. */
+	if (hw->config.n_rq > 1 &&
+	    !(hw->sli.features & SLI4_REQFEAT_MRQP)) {
+		efc_log_err(hw->os, "MRQ topology not supported by SLI4.\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->config.n_rq > 1)
+		use_mrq = true;
+
+	/* Allocate class WQ pools */
+	for (i = 0; i < ARRAY_SIZE(hw->wq_class_array); i++) {
+		hw->wq_class_array[i] = efct_varray_alloc(hw->os,
+							  EFCT_HW_MAX_NUM_WQ);
+		if (!hw->wq_class_array[i]) {
+			efc_log_err(hw->os,
+				     "efct_varray_alloc for wq_class failed\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	/* Allocate per CPU WQ pools */
+	for (i = 0; i < ARRAY_SIZE(hw->wq_cpu_array); i++) {
+		hw->wq_cpu_array[i] = efct_varray_alloc(hw->os,
+							EFCT_HW_MAX_NUM_WQ);
+		if (!hw->wq_cpu_array[i]) {
+			efc_log_err(hw->os,
+				     "efct_varray_alloc for wq_cpu failed\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	for (i = 0, qt = qtop->entries; i < qtop->inuse_count; i++, qt++) {
+		if (i == qtop->inuse_count - 1)
+			next_qt = NULL;
+		else
+			next_qt = qt + 1;
+
+		switch (qt->entry) {
+		case QTOP_EQ:
+			len = (qt->len) ? qt->len : default_lengths[QTOP_EQ];
+
+			if (qt->set_default) {
+				default_lengths[QTOP_EQ] = len;
+				break;
+			}
+
+			eq = efct_hw_new_eq(hw, len);
+			if (!eq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+			break;
+
+		case QTOP_CQ:
+			len = (qt->len) ? qt->len : default_lengths[QTOP_CQ];
+
+			if (qt->set_default) {
+				default_lengths[QTOP_CQ] = len;
+				break;
+			}
+
+			/* If this CQ is for MRQ, then delay the creation */
+			if (!use_mrq || !next_qt || next_qt->entry != QTOP_RQ) {
+				if (!eq)
+					return EFCT_HW_RTN_NO_MEMORY;
+
+				cq = efct_hw_new_cq(eq, len);
+				if (!cq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+			}
+			break;
+
+		case QTOP_WQ: {
+			len = (qt->len) ? qt->len : default_lengths[QTOP_WQ];
+			if (qt->set_default) {
+				default_lengths[QTOP_WQ] = len;
+				break;
+			}
+
+			if ((hw->ulp_start + qt->ulp) > hw->ulp_max) {
+				efc_log_err(hw->os,
+					     "invalid ULP %d WQ\n", qt->ulp);
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+
+			wq = efct_hw_new_wq(cq, len,
+					    qt->class, hw->ulp_start + qt->ulp);
+			if (!wq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+
+			/* Place this WQ on the EQ WQ array */
+			if (efct_varray_add(eq->wq_array, wq)) {
+				efc_log_err(hw->os,
+					     "QTOP_WQ:EQ efct_varray_add fail\n");
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			/* Place this WQ on the HW class array */
+			if (qt->class < ARRAY_SIZE(hw->wq_class_array)) {
+				if (efct_varray_add
+					(hw->wq_class_array[qt->class], wq)) {
+					efc_log_err(hw->os,
+						     "HW wq_class_array efct_varray_add failed\n");
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_ERROR;
+				}
+			} else {
+				efc_log_err(hw->os,
+					     "Invalid class value: %d\n",
+					    qt->class);
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			/*
+			 * Place this WQ on the per CPU list, assuming that EQs
+			 * are mapped to cpu given by the EQ instance modulo
+			 * number of CPUs
+			 */
+			if (efct_varray_add(hw->wq_cpu_array[eq->instance %
+					   num_online_cpus()], wq)) {
+				efc_log_err(hw->os,
+					     "HW wq_cpu_array efct_varray_add failed\n");
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			break;
+		}
+		case QTOP_RQ: {
+			len = (qt->len) ? qt->len : EFCT_HW_RQ_ENTRIES_DEF;
+
+			/*
+			 * Use the max supported queue length
+			 * if qtop rq len is not a valid value
+			 */
+			if (len > default_lengths[QTOP_RQ] ||
+			    (len % EFCT_HW_RQ_ENTRIES_MIN)) {
+				efc_log_info(hw->os,
+					      "QTOP RQ len %d is invalid. Using max supported RQ len %d\n",
+					len, default_lengths[QTOP_RQ]);
+				len = default_lengths[QTOP_RQ];
+			}
+
+			if (qt->set_default) {
+				default_lengths[QTOP_RQ] = len;
+				break;
+			}
+
+			if ((hw->ulp_start + qt->ulp) > hw->ulp_max) {
+				efc_log_err(hw->os,
+					     "invalid ULP %d RQ\n", qt->ulp);
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+
+			if (use_mrq) {
+				k = mrq.num_pairs;
+				mrq.rq_cfg[k].len = len;
+				mrq.rq_cfg[k].ulp = hw->ulp_start + qt->ulp;
+				mrq.rq_cfg[k].filter_mask = qt->filter_mask;
+				mrq.rq_cfg[k].eq = eq;
+				mrq.num_pairs++;
+			} else {
+				rq = efct_hw_new_rq(cq, len,
+						    hw->ulp_start + qt->ulp);
+				if (!rq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+				rq->filter_mask = qt->filter_mask;
+			}
+			break;
+		}
+
+		case QTOP_MQ:
+			len = (qt->len) ? qt->len : default_lengths[QTOP_MQ];
+			if (qt->set_default) {
+				default_lengths[QTOP_MQ] = len;
+				break;
+			}
+
+			if (!cq)
+				return EFCT_HW_RTN_NO_MEMORY;
+
+			mq = efct_hw_new_mq(cq, len);
+			if (!mq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+			break;
+
+		default:
+			efc_log_crit(hw->os, "Unknown Queue\n");
+			break;
+		}
+	}
+
+	if (mrq.num_pairs) {
+		/* First create normal RQs. */
+		for (i = 0; i < mrq.num_pairs; i++) {
+			for (j = 0; j < mrq.num_pairs; j++) {
+				if (i != j &&
+				    mrq.rq_cfg[i].filter_mask ==
+				     mrq.rq_cfg[j].filter_mask) {
+					/* This should be created using set */
+					if (rqset_filter_mask &&
+					    rqset_filter_mask !=
+					     mrq.rq_cfg[i].filter_mask) {
+						efc_log_crit(hw->os,
+							      "Can't create more than one RQ set\n");
+						efct_hw_queue_teardown(hw);
+						return EFCT_HW_RTN_ERROR;
+					} else if (!rqset_filter_mask) {
+						rqset_filter_mask =
+						      mrq.rq_cfg[i].filter_mask;
+						rqset_len = mrq.rq_cfg[i].len;
+					}
+					eqs[rqset_count] = mrq.rq_cfg[i].eq;
+					rqset_count++;
+					break;
+				}
+			}
+			if (j == mrq.num_pairs) {
+				/* Normal RQ */
+				cq = efct_hw_new_cq(mrq.rq_cfg[i].eq,
+						    default_lengths[QTOP_CQ]);
+				if (!cq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+
+				rq = efct_hw_new_rq(cq, mrq.rq_cfg[i].len,
+						    mrq.rq_cfg[i].ulp);
+				if (!rq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+				rq->filter_mask = mrq.rq_cfg[i].filter_mask;
+			}
+		}
+
+		/* Now create RQ Set */
+		if (rqset_count) {
+			/* Create CQ set */
+			if (efct_hw_new_cq_set(eqs, cqs, rqset_count,
+					       default_lengths[QTOP_CQ])) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			/* Create RQ set */
+			if (efct_hw_new_rq_set(cqs, rqs, rqset_count,
+					       rqset_len)) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			for (i = 0; i < rqset_count ; i++) {
+				rqs[i]->filter_mask = rqset_filter_mask;
+				rqs[i]->is_mrq = true;
+				rqs[i]->base_mrq_id = rqs[0]->hdr->id;
+			}
+
+			hw->hw_mrq_count = rqset_count;
+		}
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @brief Allocate a new EQ object
+ *
+ * A new EQ object is instantiated
+ *
+ * @param hw pointer to HW object
+ * @param entry_count number of entries in the EQ
+ *
+ * @return pointer to allocated EQ object
+ */
+struct hw_eq_s *
+efct_hw_new_eq(struct efct_hw_s *hw, u32 entry_count)
+{
+	struct hw_eq_s *eq = kmalloc(sizeof(*eq), GFP_KERNEL);
+
+	if (eq) {
+		memset(eq, 0, sizeof(*eq));
+		eq->type = SLI_QTYPE_EQ;
+		eq->hw = hw;
+		eq->entry_count = entry_count;
+		eq->instance = hw->eq_count++;
+		eq->queue = &hw->eq[eq->instance];
+		INIT_LIST_HEAD(&eq->cq_list);
+
+		eq->wq_array = efct_varray_alloc(hw->os, EFCT_HW_MAX_NUM_WQ);
+		if (!eq->wq_array) {
+			kfree(eq);
+			eq = NULL;
+		} else {
+			if (sli_queue_alloc(&hw->sli, SLI_QTYPE_EQ,
+					    eq->queue,
+					    entry_count, NULL)) {
+				efc_log_err(hw->os,
+					     "EQ[%d] allocation failure\n",
+					    eq->instance);
+				kfree(eq);
+				eq = NULL;
+			} else {
+				sli_eq_modify_delay(&hw->sli, eq->queue,
+						    1, 0, 8);
+				hw->hw_eq[eq->instance] = eq;
+				INIT_LIST_HEAD(&eq->list_entry);
+				list_add_tail(&eq->list_entry, &hw->eq_list);
+				efc_log_debug(hw->os,
+					       "create eq[%2d] id %3d len %4d\n",
+					      eq->instance, eq->queue->id,
+					      eq->entry_count);
+			}
+		}
+	}
+	return eq;
+}
+
+/**
+ * @brief Allocate a new CQ object
+ *
+ * A new CQ object is instantiated
+ *
+ * @param eq pointer to parent EQ object
+ * @param entry_count number of entries in the CQ
+ *
+ * @return pointer to allocated CQ object
+ */
+struct hw_cq_s *
+efct_hw_new_cq(struct hw_eq_s *eq, u32 entry_count)
+{
+	struct efct_hw_s *hw = eq->hw;
+	struct hw_cq_s *cq = kmalloc(sizeof(*cq), GFP_KERNEL);
+
+	if (cq) {
+		memset(cq, 0, sizeof(*cq));
+		cq->eq = eq;
+		cq->type = SLI_QTYPE_CQ;
+		cq->instance = eq->hw->cq_count++;
+		cq->entry_count = entry_count;
+		cq->queue = &hw->cq[cq->instance];
+
+		INIT_LIST_HEAD(&cq->q_list);
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_CQ, cq->queue,
+				    cq->entry_count, eq->queue)) {
+			efc_log_err(hw->os,
+				     "CQ[%d] allocation failure len=%d\n",
+				    cq->instance,
+				    cq->entry_count);
+			kfree(cq);
+			cq = NULL;
+		} else {
+			hw->hw_cq[cq->instance] = cq;
+			INIT_LIST_HEAD(&cq->list_entry);
+			list_add_tail(&cq->list_entry, &eq->cq_list);
+			efc_log_debug(hw->os,
+				       "create cq[%2d] id %3d len %4d\n",
+				      cq->instance, cq->queue->id,
+				      cq->entry_count);
+		}
+	}
+	return cq;
+}
+
+/**
+ * @brief Allocate a new CQ Set of objects.
+ *
+ * @param eqs pointer to a set of EQ objects.
+ * @param cqs pointer to a set of CQ objects to be returned.
+ * @param num_cqs number of CQ queues in the set.
+ * @param entry_count number of entries in the CQ.
+ *
+ * @return 0 on success and -1 on failure.
+ */
+u32
+efct_hw_new_cq_set(struct hw_eq_s *eqs[], struct hw_cq_s *cqs[],
+		   u32 num_cqs, u32 entry_count)
+{
+	u32 i;
+	struct efct_hw_s *hw = eqs[0]->hw;
+	struct sli4_s *sli4 = &hw->sli;
+	struct hw_cq_s *cq = NULL;
+	struct sli4_queue_s *qs[SLI_MAX_CQ_SET_COUNT];
+	struct sli4_queue_s *assocs[SLI_MAX_CQ_SET_COUNT];
+
+	/* Initialise CQS pointers to NULL */
+	for (i = 0; i < num_cqs; i++)
+		cqs[i] = NULL;
+
+	for (i = 0; i < num_cqs; i++) {
+		cq = kmalloc(sizeof(*cq), GFP_KERNEL);
+		if (!cq)
+			goto error;
+
+		memset(cq, 0, sizeof(*cq));
+		cqs[i]          = cq;
+		cq->eq          = eqs[i];
+		cq->type        = SLI_QTYPE_CQ;
+		cq->instance    = hw->cq_count++;
+		cq->entry_count = entry_count;
+		cq->queue       = &hw->cq[cq->instance];
+		qs[i]           = cq->queue;
+		assocs[i]        = eqs[i]->queue;
+		INIT_LIST_HEAD(&cq->q_list);
+	}
+
+	if (!sli_cq_alloc_set(sli4, qs, num_cqs, entry_count, assocs)) {
+		efc_log_err(hw->os, "Failed to create CQ Set.\n");
+		goto error;
+	}
+
+	for (i = 0; i < num_cqs; i++) {
+		hw->hw_cq[cqs[i]->instance] = cqs[i];
+		INIT_LIST_HEAD(&cqs[i]->list_entry);
+		list_add_tail(&cqs[i]->list_entry, &cqs[i]->eq->cq_list);
+	}
+
+	return 0;
+
+error:
+	for (i = 0; i < num_cqs; i++) {
+		kfree(cqs[i]);
+		cqs[i] = NULL;
+	}
+	return -1;
+}
+
+/**
+ * @brief Allocate a new MQ object
+ *
+ * A new MQ object is instantiated
+ *
+ * @param cq pointer to parent CQ object
+ * @param entry_count number of entries in the MQ
+ *
+ * @return pointer to allocated MQ object
+ */
+struct hw_mq_s *
+efct_hw_new_mq(struct hw_cq_s *cq, u32 entry_count)
+{
+	struct efct_hw_s *hw = cq->eq->hw;
+	struct hw_mq_s *mq = kmalloc(sizeof(*mq), GFP_KERNEL);
+
+	if (mq) {
+		memset(mq, 0, sizeof(*mq));
+		mq->cq = cq;
+		mq->type = SLI_QTYPE_MQ;
+		mq->instance = cq->eq->hw->mq_count++;
+		mq->entry_count = entry_count;
+		mq->entry_size = EFCT_HW_MQ_DEPTH;
+		mq->queue = &hw->mq[mq->instance];
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_MQ,
+				    mq->queue,
+				    mq->entry_size,
+				    cq->queue)) {
+			efc_log_err(hw->os, "MQ allocation failure\n");
+			kfree(mq);
+			mq = NULL;
+		} else {
+			hw->hw_mq[mq->instance] = mq;
+			INIT_LIST_HEAD(&mq->list_entry);
+			list_add_tail(&mq->list_entry, &cq->q_list);
+			efc_log_debug(hw->os,
+				       "create mq[%2d] id %3d len %4d\n",
+				      mq->instance, mq->queue->id,
+				      mq->entry_count);
+		}
+	}
+	return mq;
+}
+
+/**
+ * @brief Allocate a new WQ object
+ *
+ * A new WQ object is instantiated
+ *
+ * @param cq pointer to parent CQ object
+ * @param entry_count number of entries in the WQ
+ * @param class WQ class
+ * @param ulp index of chute
+ *
+ * @return pointer to allocated WQ object
+ */
+struct hw_wq_s *
+efct_hw_new_wq(struct hw_cq_s *cq, u32 entry_count,
+	       u32 class, u32 ulp)
+{
+	struct efct_hw_s *hw = cq->eq->hw;
+	struct hw_wq_s *wq = kmalloc(sizeof(*wq), GFP_KERNEL);
+
+	if (wq) {
+		memset(wq, 0, sizeof(*wq));
+		wq->hw = cq->eq->hw;
+		wq->cq = cq;
+		wq->type = SLI_QTYPE_WQ;
+		wq->instance = cq->eq->hw->wq_count++;
+		wq->entry_count = entry_count;
+		wq->queue = &hw->wq[wq->instance];
+		wq->ulp = ulp;
+		wq->wqec_set_count = EFCT_HW_WQEC_SET_COUNT;
+		wq->wqec_count = wq->wqec_set_count;
+		wq->free_count = wq->entry_count - 1;
+		wq->class = class;
+		INIT_LIST_HEAD(&wq->pending_list);
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_WQ, wq->queue,
+				    wq->entry_count, cq->queue)) {
+			efc_log_err(hw->os, "WQ allocation failure\n");
+			kfree(wq);
+			wq = NULL;
+		} else {
+			hw->hw_wq[wq->instance] = wq;
+			INIT_LIST_HEAD(&wq->list_entry);
+			list_add_tail(&wq->list_entry, &cq->q_list);
+			efc_log_debug(hw->os,
+				       "create wq[%2d] id %3d len %4d cls %d ulp %d\n",
+				wq->instance, wq->queue->id,
+				wq->entry_count, wq->class, wq->ulp);
+		}
+	}
+	return wq;
+}
+
+/**
+ * @brief Allocate a struct hw_rq_s object
+ *
+ * Allocate an RQ object, which encapsulates 2 SLI queues (for rq pair)
+ *
+ * @param cq pointer to parent CQ object
+ * @param entry_count number of entries in the RQs
+ * @param ulp ULP index for this RQ
+ *
+ * @return pointer to newly allocated struct hw_rq_s
+ */
+struct hw_rq_s *
+efct_hw_new_rq(struct hw_cq_s *cq, u32 entry_count, u32 ulp)
+{
+	struct efct_hw_s *hw = cq->eq->hw;
+	struct hw_rq_s *rq = kmalloc(sizeof(*rq), GFP_KERNEL);
+
+	if (rq) {
+		memset(rq, 0, sizeof(*rq));
+		rq->instance = hw->hw_rq_count++;
+		rq->cq = cq;
+		rq->type = SLI_QTYPE_RQ;
+		rq->entry_count = entry_count;
+
+		/* Create the header RQ */
+		rq->hdr = &hw->rq[hw->rq_count];
+		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
+
+		if (sli_fc_rq_alloc(&hw->sli, rq->hdr,
+				    rq->entry_count,
+				    rq->hdr_entry_size,
+				    cq->queue,
+				    true)) {
+			efc_log_err(hw->os,
+				     "RQ allocation failure - header\n");
+			kfree(rq);
+			return NULL;
+		}
+		/* Update hw_rq_lookup[] */
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		efc_log_debug(hw->os,
+			      "create rq[%2d] id %3d len %4d hdr  size %4d\n",
+			      rq->instance, rq->hdr->id, rq->entry_count,
+			      rq->hdr_entry_size);
+
+		/* Create the default data RQ */
+		rq->data = &hw->rq[hw->rq_count];
+		rq->data_entry_size = hw->config.rq_default_buffer_size;
+
+		if (sli_fc_rq_alloc(&hw->sli, rq->data,
+				    rq->entry_count,
+				    rq->data_entry_size,
+				    cq->queue,
+				    false)) {
+			efc_log_err(hw->os,
+				     "RQ allocation failure - first burst\n");
+			kfree(rq);
+			return NULL;
+		}
+		/* Update hw_rq_lookup[] */
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		efc_log_debug(hw->os,
+			       "create rq[%2d] id %3d len %4d data size %4d\n",
+			 rq->instance, rq->data->id, rq->entry_count,
+			 rq->data_entry_size);
+
+		hw->hw_rq[rq->instance] = rq;
+		INIT_LIST_HEAD(&rq->list_entry);
+		list_add_tail(&rq->list_entry, &cq->q_list);
+
+		rq->rq_tracker = kmalloc_array(rq->entry_count,
+					sizeof(struct efc_hw_sequence_s *),
+					GFP_ATOMIC);
+		if (!rq->rq_tracker)
+			return NULL;
+
+		memset(rq->rq_tracker, 0,
+		       rq->entry_count * sizeof(struct efc_hw_sequence_s *));
+	}
+	return rq;
+}
+
+/**
+ * @brief Allocate a struct hw_rq_s object SET
+ *
+ * Allocate an RQ object SET, where each element in set
+ * encapsulates 2 SLI queues (for rq pair)
+ *
+ * @param cqs CQ pointers to be associated with the RQs.
+ * @param rqs RQ pointers to be returned on success.
+ * @param num_rq_pairs number of RQ pairs in the set.
+ * @param entry_count number of entries in each RQ
+ *
+ * @return 0 on success and -1 on failure.
+ */
+u32
+efct_hw_new_rq_set(struct hw_cq_s *cqs[], struct hw_rq_s *rqs[],
+		   u32 num_rq_pairs, u32 entry_count)
+{
+	struct efct_hw_s *hw = cqs[0]->eq->hw;
+	struct hw_rq_s *rq = NULL;
+	struct sli4_queue_s *qs[SLI_MAX_RQ_SET_COUNT * 2] = { NULL };
+	u32 i, q_count, size;
+
+	/* Initialise RQS pointers */
+	for (i = 0; i < num_rq_pairs; i++)
+		rqs[i] = NULL;
+
+	for (i = 0, q_count = 0; i < num_rq_pairs; i++, q_count += 2) {
+		rq = kmalloc(sizeof(*rq), GFP_KERNEL);
+		if (!rq)
+			goto error;
+
+		memset(rq, 0, sizeof(*rq));
+		rqs[i] = rq;
+		rq->instance = hw->hw_rq_count++;
+		rq->cq = cqs[i];
+		rq->type = SLI_QTYPE_RQ;
+		rq->entry_count = entry_count;
+
+		/* Header RQ */
+		rq->hdr = &hw->rq[hw->rq_count];
+		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count] = rq->hdr;
+
+		/* Data RQ */
+		rq->data = &hw->rq[hw->rq_count];
+		rq->data_entry_size = hw->config.rq_default_buffer_size;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count + 1] = rq->data;
+
+		rq->rq_tracker = NULL;
+	}
+
+	if (!sli_fc_rq_set_alloc(&hw->sli, num_rq_pairs, qs,
+				cqs[0]->queue->id,
+			    rqs[0]->entry_count,
+			    rqs[0]->hdr_entry_size,
+			    rqs[0]->data_entry_size)) {
+		efc_log_err(hw->os,
+			     "RQ Set allocation failure for base CQ=%d\n",
+			    cqs[0]->queue->id);
+		goto error;
+	}
+
+	for (i = 0; i < num_rq_pairs; i++) {
+		hw->hw_rq[rqs[i]->instance] = rqs[i];
+		INIT_LIST_HEAD(&rqs[i]->list_entry);
+		list_add_tail(&rqs[i]->list_entry, &cqs[i]->q_list);
+		size = sizeof(struct efc_hw_sequence_s *) * rqs[i]->entry_count;
+		rqs[i]->rq_tracker = kzalloc(size, GFP_KERNEL);
+		if (!rqs[i]->rq_tracker)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	for (i = 0; i < num_rq_pairs; i++) {
+		if (rqs[i]) {
+			kfree(rqs[i]->rq_tracker);
+			kfree(rqs[i]);
+		}
+	}
+
+	return -1;
+}
+
+/**
+ * @brief Free an EQ object
+ *
+ * The EQ object and any child queue objects are freed
+ *
+ * @param eq pointer to EQ object
+ *
+ * @return none
+ */
+void
+efct_hw_del_eq(struct hw_eq_s *eq)
+{
+	if (eq) {
+		struct hw_cq_s *cq;
+		struct hw_cq_s *cq_next;
+
+		list_for_each_entry_safe(cq, cq_next, &eq->cq_list, list_entry)
+			efct_hw_del_cq(cq);
+		efct_varray_free(eq->wq_array);
+		list_del(&eq->list_entry);
+		eq->hw->hw_eq[eq->instance] = NULL;
+		kfree(eq);
+	}
+}
+
+/**
+ * @brief Free a CQ object
+ *
+ * The CQ object and any child queue objects are freed
+ *
+ * @param cq pointer to CQ object
+ *
+ * @return none
+ */
+void
+efct_hw_del_cq(struct hw_cq_s *cq)
+{
+	if (cq) {
+		struct hw_q_s *q;
+		struct hw_q_s *q_next;
+
+		list_for_each_entry_safe(q, q_next, &cq->q_list, list_entry) {
+			switch (q->type) {
+			case SLI_QTYPE_MQ:
+				efct_hw_del_mq((struct hw_mq_s *)q);
+				break;
+			case SLI_QTYPE_WQ:
+				efct_hw_del_wq((struct hw_wq_s *)q);
+				break;
+			case SLI_QTYPE_RQ:
+				efct_hw_del_rq((struct hw_rq_s *)q);
+				break;
+			default:
+				break;
+			}
+		}
+		list_del(&cq->list_entry);
+		cq->eq->hw->hw_cq[cq->instance] = NULL;
+		kfree(cq);
+	}
+}
+
+/**
+ * @brief Free a MQ object
+ *
+ * The MQ object is freed
+ *
+ * @param mq pointer to MQ object
+ *
+ * @return none
+ */
+void
+efct_hw_del_mq(struct hw_mq_s *mq)
+{
+	if (mq) {
+		list_del(&mq->list_entry);
+		mq->cq->eq->hw->hw_mq[mq->instance] = NULL;
+		kfree(mq);
+	}
+}
+
+/**
+ * @brief Free a WQ object
+ *
+ * The WQ object is freed
+ *
+ * @param wq pointer to WQ object
+ *
+ * @return none
+ */
+void
+efct_hw_del_wq(struct hw_wq_s *wq)
+{
+	if (wq) {
+		list_del(&wq->list_entry);
+		wq->cq->eq->hw->hw_wq[wq->instance] = NULL;
+		kfree(wq);
+	}
+}
+
+/**
+ * @brief Free an RQ object
+ *
+ * The RQ object is freed
+ *
+ * @param rq pointer to RQ object
+ *
+ * @return none
+ */
+void
+efct_hw_del_rq(struct hw_rq_s *rq)
+{
+	struct efct_hw_s *hw = NULL;
+
+	if (rq) {
+		/* Free RQ tracker */
+		kfree(rq->rq_tracker);
+		rq->rq_tracker = NULL;
+		list_del(&rq->list_entry);
+		hw = rq->cq->eq->hw;
+		hw->hw_rq[rq->instance] = NULL;
+		kfree(rq);
+	}
+}
+
+/**
+ * @brief Display HW queue objects
+ *
+ * The HW queue objects are displayed using efct_log
+ *
+ * @param hw pointer to HW object
+ *
+ * @return none
+ */
+void
+efct_hw_queue_dump(struct efct_hw_s *hw)
+{
+	struct hw_eq_s *eq;
+	struct hw_cq_s *cq;
+	struct hw_q_s *q;
+	struct hw_mq_s *mq;
+	struct hw_wq_s *wq;
+	struct hw_rq_s *rq;
+
+	list_for_each_entry(eq, &hw->eq_list, list_entry) {
+		efc_log_debug(hw->os, "eq[%d] id %2d\n",
+			       eq->instance, eq->queue->id);
+		list_for_each_entry(cq, &eq->cq_list, list_entry) {
+			efc_log_debug(hw->os, "cq[%d] id %2d current\n",
+				       cq->instance, cq->queue->id);
+			list_for_each_entry(q, &cq->q_list, list_entry) {
+				switch (q->type) {
+				case SLI_QTYPE_MQ:
+					mq = (struct hw_mq_s *)q;
+					efc_log_debug(hw->os,
+						       "    mq[%d] id %2d\n",
+					       mq->instance, mq->queue->id);
+					break;
+				case SLI_QTYPE_WQ:
+					wq = (struct hw_wq_s *)q;
+					efc_log_debug(hw->os,
+						       "    wq[%d] id %2d\n",
+						wq->instance, wq->queue->id);
+					break;
+				case SLI_QTYPE_RQ:
+					rq = (struct hw_rq_s *)q;
+					efc_log_debug(hw->os,
+						       "    rq[%d] hdr id %2d\n",
+					       rq->instance, rq->hdr->id);
+					break;
+				default:
+					break;
+				}
+			}
+		}
+	}
+}
+
+/**
+ * @brief Teardown HW queue objects
+ *
+ * The HW queue objects are freed
+ *
+ * @param hw pointer to HW object
+ *
+ * @return none
+ */
+void
+efct_hw_queue_teardown(struct efct_hw_s *hw)
+{
+	u32 i;
+	struct hw_eq_s *eq;
+	struct hw_eq_s *eq_next;
+
+	if (hw->eq_list.next) {
+		list_for_each_entry_safe(eq, eq_next, &hw->eq_list,
+					 list_entry) {
+			efct_hw_del_eq(eq);
+		}
+	}
+	for (i = 0; i < ARRAY_SIZE(hw->wq_cpu_array); i++) {
+		efct_varray_free(hw->wq_cpu_array[i]);
+		hw->wq_cpu_array[i] = NULL;
+	}
+	for (i = 0; i < ARRAY_SIZE(hw->wq_class_array); i++) {
+		efct_varray_free(hw->wq_class_array[i]);
+		hw->wq_class_array[i] = NULL;
+	}
+}
+
+/**
+ * @brief Allocate a WQ to an IO object
+ *
+ * The next work queue index is used to assign a WQ to an IO.
+ *
+ * If wq_steering is EFCT_HW_WQ_STEERING_CLASS, a WQ from io->wq_class is
+ * selected.
+ *
+ * If wq_steering is EFCT_HW_WQ_STEERING_REQUEST, then a WQ from the EQ that
+ * the IO request came in on is selected.
+ *
+ * If wq_steering is EFCT_HW_WQ_STEERING_CPU, then a WQ associated with the
+ * CPU the request is made on is selected.
+ *
+ * @param hw pointer to HW object
+ * @param io pointer to IO object
+ *
+ * @return Return pointer to next WQ
+ */
+struct hw_wq_s *
+efct_hw_queue_next_wq(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	struct hw_eq_s *eq;
+	struct hw_wq_s *wq = NULL;
+	u32 cpuidx;
+
+	switch (io->wq_steering) {
+	case EFCT_HW_WQ_STEERING_CLASS:
+		if (unlikely(io->wq_class >= ARRAY_SIZE(hw->wq_class_array)))
+			break;
+
+		wq = efct_varray_iter_next(hw->wq_class_array[io->wq_class]);
+		break;
+	case EFCT_HW_WQ_STEERING_REQUEST:
+		eq = io->eq;
+		if (likely(eq))
+			wq = efct_varray_iter_next(eq->wq_array);
+		break;
+	case EFCT_HW_WQ_STEERING_CPU:
+		cpuidx = in_interrupt() ?
+			raw_smp_processor_id() : task_cpu(current);
+
+		if (likely(cpuidx < ARRAY_SIZE(hw->wq_cpu_array)))
+			wq = efct_varray_iter_next(hw->wq_cpu_array[cpuidx]);
+		break;
+	}
+
+	if (unlikely(!wq))
+		wq = hw->hw_wq[0];
+
+	return wq;
+}
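+
+/*
+ * Illustrative example (not part of this patch): with steering mode
+ * EFCT_HW_WQ_STEERING_CPU and a request arriving on CPU 2, the
+ * round-robin iterator of hw->wq_cpu_array[2] supplies the WQ; if no
+ * WQ can be selected, hw->hw_wq[0] is used as the fallback.
+ */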
+
+/**
+ * @brief Return count of EQs for a queue topology object
+ *
+ * The EQ count in the HW's queue topology (hw->qtop) object is returned
+ *
+ * @param hw pointer to HW object
+ *
+ * @return count of EQs
+ */
+u32
+efct_hw_qtop_eq_count(struct efct_hw_s *hw)
+{
+	return hw->qtop->entry_counts[QTOP_EQ];
+}
+
+#define TOKEN_LEN		32
+
+/**
+ * @brief Declare token types
+ */
+enum tok_type_e {
+	TOK_LPAREN = 1,
+	TOK_RPAREN,
+	TOK_COLON,
+	TOK_EQUALS,
+	TOK_QUEUE,
+	TOK_ATTR_NAME,
+	TOK_NUMBER,
+	TOK_NUMBER_VALUE,
+	TOK_NUMBER_LIST,
+};
+
+/**
+ * @brief Declare token sub-types
+ */
+enum tok_subtype_e {
+	TOK_SUB_EQ = 100,
+	TOK_SUB_CQ,
+	TOK_SUB_RQ,
+	TOK_SUB_MQ,
+	TOK_SUB_WQ,
+	TOK_SUB_LEN,
+	TOK_SUB_CLASS,
+	TOK_SUB_ULP,
+	TOK_SUB_FILTER,
+};
+
+/**
+ * @brief convert queue subtype to QTOP entry
+ *
+ * @param q queue subtype
+ *
+ * @return QTOP entry or 0
+ */
+static enum efct_hw_qtop_entry_e
+subtype2qtop(enum tok_subtype_e q)
+{
+	switch (q) {
+	case TOK_SUB_EQ:	return QTOP_EQ;
+	case TOK_SUB_CQ:	return QTOP_CQ;
+	case TOK_SUB_RQ:	return QTOP_RQ;
+	case TOK_SUB_MQ:	return QTOP_MQ;
+	case TOK_SUB_WQ:	return QTOP_WQ;
+	default:
+		break;
+	}
+	return 0;
+}
+
+/**
+ * @brief Declare token object
+ */
+struct tok_s {
+	enum tok_type_e type;
+	enum tok_subtype_e subtype;
+	char string[TOKEN_LEN];
+};
+
+/**
+ * @brief Declare token array object
+ */
+struct tokarray_s {
+	struct tok_s *tokens;		/* Pointer to array of tokens */
+	u32 alloc_count;		/* Number of tokens in the array */
+	u32 inuse_count;		/* Number of tokens posted to array */
+	u32 iter_idx;		/* Iterator index */
+};
+
+/**
+ * @brief Declare token match structure
+ */
+struct tokmatch_s {
+	char *s;
+	enum tok_type_e type;
+	enum tok_subtype_e subtype;
+};
+
+/**
+ * @brief test if character is ID start character
+ *
+ * @param c character to test
+ *
+ * @return TRUE if character is an ID start character
+ */
+static int
+idstart(int c)
+{
+	return	isalpha(c) || (c == '_') || (c == '$');
+}
+
+/**
+ * @brief test if character is an ID character
+ *
+ * @param c character to test
+ *
+ * @return TRUE if character is an ID character
+ */
+static int
+idchar(int c)
+{
+	return idstart(c) || isdigit(c);
+}
+
+/**
+ * @brief Declare single character matches
+ */
+static struct tokmatch_s cmatches[] = {
+	{"(", TOK_LPAREN},
+	{")", TOK_RPAREN},
+	{":", TOK_COLON},
+	{"=", TOK_EQUALS},
+};
+
+/**
+ * @brief Declare identifier match strings
+ */
+static struct tokmatch_s smatches[] = {
+	{"eq", TOK_QUEUE, TOK_SUB_EQ},
+	{"cq", TOK_QUEUE, TOK_SUB_CQ},
+	{"rq", TOK_QUEUE, TOK_SUB_RQ},
+	{"mq", TOK_QUEUE, TOK_SUB_MQ},
+	{"wq", TOK_QUEUE, TOK_SUB_WQ},
+	{"len", TOK_ATTR_NAME, TOK_SUB_LEN},
+	{"class", TOK_ATTR_NAME, TOK_SUB_CLASS},
+	{"ulp", TOK_ATTR_NAME, TOK_SUB_ULP},
+	{"filter", TOK_ATTR_NAME, TOK_SUB_FILTER},
+};
+
+/**
+ * @brief Scan string and return next token
+ *
+ * The string is scanned and the next token is returned
+ *
+ * @param s input string to scan
+ * @param tok pointer to place scanned token
+ *
+ * @return pointer to input string following scanned token, or NULL
+ */
+static const char *
+tokenize(const char *s, struct tok_s *tok)
+{
+	u32 i;
+
+	memset(tok, 0, sizeof(*tok));
+
+	/* Skip over whitespace */
+	while (*s && isspace(*s))
+		s++;
+
+	/* Return if nothing left in this string */
+	if (*s == 0)
+		return NULL;
+
+	/* Look for single character matches */
+	for (i = 0; i < ARRAY_SIZE(cmatches); i++) {
+		if (cmatches[i].s[0] == *s) {
+			tok->type = cmatches[i].type;
+			tok->subtype = cmatches[i].subtype;
+			tok->string[0] = *s++;
+			return s;
+		}
+	}
+
+	/* Scan for a hex number or decimal */
+	if ((s[0] == '0') && ((s[1] == 'x') || (s[1] == 'X'))) {
+		char *p = tok->string;
+
+		tok->type = TOK_NUMBER;
+
+		*p++ = *s++;
+		*p++ = *s++;
+		while ((*s == '.') || isxdigit(*s)) {
+			if ((p - tok->string) < (int)sizeof(tok->string))
+				*p++ = *s;
+			if (*s == ',')
+				tok->type = TOK_NUMBER_LIST;
+			s++;
+		}
+		*p = 0;
+		return s;
+	} else if (isdigit(*s)) {
+		char *p = tok->string;
+
+		tok->type = TOK_NUMBER;
+		while ((*s == ',') || isdigit(*s)) {
+			if ((p - tok->string) < (int)sizeof(tok->string))
+				*p++ = *s;
+			if (*s == ',')
+				tok->type = TOK_NUMBER_LIST;
+			s++;
+		}
+		*p = 0;
+		return s;
+	}
+
+	/* Scan for an ID */
+	if (idstart(*s)) {
+		char *p = tok->string;
+
+		for (*p++ = *s++; idchar(*s); s++) {
+			if ((p - tok->string) < TOKEN_LEN)
+				*p++ = *s;
+		}
+
+		/* See if this is a $ number value */
+		if (tok->string[0] == '$') {
+			tok->type = TOK_NUMBER_VALUE;
+		} else {
+			/* Look for a string match */
+			for (i = 0; i < ARRAY_SIZE(smatches); i++) {
+				if (strcmp(smatches[i].s, tok->string) == 0) {
+					tok->type = smatches[i].type;
+					tok->subtype = smatches[i].subtype;
+					return s;
+				}
+			}
+		}
+	}
+	return s;
+}
+
+/**
+ * @brief convert token type to string
+ *
+ * @param type token type
+ *
+ * @return string, or "unknown"
+ */
+static const char *
+token_type2s(enum tok_type_e type)
+{
+	switch (type) {
+	case TOK_LPAREN:
+		return "TOK_LPAREN";
+	case TOK_RPAREN:
+		return "TOK_RPAREN";
+	case TOK_COLON:
+		return "TOK_COLON";
+	case TOK_EQUALS:
+		return "TOK_EQUALS";
+	case TOK_QUEUE:
+		return "TOK_QUEUE";
+	case TOK_ATTR_NAME:
+		return "TOK_ATTR_NAME";
+	case TOK_NUMBER:
+		return "TOK_NUMBER";
+	case TOK_NUMBER_VALUE:
+		return "TOK_NUMBER_VALUE";
+	case TOK_NUMBER_LIST:
+		return "TOK_NUMBER_LIST";
+	}
+	return "unknown";
+}
+
+/**
+ * @brief convert token sub-type to string
+ *
+ * @param subtype token sub-type
+ *
+ * @return string, or an empty string if not recognized
+ */
+static const char *
+token_subtype2s(enum tok_subtype_e subtype)
+{
+	switch (subtype) {
+	case TOK_SUB_EQ:
+		return "TOK_SUB_EQ";
+	case TOK_SUB_CQ:
+		return "TOK_SUB_CQ";
+	case TOK_SUB_RQ:
+		return "TOK_SUB_RQ";
+	case TOK_SUB_MQ:
+		return "TOK_SUB_MQ";
+	case TOK_SUB_WQ:
+		return "TOK_SUB_WQ";
+	case TOK_SUB_LEN:
+		return "TOK_SUB_LEN";
+	case TOK_SUB_CLASS:
+		return "TOK_SUB_CLASS";
+	case TOK_SUB_ULP:
+		return "TOK_SUB_ULP";
+	case TOK_SUB_FILTER:
+		return "TOK_SUB_FILTER";
+	}
+	return "";
+}
+
+/**
+ * @brief Generate syntax error message
+ *
+ * When a syntax error is found, the input tokens are dumped up to and
+ * including the token that failed, as indicated by the current iterator index.
+ *
+ * @param hw pointer to HW object
+ * @param tokarray pointer to token array object
+ *
+ * @return none
+ */
+static void
+tok_syntax(struct efct_hw_s *hw, struct tokarray_s *tokarray)
+{
+	u32 i;
+	struct tok_s *tok;
+
+	efc_log_test(hw->os, "Syntax error:\n");
+
+	for (i = 0, tok = tokarray->tokens; (i <= tokarray->inuse_count);
+	     i++, tok++) {
+		efc_log_test(hw->os, "%s [%2d]    %-16s %-16s %s\n",
+			      (i == tokarray->iter_idx) ? ">>>" : "   ", i,
+			     token_type2s(tok->type),
+			     token_subtype2s(tok->subtype), tok->string);
+	}
+}
+
+/**
+ * @brief parse a number
+ *
+ * Parses tokens of type TOK_NUMBER and TOK_NUMBER_VALUE, returning a numeric
+ * value
+ *
+ * @param hw pointer to HW object
+ * @param qtop pointer to QTOP object
+ * @param tok pointer to token to parse
+ *
+ * @return numeric value
+ */
+static u32
+tok_getnumber(struct efct_hw_s *hw, struct efct_hw_qtop_s *qtop,
+	      struct tok_s *tok)
+{
+	u32 rval = 0;
+	u32 num_cpus = num_online_cpus();
+
+	switch (tok->type) {
+	case TOK_NUMBER_VALUE:
+		if (strcmp(tok->string, "$ncpu") == 0)
+			rval = num_cpus;
+		else if (strcmp(tok->string, "$ncpu1") == 0)
+			rval = num_cpus - 1;
+		else if (strcmp(tok->string, "$nwq") == 0)
+			rval = (hw) ? hw->config.n_wq : 0;
+		else if (strcmp(tok->string, "$maxmrq") == 0)
+			rval = (num_cpus < EFCT_HW_MAX_MRQS)
+				? num_cpus : EFCT_HW_MAX_MRQS;
+		else if (strcmp(tok->string, "$nulp") == 0)
+			rval = hw->ulp_max - hw->ulp_start + 1;
+		else if ((qtop->rptcount_idx > 0) &&
+			 strcmp(tok->string, "$rpt0") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 1];
+		else if ((qtop->rptcount_idx > 1) &&
+			 strcmp(tok->string, "$rpt1") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 2];
+		else if ((qtop->rptcount_idx > 2) &&
+			 strcmp(tok->string, "$rpt2") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 3];
+		else if ((qtop->rptcount_idx > 3) &&
+			 strcmp(tok->string, "$rpt3") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 4];
+		else if (kstrtou32(tok->string, 0, &rval))
+			efc_log_debug(hw->os, "kstrtou32 failed\n");
+
+		break;
+	case TOK_NUMBER:
+		if (kstrtou32(tok->string, 0, &rval))
+			efc_log_debug(hw->os, "kstrtou32 failed\n");
+		break;
+	default:
+		break;
+	}
+	return rval;
+}
+
+/**
+ * @brief parse the filter attribute of a queue token
+ *
+ * The filter attribute tokens are parsed to set the filter mask of a
+ * QTOP entry.
+ *
+ * @param hw pointer to HW object
+ * @param qt pointer to the QTOP entry being populated
+ * @param tok pointer to the attribute tokens
+ * @param qtop output QTOP object
+ *
+ * @return none
+ */
+static void
+parse_sub_filter(struct efct_hw_s *hw, struct efct_hw_qtop_entry_s *qt,
+		 struct tok_s *tok, struct efct_hw_qtop_s *qtop)
+{
+	u32 mask = 0;
+	char *p;
+	u32 v;
+
+	if (tok[3].type == TOK_NUMBER_LIST) {
+		mask = 0;
+		p = tok[3].string;
+
+		while ((p) && *p) {
+			if (kstrtou32(p, 0, &v))
+				efc_log_debug(hw->os, "kstrtou32 failed\n");
+			if (v < 32)
+				mask |= (1U << v);
+
+			p = strchr(p, ',');
+			if (p)
+				p++;
+		}
+		qt->filter_mask = mask;
+	} else {
+		qt->filter_mask = (1U << tok_getnumber(hw, qtop, &tok[3]));
+	}
+}
+
+/**
+ * @brief parse an array of tokens
+ *
+ * The tokens are semantically parsed, to generate QTOP entries.
+ *
+ * @param hw pointer to HW object
+ * @param tokarray array of tokens
+ * @param qtop output QTOP object
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+static int
+parse_topology(struct efct_hw_s *hw, struct tokarray_s *tokarray,
+	       struct efct_hw_qtop_s *qtop)
+{
+	struct efct_hw_qtop_entry_s *qt = qtop->entries + qtop->inuse_count;
+	struct tok_s *tok;
+	u32 num = 0;
+
+	for (; (tokarray->iter_idx < tokarray->inuse_count) &&
+	     ((tok = &tokarray->tokens[tokarray->iter_idx]) != NULL);) {
+		if (qtop->inuse_count >= qtop->alloc_count)
+			return -1;
+
+		qt = qtop->entries + qtop->inuse_count;
+
+		switch (tok[0].type) {
+		case TOK_QUEUE:
+			qt->entry = subtype2qtop(tok[0].subtype);
+			qt->set_default = false;
+			qt->len = 0;
+			qt->class = 0;
+			qtop->inuse_count++;
+
+			/* Advance current token index */
+			tokarray->iter_idx++;
+
+			/*
+			 * Parse for queue attributes, possibly multiple
+			 * instances
+			 */
+			while ((tokarray->iter_idx + 4) <=
+				tokarray->inuse_count) {
+				tok = &tokarray->tokens[tokarray->iter_idx];
+				if (tok[0].type == TOK_COLON &&
+				    tok[1].type == TOK_ATTR_NAME &&
+					tok[2].type == TOK_EQUALS &&
+					(tok[3].type == TOK_NUMBER ||
+					 tok[3].type == TOK_NUMBER_VALUE ||
+					 tok[3].type == TOK_NUMBER_LIST)) {
+					num = tok_getnumber(hw, qtop, &tok[3]);
+
+					switch (tok[1].subtype) {
+					case TOK_SUB_LEN:
+						qt->len = num;
+						break;
+					case TOK_SUB_CLASS:
+						qt->class = num;
+						break;
+					case TOK_SUB_ULP:
+						qt->ulp = num;
+						break;
+					case TOK_SUB_FILTER:
+						parse_sub_filter(hw, qt, tok,
+								 qtop);
+						break;
+					default:
+						break;
+					}
+					/* Advance current token index */
+					tokarray->iter_idx += 4;
+				} else {
+					break;
+				}
+				num = 0;
+			}
+			qtop->entry_counts[qt->entry]++;
+			break;
+
+		case TOK_ATTR_NAME:
+			if (((tokarray->iter_idx + 5) <=
+			      tokarray->inuse_count) &&
+			      tok[1].type == TOK_COLON &&
+			      tok[2].type == TOK_QUEUE &&
+			      tok[3].type == TOK_EQUALS &&
+			      (tok[4].type == TOK_NUMBER ||
+			      tok[4].type == TOK_NUMBER_VALUE)) {
+				qt->entry = subtype2qtop(tok[2].subtype);
+				qt->set_default = true;
+				switch (tok[0].subtype) {
+				case TOK_SUB_LEN:
+					qt->len = tok_getnumber(hw, qtop,
+								&tok[4]);
+					break;
+				case TOK_SUB_CLASS:
+					qt->class = tok_getnumber(hw, qtop,
+								  &tok[4]);
+					break;
+				case TOK_SUB_ULP:
+					qt->ulp = tok_getnumber(hw, qtop,
+								&tok[4]);
+					break;
+				default:
+					break;
+				}
+				qtop->inuse_count++;
+				tokarray->iter_idx += 5;
+			} else {
+				tok_syntax(hw, tokarray);
+				return -1;
+			}
+			break;
+
+		case TOK_NUMBER:
+		case TOK_NUMBER_VALUE: {
+			u32 rpt_count = 1;
+			u32 i;
+			u32 rpt_idx;
+
+			rpt_count = tok_getnumber(hw, qtop, tok);
+
+			if (tok[1].type == TOK_LPAREN) {
+				u32 iter_idx_save;
+
+				tokarray->iter_idx += 2;
+
+				/* save token array iteration index */
+				iter_idx_save = tokarray->iter_idx;
+
+				for (i = 0; i < rpt_count; i++) {
+					rpt_idx = qtop->rptcount_idx;
+
+					if (qtop->rptcount_idx <
+					    ARRAY_SIZE(qtop->rptcount)) {
+						qtop->rptcount[rpt_idx + 1] = i;
+					}
+
+					/* restore token array iteration idx */
+					tokarray->iter_idx = iter_idx_save;
+
+					/* parse, append to qtop */
+					parse_topology(hw, tokarray, qtop);
+
+					qtop->rptcount_idx = rpt_idx;
+				}
+			}
+			break;
+		}
+
+		case TOK_RPAREN:
+			tokarray->iter_idx++;
+			return 0;
+
+		default:
+			tok_syntax(hw, tokarray);
+			return -1;
+		}
+	}
+	return 0;
+}
+
+/**
+ * @brief Parse queue topology string
+ *
+ * The queue topology object is allocated, and filled with the results of
+ * parsing the passed in queue topology string
+ *
+ * @param hw pointer to HW object
+ * @param qtop_string input queue topology string
+ *
+ * @return pointer to allocated QTOP object, or NULL if there was an error
+ */
+struct efct_hw_qtop_s *
+efct_hw_qtop_parse(struct efct_hw_s *hw, const char *qtop_string)
+{
+	struct efct_hw_qtop_s *qtop;
+	struct tokarray_s tokarray;
+	const char *s;
+
+	efc_log_debug(hw->os, "queue topology: %s\n", qtop_string);
+
+	/* Allocate a token array */
+	tokarray.tokens = kmalloc_array(MAX_TOKENS, sizeof(*tokarray.tokens),
+					GFP_KERNEL);
+	if (!tokarray.tokens)
+		return NULL;
+	memset(tokarray.tokens, 0, MAX_TOKENS * sizeof(*tokarray.tokens));
+	tokarray.alloc_count = MAX_TOKENS;
+	tokarray.inuse_count = 0;
+	tokarray.iter_idx = 0;
+
+	/* Parse the tokens */
+	for (s = qtop_string; (tokarray.inuse_count < tokarray.alloc_count) &&
+	     ((s = tokenize(s, &tokarray.tokens[tokarray.inuse_count]))) !=
+	       NULL;)
+		tokarray.inuse_count++;
+
+	/* Allocate a queue topology structure */
+	qtop = kmalloc(sizeof(*qtop), GFP_KERNEL);
+	if (!qtop) {
+		kfree(tokarray.tokens);
+		efc_log_err(hw->os, "malloc qtop failed\n");
+		return NULL;
+	}
+	memset(qtop, 0, sizeof(*qtop));
+	qtop->os = hw->os;
+
+	/* Allocate queue topology entries */
+	qtop->entries = kzalloc((EFCT_HW_MAX_QTOP_ENTRIES *
+				sizeof(*qtop->entries)), GFP_ATOMIC);
+	if (!qtop->entries) {
+		kfree(qtop);
+		kfree(tokarray.tokens);
+		return NULL;
+	}
+	qtop->alloc_count = EFCT_HW_MAX_QTOP_ENTRIES;
+	qtop->inuse_count = 0;
+
+	/* Parse the tokens */
+	if (parse_topology(hw, &tokarray, qtop)) {
+		efc_log_err(hw->os, "failed to parse tokens\n");
+		efct_hw_qtop_free(qtop);
+		kfree(tokarray.tokens);
+		return NULL;
+	}
+
+	/* Free the tokens array */
+	kfree(tokarray.tokens);
+
+	return qtop;
+}
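+
+/*
+ * Illustrative example only; no default topology string is defined in
+ * this patch.  Given the grammar accepted by tokenize()/parse_topology(),
+ * a string such as:
+ *
+ *	"eq cq mq $nwq(cq wq:class=1) rq rq:filter=1"
+ *
+ * would yield one EQ, a CQ, an MQ, $nwq (hw->config.n_wq) repetitions of
+ * a CQ/WQ pair with the WQ class attribute set to 1, and two RQs where
+ * the second carries filter_mask (1 << 1).
+ */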
+
+/**
+ * @brief free queue topology object
+ *
+ * @param qtop pointer to QTOP object
+ *
+ * @return none
+ */
+void
+efct_hw_qtop_free(struct efct_hw_qtop_s *qtop)
+{
+	if (qtop) {
+		kfree(qtop->entries);
+		kfree(qtop);
+	}
+}
diff --git a/drivers/scsi/elx/efct/efct_hw_queues.h b/drivers/scsi/elx/efct/efct_hw_queues.h
new file mode 100644
index 000000000000..363d48906670
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw_queues.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_HW_QUEUES_H__
+#define __EFCT_HW_QUEUES_H__
+
+#define EFCT_HW_MQ_DEPTH	128
+#include "efct_hw.h"
+
+enum efct_hw_qtop_entry_e {
+	QTOP_EQ = 0,
+	QTOP_CQ,
+	QTOP_WQ,
+	QTOP_RQ,
+	QTOP_MQ,
+	QTOP_LAST,
+};
+
+struct efct_hw_qtop_entry_s {
+	enum efct_hw_qtop_entry_e entry;
+	bool set_default;
+	u32 len;
+	u8 class;
+	u8 ulp;
+	u8 filter_mask;
+};
+
+struct efct_hw_mrq_s {
+	struct rq_config {
+		struct hw_eq_s *eq;
+		u32 len;
+		u8 class;
+		u8 ulp;
+		u8 filter_mask;
+	} rq_cfg[16];
+	u32 num_pairs;
+};
+
+#define MAX_TOKENS			256
+#define EFCT_HW_MAX_QTOP_ENTRIES	200
+
+struct efct_hw_qtop_s {
+	void *os;
+	struct efct_hw_qtop_entry_s *entries;
+	u32 alloc_count;
+	u32 inuse_count;
+	u32 entry_counts[QTOP_LAST];
+	u32 rptcount[10];
+	u32 rptcount_idx;
+};
+
+struct efct_hw_qtop_s *
+efct_hw_qtop_parse(struct efct_hw_s *hw, const char *qtop_string);
+void efct_hw_qtop_free(struct efct_hw_qtop_s *qtop);
+const char *efct_hw_qtop_entry_name(enum efct_hw_qtop_entry_e entry);
+u32 efct_hw_qtop_eq_count(struct efct_hw_s *hw);
+
+enum efct_hw_rtn_e
+efct_hw_init_queues(struct efct_hw_s *hw, struct efct_hw_qtop_s *qtop);
+extern struct hw_wq_s *
+efct_hw_queue_next_wq(struct efct_hw_s *hw, struct efct_hw_io_s *io);
+
+#endif /* __EFCT_HW_QUEUES_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (16 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 17/32] elx: efct: Hardware queues creation and deletion James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 19/32] elx: efct: Hardware IO and SGL initialization James Smart
                   ` (14 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
RQ data buffer allocation and deallocate.
Memory pool allocation and deallocation APIs.
Mailbox command submission and completion routines.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c    | 447 +++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h    |   7 +
 drivers/scsi/elx/efct/efct_utils.c | 662 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_utils.h | 113 +++++++
 4 files changed, 1229 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_utils.c
 create mode 100644 drivers/scsi/elx/efct/efct_utils.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index ecb3ccbf7c4c..ea23bb33e11d 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -23,6 +23,14 @@ target_wqe_timer_cb(struct timer_list *);
 static void
 shutdown_target_wqe_timer(struct efct_hw_s *hw);
 
+static int
+efct_hw_command_process(struct efct_hw_s *, int, u8 *, size_t);
+static int
+efct_hw_command_cancel(struct efct_hw_s *);
+static int
+efct_hw_mq_process(struct efct_hw_s *, int, struct sli4_queue_s *);
+
+
 static enum efct_hw_rtn_e
 efct_hw_link_event_init(struct efct_hw_s *hw)
 {
@@ -1296,3 +1304,442 @@ efct_get_wwn(struct efct_hw_s *hw, enum efct_hw_property_e prop)
 
 	return value;
 }
+
+/**
+ * @brief Allocate an efc_hw_rq_buffer_s array.
+ *
+ * @par Description
+ * An efc_hw_rq_buffer_s array is allocated, along with the required DMA memory.
+ *
+ * @param hw Pointer to HW object.
+ * @param rqindex RQ index for this buffer.
+ * @param count Count of buffers in array.
+ * @param size Size of buffer.
+ *
+ * @return Returns the pointer to the allocated efc_hw_rq_buffer_s array.
+ */
+static struct efc_hw_rq_buffer_s *
+efct_hw_rx_buffer_alloc(struct efct_hw_s *hw, u32 rqindex, u32 count,
+			u32 size)
+{
+	struct efct_s *efct = hw->os;
+	struct efc_hw_rq_buffer_s *rq_buf = NULL;
+	struct efc_hw_rq_buffer_s *prq;
+	u32 i;
+
+	if (count != 0) {
+		rq_buf = kmalloc_array(count, sizeof(*rq_buf), GFP_ATOMIC);
+		if (!rq_buf)
+			return NULL;
+		memset(rq_buf, 0, sizeof(*rq_buf) * count);
+
+		for (i = 0, prq = rq_buf; i < count; i ++, prq++) {
+			prq->rqindex = rqindex;
+			prq->dma.size = size;
+			prq->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+							   prq->dma.size,
+							   &prq->dma.phys,
+							   GFP_DMA);
+			if (!prq->dma.virt) {
+				efc_log_err(hw->os, "DMA allocation failed\n");
+				kfree(rq_buf);
+				rq_buf = NULL;
+				break;
+			}
+		}
+	}
+	return rq_buf;
+}
+
+/**
+ * @brief Free an efc_hw_rq_buffer_s array.
+ *
+ * @par Description
+ * The efc_hw_rq_buffer_s array is freed, along with allocated DMA memory.
+ *
+ * @param hw Pointer to HW object.
+ * @param rq_buf Pointer to efc_hw_rq_buffer_s array.
+ * @param count Count of buffers in array.
+ *
+ * @return None.
+ */
+static void
+efct_hw_rx_buffer_free(struct efct_hw_s *hw,
+		       struct efc_hw_rq_buffer_s *rq_buf,
+			u32 count)
+{
+	struct efct_s *efct = hw->os;
+	u32 i;
+	struct efc_hw_rq_buffer_s *prq;
+
+	if (rq_buf) {
+		for (i = 0, prq = rq_buf; i < count; i++, prq++) {
+			dma_free_coherent(&efct->pcidev->dev,
+					  prq->dma.size, prq->dma.virt,
+					  prq->dma.phys);
+			memset(&prq->dma, 0, sizeof(struct efc_dma_s));
+		}
+
+		kfree(rq_buf);
+	}
+}
+
+/**
+ * @brief Allocate the RQ data buffers.
+ *
+ * @param hw Pointer to HW object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_rx_allocate(struct efct_hw_s *hw)
+{
+	struct efct_s *efct = hw->os;
+	u32 i;
+	int rc = EFCT_HW_RTN_SUCCESS;
+	u32 rqindex = 0;
+	struct hw_rq_s *rq;
+	u32 hdr_size = EFCT_HW_RQ_SIZE_HDR;
+	u32 payload_size = hw->config.rq_default_buffer_size;
+
+	rqindex = 0;
+
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+
+		/* Allocate header buffers */
+		rq->hdr_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+						      rq->entry_count,
+						      hdr_size);
+		if (!rq->hdr_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc hdr_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d header  %4d by %4d bytes\n",
+			      i, rq->hdr->id, rq->entry_count, hdr_size);
+
+		rqindex++;
+
+		/* Allocate payload buffers */
+		rq->payload_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+							  rq->entry_count,
+							  payload_size);
+		if (!rq->payload_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc fb_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d default %4d by %4d bytes\n",
+			      i, rq->data->id, rq->entry_count, payload_size);
+		rqindex++;
+	}
+
+	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @brief Post the RQ data buffers to the chip.
+ *
+ * @param hw Pointer to HW object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_rx_post(struct efct_hw_s *hw)
+{
+	u32 i;
+	u32 idx;
+	u32 rq_idx;
+	int rc = 0;
+
+	/*
+	 * In RQ pair mode, we MUST post the header and payload buffer at the
+	 * same time.
+	 */
+	for (rq_idx = 0, idx = 0; rq_idx < hw->hw_rq_count; rq_idx++) {
+		struct hw_rq_s *rq = hw->hw_rq[rq_idx];
+
+		for (i = 0; i < rq->entry_count - 1; i++) {
+			struct efc_hw_sequence_s *seq;
+
+			seq = efct_array_get(hw->seq_pool, idx++);
+			if (!seq) {
+				rc = -1;
+				break;
+			}
+			seq->header = &rq->hdr_buf[i];
+			seq->payload = &rq->payload_buf[i];
+			rc = efct_hw_sequence_free(hw, seq);
+			if (rc)
+				break;
+		}
+		if (rc)
+			break;
+	}
+
+	return rc;
+}
+
+/**
+ * @brief Free the RQ data buffers.
+ *
+ * @param hw Pointer to HW object.
+ *
+ */
+void
+efct_hw_rx_free(struct efct_hw_s *hw)
+{
+	struct hw_rq_s *rq;
+	u32 i;
+
+	/* Free hw_rq buffers */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+		if (rq) {
+			efct_hw_rx_buffer_free(hw, rq->hdr_buf,
+					       rq->entry_count);
+			rq->hdr_buf = NULL;
+			efct_hw_rx_buffer_free(hw, rq->payload_buf,
+					       rq->entry_count);
+			rq->payload_buf = NULL;
+		}
+	}
+}
+
+/**
+ * @brief Submit queued (pending) mbx commands.
+ *
+ * @par Description
+ * Submit queued mailbox commands.
+ * --- Assumes that hw->cmd_lock is held ---
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+static int
+efct_hw_cmd_submit_pending(struct efct_hw_s *hw)
+{
+	struct efct_command_ctx_s *ctx = NULL;
+	int rc = 0;
+
+	/* Assumes lock held */
+
+	/* Only submit MQE if there's room */
+	while (hw->cmd_head_count < (EFCT_HW_MQ_DEPTH - 1) &&
+	       !list_empty(&hw->cmd_pending)) {
+		ctx = list_first_entry(&hw->cmd_pending,
+				       struct efct_command_ctx_s, list_entry);
+		if (!ctx)
+			break;
+
+		list_del(&ctx->list_entry);
+
+		INIT_LIST_HEAD(&ctx->list_entry);
+		list_add_tail(&ctx->list_entry, &hw->cmd_head);
+		hw->cmd_head_count++;
+		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
+			efc_log_test(hw->os,
+				      "sli_queue_write failed: %d\n", rc);
+			rc = -1;
+			break;
+		}
+	}
+	return rc;
+}
+
+/**
+ * @ingroup io
+ * @brief Issue a SLI command.
+ *
+ * @par Description
+ * Send a mailbox command to the hardware, and either wait for a completion
+ * (EFCT_CMD_POLL) or get an optional asynchronous completion (EFCT_CMD_NOWAIT).
+ *
+ * @param hw Hardware context.
+ * @param cmd Buffer containing a formatted command and results.
+ * @param opts Command options:
+ *  - EFCT_CMD_POLL - Cmd executes synchronously &
+ *		      busy-waits for the completion.
+ *  - EFCT_CMD_NOWAIT - Cmd executes asynchronously. Uses callback.
+ * @param cb Function callback used for asynchronous mode. May be NULL.
+ * @n Prototype is <tt>(*cb)(void *arg, u8 *cmd)</tt>.
+ * @n @n @b Note: If the
+ * callback function pointer is NULL, the results of the command are silently
+ * discarded, allowing this pointer to exist solely on the stack.
+ * @param arg Argument passed to an asynchronous callback.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_command(struct efct_hw_s *hw, u8 *cmd, u32 opts, void *cb,
+		void *arg)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_ERROR;
+	unsigned long flags = 0;
+	void *bmbx = NULL;
+
+	/*
+	 * If the chip is in an error state (UE'd) then reject this mailbox
+	 *  command.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		efc_log_crit(hw->os,
+			      "status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(&hw->sli),
+			sli_reg_read_err1(&hw->sli),
+			sli_reg_read_err2(&hw->sli));
+
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (opts == EFCT_CMD_POLL) {
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+		bmbx = hw->sli.bmbx.virt;
+
+		memset(bmbx, 0, SLI4_BMBX_SIZE);
+		memcpy(bmbx, cmd, SLI4_BMBX_SIZE);
+
+		if (sli_bmbx_command(&hw->sli) == 0) {
+			rc = EFCT_HW_RTN_SUCCESS;
+			memcpy(cmd, bmbx, SLI4_BMBX_SIZE);
+		}
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+	} else if (opts == EFCT_CMD_NOWAIT) {
+		struct efct_command_ctx_s	*ctx = NULL;
+
+		ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+		if (!ctx)
+			return EFCT_HW_RTN_NO_RESOURCES;
+
+		memset(ctx, 0, sizeof(struct efct_command_ctx_s));
+
+		if (hw->state != EFCT_HW_STATE_ACTIVE) {
+			efc_log_err(hw->os,
+				     "Can't send command, HW state=%d\n",
+				    hw->state);
+			kfree(ctx);
+			return EFCT_HW_RTN_ERROR;
+		}
+
+		if (cb) {
+			ctx->cb = cb;
+			ctx->arg = arg;
+		}
+		ctx->buf = cmd;
+		ctx->ctx = hw;
+
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+
+			/* Add to pending list */
+			INIT_LIST_HEAD(&ctx->list_entry);
+			list_add_tail(&ctx->list_entry, &hw->cmd_pending);
+
+			/* Submit as much of the pending list as we can */
+			if (efct_hw_cmd_submit_pending(hw) == 0)
+				rc = EFCT_HW_RTN_SUCCESS;
+
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+	}
+
+	return rc;
+}
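+
+/*
+ * Illustrative usage (not part of this patch): submitting a mailbox
+ * command asynchronously.  "my_mbox_done" and the command-formatting
+ * step are placeholders; only efct_hw_command() is defined above.
+ *
+ *	static void my_mbox_done(struct efct_hw_s *hw, int status,
+ *				 u8 *mqe, void *arg)
+ *	{
+ *		// examine status and the completed MQE here
+ *	}
+ *
+ *	u8 buf[SLI4_BMBX_SIZE] = { 0 };
+ *
+ *	// ... format a mailbox command into buf via an SLI helper ...
+ *	if (efct_hw_command(hw, buf, EFCT_CMD_NOWAIT, my_mbox_done, NULL))
+ *		// handle submission failure
+ */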
+
+static int
+efct_hw_command_process(struct efct_hw_s *hw, int status, u8 *mqe,
+			size_t size)
+{
+	struct efct_command_ctx_s *ctx = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		ctx = list_first_entry(&hw->cmd_head,
+				       struct efct_command_ctx_s, list_entry);
+		list_del(&ctx->list_entry);
+	}
+	if (!ctx) {
+		efc_log_err(hw->os, "no command context?!?\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return -1;
+	}
+
+	hw->cmd_head_count--;
+
+	/* Post any pending requests */
+	efct_hw_cmd_submit_pending(hw);
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	if (ctx->cb) {
+		if (ctx->buf)
+			memcpy(ctx->buf, mqe, size);
+
+		ctx->cb(hw, status, ctx->buf, ctx->arg);
+	}
+
+	memset(ctx, 0, sizeof(struct efct_command_ctx_s));
+	kfree(ctx);
+
+	return 0;
+}
+
+/**
+ * @brief Process entries on the given mailbox queue.
+ *
+ * @param hw Hardware context.
+ * @param status CQE status.
+ * @param mq Pointer to the mailbox queue object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+efct_hw_mq_process(struct efct_hw_s *hw,
+		   int status, struct sli4_queue_s *mq)
+{
+	u8		mqe[SLI4_BMBX_SIZE];
+
+	if (!sli_mq_read(&hw->sli, mq, mqe))
+		efct_hw_command_process(hw, status, mqe, mq->size);
+
+	return 0;
+}
+
+static int
+efct_hw_command_cancel(struct efct_hw_s *hw)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+
+	/*
+	 * Manually clean up remaining commands. Note: since this calls
+	 * efct_hw_command_process(), we'll also process the cmd_pending
+	 * list, so no need to manually clean that out.
+	 */
+	while (!list_empty(&hw->cmd_head)) {
+		u8		mqe[SLI4_BMBX_SIZE] = { 0 };
+		struct efct_command_ctx_s *ctx =
+	list_first_entry(&hw->cmd_head, struct efct_command_ctx_s, list_entry);
+
+		efc_log_test(hw->os, "hung command %08x\n",
+			      !ctx ? U32_MAX :
+			      (!ctx->buf ? U32_MAX :
+			       *((u32 *)ctx->buf)));
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		efct_hw_command_process(hw, -1, mqe, SLI4_BMBX_SIZE);
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+	}
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 9636e6dbe259..161f9001a5c6 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1023,4 +1023,11 @@ efct_hw_set_ptr(struct efct_hw_s *hw, enum efct_hw_property_e prop,
 extern uint64_t
 efct_get_wwn(struct efct_hw_s *hw, enum efct_hw_property_e prop);
 
+enum efct_hw_rtn_e efct_hw_rx_allocate(struct efct_hw_s *hw);
+enum efct_hw_rtn_e efct_hw_rx_post(struct efct_hw_s *hw);
+void efct_hw_rx_free(struct efct_hw_s *hw);
+extern enum efct_hw_rtn_e
+efct_hw_command(struct efct_hw_s *hw, u8 *cmd, u32 opts, void *cb,
+		void *arg);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_utils.c b/drivers/scsi/elx/efct/efct_utils.c
new file mode 100644
index 000000000000..3c2deca23420
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_utils.c
@@ -0,0 +1,662 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_utils.h"
+
+#define DEFAULT_SLAB_LEN		(64 * 1024)
+
+struct pool_hdr_s {
+	struct list_head list_entry;
+};
+
+struct efct_array_s {
+	void *os;
+
+	u32 size;
+	u32 count;
+
+	u32 n_rows;
+	u32 elems_per_row;
+	u32 bytes_per_row;
+
+	void **array_rows;
+	u32 array_rows_len;
+};
+
+static u32 slab_len = DEFAULT_SLAB_LEN;
+
+/**
+ * @brief Void pointer array structure
+ *
+ * This structure describes an object consisting of an array of void
+ * pointers. The object is allocated with a maximum array size; entries
+ * are then added to the array while an entry count is maintained. A set of
+ * iterator APIs is included to facilitate cycling through the array
+ * entries in a circular fashion.
+ *
+ */
+struct efct_varray_s {
+	void *os;
+	u32 array_count;	/*>> maximum entry count in array */
+	void **array;		/*>> pointer to allocated array memory */
+	u32 entry_count;	/*>> number of entries added to the array */
+	uint next_index;	/*>> iterator next index */
+	spinlock_t lock;	/*>> iterator lock */
+};
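+
+/*
+ * Illustrative usage (not part of this patch); "items" is a placeholder
+ * for whatever pointers the caller stores:
+ *
+ *	struct efct_varray_s *va = efct_varray_alloc(os, count);
+ *
+ *	for (i = 0; i < count; i++)
+ *		efct_varray_add(va, items[i]);
+ *
+ *	next = efct_varray_iter_next(va);	// items[0], items[1], ...,
+ *						// wrapping back to items[0]
+ */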
+
+/**
+ * @brief Set array slab allocation length
+ *
+ * The slab length is the maximum allocation length that the array uses.
+ * The default 64k slab length may be overridden using this function.
+ *
+ * @param len new slab length.
+ *
+ * @return none
+ */
+void
+efct_array_set_slablen(u32 len)
+{
+	slab_len = len;
+}
+
+/**
+ * @brief Allocate an array object
+ *
+ * An array object of size and number of elements is allocated
+ *
+ * @param os OS handle
+ * @param size size of array elements in bytes
+ * @param count number of elements in array
+ *
+ * @return pointer to array object or NULL
+ */
+struct efct_array_s *
+efct_array_alloc(void *os, u32 size, u32 count)
+{
+	struct efct_array_s *array = NULL;
+	u32 i;
+
+	/* Fail if the item size exceeds slab_len - caller should increase
+	 * the slab length, or not use this API.
+	 */
+	if (size > slab_len) {
+		pr_err("Error: size exceeds slab length\n");
+		return NULL;
+	}
+
+	array = kmalloc(sizeof(*array), GFP_KERNEL);
+	if (!array)
+		return NULL;
+
+	memset(array, 0, sizeof(*array));
+	array->os = os;
+	array->size = size;
+	array->count = count;
+	array->elems_per_row = slab_len / size;
+	array->n_rows = (count + array->elems_per_row - 1) /
+			array->elems_per_row;
+	array->bytes_per_row = array->elems_per_row * array->size;
+
+	array->array_rows_len = array->n_rows * sizeof(*array->array_rows);
+	array->array_rows = kmalloc(array->array_rows_len, GFP_ATOMIC);
+	if (!array->array_rows) {
+		efct_array_free(array);
+		return NULL;
+	}
+	memset(array->array_rows, 0, array->array_rows_len);
+	for (i = 0; i < array->n_rows; i++) {
+		array->array_rows[i] = kmalloc(array->bytes_per_row,
+					       GFP_KERNEL);
+		if (!array->array_rows[i]) {
+			efct_array_free(array);
+			return NULL;
+		}
+		memset(array->array_rows[i], 0, array->bytes_per_row);
+	}
+
+	return array;
+}
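+
+/*
+ * Sizing example (illustrative): with the default 64 KB slab and a
+ * 512-byte element, elems_per_row is 128, so a request for 1000 elements
+ * allocates n_rows = 8 row buffers of bytes_per_row = 64 KB each.
+ */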
+
+/**
+ * @brief Free an array object
+ *
+ * Frees a previously allocated array object
+ *
+ * @param array pointer to array object
+ *
+ * @return none
+ */
+void
+efct_array_free(struct efct_array_s *array)
+{
+	u32 i;
+
+	if (array) {
+		if (array->array_rows) {
+			for (i = 0; i < array->n_rows; i++)
+				kfree(array->array_rows[i]);
+
+			kfree(array->array_rows);
+		}
+		kfree(array);
+	}
+}
+
+/**
+ * @brief Return reference to an element of an array object
+ *
+ * Return the address of an array element given an index
+ *
+ * @param array pointer to array object
+ * @param idx array element index
+ *
+ * @return pointer to array element, or NULL if index out of range
+ */
+void *efct_array_get(struct efct_array_s *array, u32 idx)
+{
+	void *entry = NULL;
+
+	if (idx < array->count) {
+		u32 row = idx / array->elems_per_row;
+		u32 offset = idx % array->elems_per_row;
+
+		entry = ((u8 *)array->array_rows[row]) +
+			 (offset * array->size);
+	}
+	return entry;
+}
+
+/**
+ * @brief Return number of elements in an array
+ *
+ * Return the number of elements in an array
+ *
+ * @param array pointer to array object
+ *
+ * @return returns count of elements in an array
+ */
+u32
+efct_array_get_count(struct efct_array_s *array)
+{
+	return array->count;
+}
+
+/**
+ * @brief Return size of array elements in bytes
+ *
+ * Returns the size in bytes of each array element
+ *
+ * @param array pointer to array object
+ *
+ * @return size of array element
+ */
+u32
+efct_array_get_size(struct efct_array_s *array)
+{
+	return array->size;
+}
+
+/**
+ * @brief Allocate a void pointer array
+ *
+ * A void pointer array of given length is allocated.
+ *
+ * @param os OS handle
+ * @param array_count Array size
+ *
+ * @return returns a pointer to the efct_varray_s object, otherwise NULL on error
+ */
+struct efct_varray_s *
+efct_varray_alloc(void *os, u32 array_count)
+{
+	struct efct_varray_s *va;
+
+	va = kmalloc(sizeof(*va), GFP_ATOMIC);
+	if (va) {
+		memset(va, 0, sizeof(*va));
+		va->os = os;
+		va->array_count = array_count;
+		va->array = kmalloc_array(va->array_count, sizeof(*va->array),
+					  GFP_KERNEL);
+		if (va->array) {
+			va->next_index = 0;
+			spin_lock_init(&va->lock);
+		} else {
+			kfree(va);
+			va = NULL;
+		}
+	}
+	return va;
+}
+
+/**
+ * @brief Free a void pointer array
+ *
+ * The void pointer array object is freed
+ *
+ * @param va Pointer to void pointer array
+ *
+ * @return none
+ */
+void
+efct_varray_free(struct efct_varray_s *va)
+{
+	if (va) {
+		kfree(va->array);
+		kfree(va);
+	}
+}
+
+/**
+ * @brief Add an entry to a void pointer array
+ *
+ * An entry is added to the void pointer array
+ *
+ * @param va Pointer to void pointer array
+ * @param entry Pointer to entry to add
+ *
+ * @return returns 0 if entry was added, -1 if there is no more space in the
+ * array
+ */
+int
+efct_varray_add(struct efct_varray_s *va, void *entry)
+{
+	u32 rc = -1;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&va->lock, flags);
+		if (va->entry_count < va->array_count) {
+			va->array[va->entry_count++] = entry;
+			rc = 0;
+		}
+	spin_unlock_irqrestore(&va->lock, flags);
+
+	return rc;
+}
+
+/**
+ * @brief Reset the void pointer array iterator
+ *
+ * The next index value of the void pointer array iterator is cleared.
+ *
+ * @param va Pointer to void pointer array
+ *
+ * @return none
+ */
+void
+efct_varray_iter_reset(struct efct_varray_s *va)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&va->lock, flags);
+		va->next_index = 0;
+	spin_unlock_irqrestore(&va->lock, flags);
+}
+
+/**
+ * @brief Return next entry from a void pointer array
+ *
+ * The next entry in the void pointer array is returned.
+ *
+ * @param va Pointer to void pointer array
+ *
+ * Note: takes the void pointer array lock
+ *
+ * @return returns next void pointer entry
+ */
+void *
+efct_varray_iter_next(struct efct_varray_s *va)
+{
+	void *rval = NULL;
+	unsigned long flags = 0;
+
+	if (va) {
+		spin_lock_irqsave(&va->lock, flags);
+			rval = _efct_varray_iter_next(va);
+		spin_unlock_irqrestore(&va->lock, flags);
+	}
+	return rval;
+}
+
+/**
+ * @brief Return next entry from a void pointer array
+ *
+ * The next entry in the void pointer array is returned.
+ *
+ * @param va Pointer to void pointer array
+ *
+ * Note: doesn't take the void pointer array lock
+ *
+ * @return returns next void pointer entry
+ */
+void *
+_efct_varray_iter_next(struct efct_varray_s *va)
+{
+	void *rval;
+
+	rval = va->array[va->next_index];
+	if (++va->next_index >= va->entry_count)
+		va->next_index = 0;
+	return rval;
+}
+
+/**
+ * @brief Return entry count for a void pointer array
+ *
+ * The entry count for a void pointer array is returned
+ *
+ * @param va Pointer to void pointer array
+ *
+ * @return returns entry count
+ */
+u32
+efct_varray_get_count(struct efct_varray_s *va)
+{
+	u32 rc;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&va->lock, flags);
+		rc = va->entry_count;
+	spin_unlock_irqrestore(&va->lock, flags);
+	return rc;
+}
+
+/**
+ * The efct_pool_s data structure consists of:
+ *
+ *	pool->a		An efct_array_s.
+ *	pool->freelist	A linked list of free items.
+ *
+ *	When a pool is allocated using efct_pool_alloc(), the caller
+ *	provides the size in bytes of each memory pool item (size), and
+ *	a count of items (count). Since efct_pool_alloc() has no visibility
+ *	into the object the caller is allocating, a link for the linked list
+ *	is "pre-pended".  Thus when allocating the efct_array_s, the size used
+ *	is the size of the pool_hdr_s plus the requested memory pool item size.
+ *
+ *	array item layout:
+ *
+ *		pool_hdr_s
+ *		pool data[size]
+ *
+ *	The address of the pool data is returned when allocated (using
+ *	efct_pool_get(), or efct_pool_get_instance()), and received when being
+ *	freed (using efct_pool_put()). So the address returned by the array item
+ *	(efct_array_get()) must be offset by the size of pool_hdr_s.
+ */
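+
+/*
+ * Illustrative usage (not part of this patch); "struct foo" stands in
+ * for whatever object the caller pools:
+ *
+ *	struct efct_pool_s *pool = efct_pool_alloc(os, sizeof(struct foo), 8);
+ *	struct foo *item = efct_pool_get(pool);	// taken from the free list
+ *
+ *	... use item ...
+ *
+ *	efct_pool_put(pool, item);		// returned to the free list
+ *	efct_pool_free(pool);
+ */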
+
+/**
+ * @brief Allocate a memory pool.
+ *
+ * A memory pool of given size and item count is allocated.
+ *
+ * @param os OS handle.
+ * @param size Size in bytes of item.
+ * @param count Number of items in a memory pool.
+ *
+ * @return Returns pointer to allocated memory pool, or NULL.
+ */
+struct efct_pool_s *
+efct_pool_alloc(void *os, u32 size, u32 count)
+{
+	struct efct_pool_s *pool;
+	struct pool_hdr_s *pool_entry;
+	u32 i;
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return NULL;
+
+	memset(pool, 0, sizeof(*pool));
+	pool->os = os;
+
+	/* Allocate an array where each array item is the size of a pool_hdr_s
+	 * plus the requested memory item size (size)
+	 */
+	pool->a = efct_array_alloc(os, size + sizeof(struct pool_hdr_s),
+				   count);
+	if (!pool->a) {
+		efct_pool_free(pool);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&pool->freelist);
+	for (i = 0; i < count; i++) {
+		pool_entry = (struct pool_hdr_s *)efct_array_get(pool->a, i);
+		INIT_LIST_HEAD(&pool_entry->list_entry);
+		list_add_tail(&pool_entry->list_entry, &pool->freelist);
+	}
+
+	spin_lock_init(&pool->lock);
+
+	return pool;
+}
+
+/**
+ * @brief Reset a memory pool.
+ *
+ * Place all pool elements on the free list, and zero them.
+ *
+ * @param pool Pointer to the pool object.
+ *
+ * @return None.
+ */
+void
+efct_pool_reset(struct efct_pool_s *pool)
+{
+	u32 i;
+	u32 count = efct_array_get_count(pool->a);
+	u32 size = efct_array_get_size(pool->a);
+	unsigned long flags = 0;
+	struct pool_hdr_s *pool_entry;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	/*
+	 * Remove all the entries from the free list, otherwise we will
+	 * encounter linked-list asserts when they are re-added.
+	 */
+	while (!list_empty(&pool->freelist)) {
+		pool_entry = list_first_entry(&pool->freelist,
+					      struct pool_hdr_s, list_entry);
+		list_del(&pool_entry->list_entry);
+	}
+
+	/* Reset the free list */
+	INIT_LIST_HEAD(&pool->freelist);
+
+	/* Return all elements to the free list and zero the elements */
+	for (i = 0; i < count; i++) {
+		pool_entry = (struct pool_hdr_s *)efct_array_get(pool->a, i);
+		memset(pool_entry, 0, size - sizeof(struct pool_hdr_s));
+		INIT_LIST_HEAD(&pool_entry->list_entry);
+		list_add_tail(&pool_entry->list_entry, &pool->freelist);
+	}
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+/**
+ * @brief Free a previously allocated memory pool.
+ *
+ * The memory pool is freed.
+ *
+ * @param pool Pointer to memory pool.
+ *
+ * @return None.
+ */
+void
+efct_pool_free(struct efct_pool_s *pool)
+{
+	if (pool) {
+		if (pool->a)
+			efct_array_free(pool->a);
+		kfree(pool);
+	}
+}
+
+/**
+ * @brief Allocate a memory pool item
+ *
+ * A memory pool item is taken from the free list and returned.
+ *
+ * @param pool Pointer to memory pool.
+ *
+ * @return Pointer to allocated item, otherwise NULL if there are
+ * no unallocated items.
+ */
+void *
+efct_pool_get(struct efct_pool_s *pool)
+{
+	struct pool_hdr_s *h = NULL;
+	void *item = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	if (!list_empty(&pool->freelist)) {
+		h = list_first_entry(&pool->freelist, struct pool_hdr_s,
+				     list_entry);
+	}
+
+	if (h) {
+		list_del(&h->list_entry);
+		/*
+		 * Return the array item address offset by the size of
+		 * pool_hdr_s
+		 */
+		item = &h[1];
+	}
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+	return item;
+}
+
+/**
+ * @brief free memory pool item
+ *
+ * A memory pool item is freed.
+ *
+ * @param pool Pointer to memory pool.
+ * @param item Pointer to item to free.
+ *
+ * @return None.
+ */
+void
+efct_pool_put(struct efct_pool_s *pool, void *item)
+{
+	struct pool_hdr_s *h;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	/* Fetch the address of the array item, which is the item address
+	 * negatively offset by the size of pool_hdr_s (note the index of [-1])
+	 */
+	h = &((struct pool_hdr_s *)item)[-1];
+
+	INIT_LIST_HEAD(&h->list_entry);
+	list_add_tail(&h->list_entry, &pool->freelist);
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+/**
+ * @brief free memory pool item
+ *
+ * A memory pool item is freed to head of list.
+ *
+ * @param pool Pointer to memory pool.
+ * @param item Pointer to item to free.
+ *
+ * @return None.
+ */
+void
+efct_pool_put_head(struct efct_pool_s *pool, void *item)
+{
+	struct pool_hdr_s *h;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	/* Fetch the address of the array item, which is the item address
+	 * negatively offset by the size of pool_hdr_s (note the index of [-1])
+	 */
+	h = &((struct pool_hdr_s *)item)[-1];
+
+	INIT_LIST_HEAD(&h->list_entry);
+	list_add(&h->list_entry, &pool->freelist);
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+/**
+ * @brief Return memory pool item count.
+ *
+ * Returns the allocated number of items.
+ *
+ * @param pool Pointer to memory pool.
+ *
+ * @return Returns count of allocated items.
+ */
+u32
+efct_pool_get_count(struct efct_pool_s *pool)
+{
+	u32 count;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+	count = efct_array_get_count(pool->a);
+	spin_unlock_irqrestore(&pool->lock, flags);
+	return count;
+}
+
+/**
+ * @brief Return item given an index.
+ *
+ * A pointer to a memory pool item is returned given an index.
+ *
+ * @param pool Pointer to memory pool.
+ * @param idx Index.
+ *
+ * @return Returns pointer to item, or NULL if index is invalid.
+ */
+void *
+efct_pool_get_instance(struct efct_pool_s *pool, u32 idx)
+{
+	struct pool_hdr_s *h = efct_array_get(pool->a, idx);
+
+	if (!h)
+		return NULL;
+	return &h[1];
+}
+
+/**
+ * @brief Return count of free objects in a pool.
+ *
+ * The number of objects on a pool's free list.
+ *
+ * @param pool Pointer to memory pool.
+ *
+ * @return Returns count of objects on free list.
+ */
+u32
+efct_pool_get_freelist_count(struct efct_pool_s *pool)
+{
+	u32 count = 0;
+	struct pool_hdr_s *item;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	list_for_each_entry(item, &pool->freelist, list_entry) {
+		count++;
+	}
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+	return count;
+}
diff --git a/drivers/scsi/elx/efct/efct_utils.h b/drivers/scsi/elx/efct/efct_utils.h
new file mode 100644
index 000000000000..c9743ed37b9b
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_utils.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_UTILS_H__
+#define __EFCT_UTILS_H__
+
+/* Sparse vector structure. */
+struct sparse_vector_s {
+	void *os;
+	u32 max_idx;		/**< maximum index value */
+	void **array;		/**< pointer to 3D array */
+};
+
+#define EFCT_LOG_ENABLE_SCSI_TRACE(efct)                \
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 2)) != 0) : 0)
+#define EFCT_LOG_ENABLE_ELS_TRACE(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 1)) != 0) : 0)
+#define EFCT_LOG_ENABLE_IO_ERRORS(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 6)) != 0) : 0)
+#define EFCT_LOG_ENABLE_LIO_IO_TRACE(efct)	\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 7)) != 0) : 0)
+#define EFCT_LOG_ENABLE_LIO_TRACE(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 8)) != 0) : 0)
+
+#define SPV_ROWLEN	256
+#define SPV_DIM		3
+
+struct efct_pool_s {
+	void *os;
+	struct efct_array_s *a;
+	struct list_head freelist;
+	/* Protects freelist */
+	spinlock_t lock;
+};
+
+extern void
+efct_array_set_slablen(u32 len);
+extern struct efct_array_s *
+efct_array_alloc(void *os, u32 size, u32 count);
+extern void
+efct_array_free(struct efct_array_s *array);
+extern void *
+efct_array_get(struct efct_array_s *array, u32 idx);
+extern u32
+efct_array_get_count(struct efct_array_s *array);
+extern u32
+efct_array_get_size(struct efct_array_s *array);
+
+extern struct efct_varray_s *
+efct_varray_alloc(void *os, u32 entry_count);
+extern void
+efct_varray_free(struct efct_varray_s *ai);
+extern int
+efct_varray_add(struct efct_varray_s *ai, void *entry);
+extern void
+efct_varray_iter_reset(struct efct_varray_s *ai);
+extern void *
+efct_varray_iter_next(struct efct_varray_s *ai);
+extern void *
+_efct_varray_iter_next(struct efct_varray_s *ai);
+extern void
+efct_varray_unlock(struct efct_varray_s *ai);
+extern u32
+efct_varray_get_count(struct efct_varray_s *ai);
+
+/**
+ * @brief Sparse Vector API
+ *
+ * This is a trimmed down sparse vector implementation tuned to the problem of
+ * 24-bit FC_IDs. In this case, the 24-bit index value is broken down into
+ * three 8-bit values. These values are used to index up to three 256-element
+ * arrays. Arrays are allocated only when needed. @n @n
+ * The lookup can complete in constant time (3 indexed array references). @n @n
+ * A typical use case would be that the fabric/directory FC_IDs would cause two
+ * rows to be allocated, and the fabric assigned remote nodes would cause two
+ * rows to be allocated, with the root row always allocated. This gives five
+ * rows of 256 x sizeof(void*), resulting in 10k.
+ */
+/*!
+ * @defgroup spv Sparse Vector
+ */
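+
+/*
+ * Worked example (illustrative): assuming the most-significant byte of the
+ * FC_ID selects the root row, a lookup of FC_ID 0x010203 walks the three
+ * 8-bit indices 0x01, 0x02 and 0x03 (one indexed array reference per level)
+ * to reach the slot written by efct_spv_set().
+ */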
+
+void efct_spv_del(struct sparse_vector_s *spv);
+struct sparse_vector_s *efct_spv_new(void *os);
+void efct_spv_set(struct sparse_vector_s *sv, u32 idx, void *value);
+void *efct_spv_get(struct sparse_vector_s *sv, u32 idx);
+
+/**
+ * @brief Memory pool API declarations
+ *
+ */
+extern struct efct_pool_s *
+efct_pool_alloc(void *os, u32 size, u32 count);
+extern void
+efct_pool_reset(struct efct_pool_s *pool);
+extern void
+efct_pool_free(struct efct_pool_s *pool);
+extern void *
+efct_pool_get(struct efct_pool_s *pool);
+extern void
+efct_pool_put(struct efct_pool_s *pool, void *arg);
+extern void
+efct_pool_put_head(struct efct_pool_s *pool, void *arg);
+extern u32
+efct_pool_get_count(struct efct_pool_s *pool);
+extern void *
+efct_pool_get_instance(struct efct_pool_s *pool, u32 instance);
+extern u32
+efct_pool_get_freelist_count(struct efct_pool_s *pool);
+#endif /* __EFCT_UTILS_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 19/32] elx: efct: Hardware IO and SGL initialization
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (17 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 20/32] elx: efct: Hardware queues processing James Smart
                   ` (13 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to create IO interfaces (wqs, etc), SGL initialization,
and configure hardware features.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 1530 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |   46 ++
 2 files changed, 1576 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index ea23bb33e11d..ae0f49e5d751 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -29,7 +29,26 @@ static int
 efct_hw_command_cancel(struct efct_hw_s *);
 static int
 efct_hw_mq_process(struct efct_hw_s *, int, struct sli4_queue_s *);
+static enum efct_hw_rtn_e
+efct_hw_setup_io(struct efct_hw_s *);
+static enum efct_hw_rtn_e
+efct_hw_init_io(struct efct_hw_s *);
+static int
+efct_hw_io_cancel(struct efct_hw_s *);
+static void
+efct_hw_io_restore_sgl(struct efct_hw_s *, struct efct_hw_io_s *);
 
+static enum efct_hw_rtn_e
+efct_hw_config_set_fdt_xfer_hint(struct efct_hw_s *hw, u32 fdt_xfer_hint);
+static int
+efct_hw_config_mrq(struct efct_hw_s *hw, u8, u16);
+static enum efct_hw_rtn_e
+efct_hw_config_watchdog_timer(struct efct_hw_s *hw);
+static enum efct_hw_rtn_e
+efct_hw_config_sli_port_health_check(struct efct_hw_s *hw, u8 query,
+				     u8 enable);
+static enum efct_hw_rtn_e
+efct_hw_set_dif_seed(struct efct_hw_s *hw);
 
 static enum efct_hw_rtn_e
 efct_hw_link_event_init(struct efct_hw_s *hw)
@@ -1743,3 +1762,1514 @@ efct_hw_command_cancel(struct efct_hw_s *hw)
 
 	return 0;
 }
+
+/**
+ * @brief Initialize IO fields on each free call.
+ *
+ * @n @b Note: This is done on each free call (as opposed to each
+ * alloc call) because port-owned XRIs are not
+ * allocated with efct_hw_io_alloc() but are freed with this
+ * function.
+ *
+ * @param io Pointer to HW IO.
+ */
+static inline void
+efct_hw_init_free_io(struct efct_hw_io_s *io)
+{
+	/*
+	 * Set io->done to NULL, to avoid any callbacks, should
+	 * a completion be received for one of these IOs
+	 */
+	io->done = NULL;
+	io->abort_done = NULL;
+	io->status_saved = false;
+	io->abort_in_progress = false;
+	io->rnode = NULL;
+	io->type = 0xFFFF;
+	io->wq = NULL;
+	io->ul_io = NULL;
+	io->tgt_wqe_timeout = 0;
+}
+
+/**
+ * @brief Initialize the pool of HW IO objects.
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static enum efct_hw_rtn_e
+efct_hw_setup_io(struct efct_hw_s *hw)
+{
+	u32	i = 0;
+	struct efct_hw_io_s	*io = NULL;
+	uintptr_t	xfer_virt = 0;
+	uintptr_t	xfer_phys = 0;
+	u32	index;
+	bool new_alloc = true;
+	struct efc_dma_s *dma;
+	struct efct_s *efct = hw->os;
+
+	if (!hw->io) {
+		hw->io = kmalloc_array(hw->config.n_io, sizeof(io),
+				 GFP_KERNEL);
+
+		if (!hw->io)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		memset(hw->io, 0, hw->config.n_io * sizeof(io));
+
+		for (i = 0; i < hw->config.n_io; i++) {
+			hw->io[i] = kmalloc(sizeof(*io), GFP_KERNEL);
+			if (!hw->io[i])
+				goto error;
+
+			memset(hw->io[i], 0, sizeof(struct efct_hw_io_s));
+		}
+
+		/* Create WQE buffs for IO */
+		hw->wqe_buffs = kmalloc((hw->config.n_io *
+					     hw->sli.wqe_size),
+					     GFP_ATOMIC);
+		if (!hw->wqe_buffs) {
+			kfree(hw->io);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+		memset(hw->wqe_buffs, 0, (hw->config.n_io *
+					hw->sli.wqe_size));
+
+	} else {
+		/* re-use existing IOs, including SGLs */
+		new_alloc = false;
+	}
+
+	if (new_alloc) {
+		dma = &hw->xfer_rdy;
+		dma->size = sizeof(struct fcp_txrdy) * hw->config.n_io;
+		dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+					       dma->size, &dma->phys, GFP_DMA);
+		if (!dma->virt)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+	xfer_virt = (uintptr_t)hw->xfer_rdy.virt;
+	xfer_phys = hw->xfer_rdy.phys;
+
+	for (i = 0; i < hw->config.n_io; i++) {
+		struct hw_wq_callback_s *wqcb;
+
+		io = hw->io[i];
+
+		/* initialize IO fields */
+		io->hw = hw;
+
+		/* Assign a WQE buff */
+		io->wqe.wqebuf = &hw->wqe_buffs[i * hw->sli.wqe_size];
+
+		/* Allocate the request tag for this IO */
+		wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_io, io);
+		if (!wqcb) {
+			efc_log_err(hw->os, "can't allocate request tag\n");
+			return EFCT_HW_RTN_NO_RESOURCES;
+		}
+		io->reqtag = wqcb->instance_index;
+
+		/* Now for the fields that are initialized on each free */
+		efct_hw_init_free_io(io);
+
+		/* The XB flag isn't cleared on IO free, so init to zero */
+		io->xbusy = 0;
+
+		if (sli_resource_alloc(&hw->sli, SLI_RSRC_XRI,
+				       &io->indicator, &index)) {
+			efc_log_err(hw->os,
+				     "sli_resource_alloc failed @ %d\n", i);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+		if (new_alloc) {
+			dma = &io->def_sgl;
+			dma->size = hw->config.n_sgl *
+					sizeof(struct sli4_sge_s);
+			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       dma->size, &dma->phys,
+						       GFP_DMA);
+			if (!dma->virt) {
+				efc_log_err(hw->os, "dma_alloc fail %d\n", i);
+				memset(&io->def_sgl, 0,
+				       sizeof(struct efc_dma_s));
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+		}
+		io->def_sgl_count = hw->config.n_sgl;
+		io->sgl = &io->def_sgl;
+		io->sgl_count = io->def_sgl_count;
+
+		if (hw->xfer_rdy.size) {
+			io->xfer_rdy.virt = (void *)xfer_virt;
+			io->xfer_rdy.phys = xfer_phys;
+			io->xfer_rdy.size = sizeof(struct fcp_txrdy);
+
+			xfer_virt += sizeof(struct fcp_txrdy);
+			xfer_phys += sizeof(struct fcp_txrdy);
+		}
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+error:
+	for (i = 0; i < hw->config.n_io && hw->io[i]; i++) {
+		kfree(hw->io[i]);
+		hw->io[i] = NULL;
+	}
+
+	kfree(hw->io);
+	hw->io = NULL;
+
+	return EFCT_HW_RTN_NO_MEMORY;
+}
+
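+/**
+ * @brief Initialize the pre-allocated pool of HW IO objects.
+ *
+ * @par Description
+ * If SGLs are pre-registered, the default SGLs are posted to the adapter
+ * via POST_SGL_PAGES mailbox commands; every IO is then added to the
+ * free list.
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */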
+static enum efct_hw_rtn_e
+efct_hw_init_io(struct efct_hw_s *hw)
+{
+	u32	i = 0, io_index = 0;
+	bool prereg = false;
+	struct efct_hw_io_s	*io = NULL;
+	u8		cmd[SLI4_BMBX_SIZE];
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+	u32	nremaining;
+	u32	n = 0;
+	u32	sgls_per_request = 256;
+	struct efc_dma_s	**sgls = NULL;
+	struct efc_dma_s	reqbuf;
+	struct efct_s *efct = hw->os;
+
+	prereg = hw->sli.sgl_pre_registered;
+
+	memset(&reqbuf, 0, sizeof(struct efc_dma_s));
+	if (prereg) {
+		sgls = kmalloc_array(sgls_per_request, sizeof(*sgls),
+				     GFP_ATOMIC);
+		if (!sgls)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		reqbuf.size = 32 + sgls_per_request * 16;
+		reqbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						 reqbuf.size, &reqbuf.phys,
+						 GFP_DMA);
+		if (!reqbuf.virt) {
+			efc_log_err(hw->os, "dma_alloc reqbuf failed\n");
+			kfree(sgls);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	for (nremaining = hw->config.n_io; nremaining; nremaining -= n) {
+		if (prereg) {
+			/* Copy the addresses of the SGLs into the local
+			 * sgls[] array; break out if the XRI values are not
+			 * contiguous.
+			 */
+			u32 min = (sgls_per_request < nremaining)
+					? sgls_per_request : nremaining;
+			for (n = 0; n < min; n++) {
+				/* Check that we have contiguous xri values */
+				if (n > 0) {
+					if (hw->io[io_index + n]->indicator !=
+					    hw->io[io_index + n - 1]->indicator
+					    + 1)
+						break;
+				}
+				sgls[n] = hw->io[io_index + n]->sgl;
+			}
+
+			if (!sli_cmd_post_sgl_pages(&hw->sli, cmd,
+						   sizeof(cmd),
+						hw->io[io_index]->indicator,
+						n, sgls, NULL, &reqbuf)) {
+				if (efct_hw_command(hw, cmd, EFCT_CMD_POLL,
+						    NULL, NULL)) {
+					rc = EFCT_HW_RTN_ERROR;
+					efc_log_err(hw->os,
+						     "SGL post failed\n");
+					break;
+				}
+			}
+		} else {
+			n = nremaining;
+		}
+
+		/* Add to tail if successful */
+		for (i = 0; i < n; i++, io_index++) {
+			io = hw->io[io_index];
+			io->state = EFCT_HW_IO_STATE_FREE;
+			INIT_LIST_HEAD(&io->list_entry);
+			list_add_tail(&io->list_entry, &hw->io_free);
+		}
+	}
+
+	if (prereg) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  reqbuf.size, reqbuf.virt, reqbuf.phys);
+		memset(&reqbuf, 0, sizeof(struct efc_dma_s));
+		kfree(sgls);
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup io
+ * @brief Lockless allocate a HW IO object.
+ *
+ * @par Description
+ * Assumes that hw->io_lock is held.
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns a pointer to an object on success, or NULL on failure.
+ */
+static inline struct efct_hw_io_s *
+_efct_hw_io_alloc(struct efct_hw_s *hw)
+{
+	struct efct_hw_io_s	*io = NULL;
+
+	if (!list_empty(&hw->io_free)) {
+		io = list_first_entry(&hw->io_free, struct efct_hw_io_s,
+				      list_entry);
+		list_del(&io->list_entry);
+	}
+	if (io) {
+		INIT_LIST_HEAD(&io->list_entry);
+		INIT_LIST_HEAD(&io->wqe_link);
+		INIT_LIST_HEAD(&io->dnrx_link);
+		list_add_tail(&io->list_entry, &hw->io_inuse);
+		io->state = EFCT_HW_IO_STATE_INUSE;
+		io->abort_reqtag = U32_MAX;
+		kref_init(&io->ref);
+		io->release = efct_hw_io_free_internal;
+	} else {
+		atomic_add_return(1, &hw->io_alloc_failed_count);
+	}
+
+	return io;
+}
+
+/**
+ * @ingroup io
+ * @brief Allocate a HW IO object.
+ *
+ * @par Description
+ * @n @b Note: This function applies to non-port owned XRIs
+ * only.
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns a pointer to an object on success, or NULL on failure.
+ */
+struct efct_hw_io_s *
+efct_hw_io_alloc(struct efct_hw_s *hw)
+{
+	struct efct_hw_io_s	*io = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	io = _efct_hw_io_alloc(hw);
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	return io;
+}
+
+/**
+ * @ingroup io
+ * @brief Move a freed IO to the correct list, depending on the exchange
+ * busy flag and other workarounds.
+ *
+ * @par Description
+ * Note: Assumes that the hw->io_lock is held and the item has been removed
+ * from the busy or wait_free list.
+ *
+ * @param hw Hardware context.
+ * @param io Pointer to the IO object to move.
+ */
+static void
+efct_hw_io_free_move_correct_list(struct efct_hw_s *hw,
+				  struct efct_hw_io_s *io)
+{
+	if (io->xbusy) {
+		/*
+		 * add to wait_free list and wait for XRI_ABORTED CQEs to clean
+		 * up
+		 */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_wait_free);
+		io->state = EFCT_HW_IO_STATE_WAIT_FREE;
+	} else {
+		/* IO not busy, add to free list */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_free);
+		io->state = EFCT_HW_IO_STATE_FREE;
+	}
+}
+
+/**
+ * @ingroup io
+ * @brief Free a HW IO object. Perform cleanup common to
+ * port and host-owned IOs.
+ *
+ * @param hw Hardware context.
+ * @param io Pointer to the HW IO object.
+ */
+static inline void
+efct_hw_io_free_common(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	/* initialize IO fields */
+	efct_hw_init_free_io(io);
+
+	/* Restore default SGL */
+	efct_hw_io_restore_sgl(hw, io);
+}
+
+/**
+ * @ingroup io
+ * @brief Free a previously-allocated HW IO object. Called when
+ * IO refcount goes to zero (host-owned IOs only).
+ *
+ * @param arg Pointer to the HW IO object.
+ */
+void
+efct_hw_io_free_internal(struct kref *arg)
+{
+	unsigned long flags = 0;
+	struct efct_hw_io_s *io =
+			container_of(arg, struct efct_hw_io_s, ref);
+	struct efct_hw_s *hw = io->hw;
+
+	/* perform common cleanup */
+	efct_hw_io_free_common(hw, io);
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+		/* remove from in-use list */
+		if (io->list_entry.next &&
+		    !list_empty(&hw->io_inuse)) {
+			list_del(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+/**
+ * @ingroup io
+ * @brief Free a previously-allocated HW IO object.
+ *
+ * @par Description
+ * @n @b Note: This function applies to port and host owned XRIs.
+ *
+ * @param hw Hardware context.
+ * @param io Pointer to the HW IO object.
+ *
+ * @return Returns a non-zero value if HW IO was freed, 0 if references
+ * on the IO still exist, or a negative value if an error occurred.
+ */
+int
+efct_hw_io_free(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	/* just put refcount */
+	if (refcount_read(&io->ref.refcount) <= 0) {
+		efc_log_err(hw->os,
+			     "Bad parameter: refcount <= 0 xri=%x tag=%x\n",
+			    io->indicator, io->reqtag);
+		return -1;
+	}
+
+	return kref_put(&io->ref, io->release);
+}
+
+/**
+ * @ingroup io
+ * @brief Check if given HW IO is in-use
+ *
+ * @par Description
+ * This function returns TRUE if the given HW IO has been
+ * allocated and is in-use, and FALSE otherwise. It applies to
+ * port and host owned XRIs.
+ *
+ * @param hw Hardware context.
+ * @param io Pointer to the HW IO object.
+ *
+ * @return TRUE if an IO is in use, or FALSE otherwise.
+ */
+u8
+efct_hw_io_inuse(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	return (refcount_read(&io->ref.refcount) > 0);
+}
+
+/**
+ * @brief Find IO given indicator (xri).
+ *
+ * @param hw Hardware context.
+ * @param xri Indicator (XRI) to look for.
+ *
+ * @return Returns io if found, NULL otherwise.
+ */
+struct efct_hw_io_s *
+efct_hw_io_lookup(struct efct_hw_s *hw, u32 xri)
+{
+	u32 ioindex;
+
+	ioindex = xri - hw->sli.extent[SLI_RSRC_XRI].base[0];
+	return hw->io[ioindex];
+}
+
+/**
+ * @brief Issue any pending callbacks for an IO and remove off the timer and
+ * pending lists.
+ *
+ * @param hw Hardware context.
+ * @param io Pointer to the IO to cleanup.
+ */
+static void
+efct_hw_io_cancel_cleanup(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	efct_hw_done_t done = io->done;
+	efct_hw_done_t abort_done = io->abort_done;
+	unsigned long flags = 0;
+
+	/* first check active_wqe list and remove if there */
+	if (io->wqe_link.next)
+		list_del(&io->wqe_link);
+
+	/* Remove from WQ pending list */
+	if (io->wq && io->wq->pending_list.next)
+		list_del(&io->list_entry);
+
+	if (io->done) {
+		void *arg = io->arg;
+
+		io->done = NULL;
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+		done(io, io->rnode, 0, SLI4_FC_WCQE_STATUS_SHUTDOWN, 0, arg);
+		spin_lock_irqsave(&hw->io_lock, flags);
+	}
+
+	if (io->abort_done) {
+		void		*abort_arg = io->abort_arg;
+
+		io->abort_done = NULL;
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+		abort_done(io, io->rnode, 0, SLI4_FC_WCQE_STATUS_SHUTDOWN, 0,
+			   abort_arg);
+		spin_lock_irqsave(&hw->io_lock, flags);
+	}
+}
+
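+/**
+ * @brief Cancel all outstanding (in-use) HW IOs.
+ *
+ * @par Description
+ * Walks the in-use list once, issuing any pending done/abort_done
+ * callbacks with a SHUTDOWN status and freeing IOs that have no
+ * callbacks, then waits up to one second for the in-use list to drain.
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns 0.
+ */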
+static int
+efct_hw_io_cancel(struct efct_hw_s *hw)
+{
+	struct efct_hw_io_s	*io = NULL;
+	struct efct_hw_io_s	*tmp_io = NULL;
+	u32	iters = 100; /* One second limit */
+	unsigned long flags = 0;
+
+	/*
+	 * Manually clean up outstanding IO.
+	 * Only walk through list once: the backend will cleanup any IOs when
+	 * done/abort_done is called.
+	 */
+	spin_lock_irqsave(&hw->io_lock, flags);
+	list_for_each_entry_safe(io, tmp_io, &hw->io_inuse, list_entry) {
+		efct_hw_done_t  done = io->done;
+		efct_hw_done_t  abort_done = io->abort_done;
+
+		efct_hw_io_cancel_cleanup(hw, io);
+
+		/*
+		 * Since this is called in a reset/shutdown case, if there
+		 * is no callback then just free the IO.
+		 *
+		 * Note: A port-owned XRI cannot be on the in-use list. We
+		 *       cannot call efct_hw_io_free() because we already
+		 *       hold the io_lock.
+		 */
+		if (!done &&
+		    !abort_done) {
+			efct_hw_io_free_common(hw, io);
+			list_del(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	}
+
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	/* Give time for the callbacks to complete */
+	do {
+		mdelay(10);
+		iters--;
+	} while (!list_empty(&hw->io_inuse) && iters);
+
+	/* Leave a breadcrumb that cleanup is not yet complete. */
+	if (!list_empty(&hw->io_inuse))
+		efc_log_test(hw->os, "io_inuse list is not empty\n");
+
+	return 0;
+}
+
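+/**
+ * @ingroup io
+ * @brief Register an overflow SGL with a HW IO object.
+ *
+ * @par Description
+ * Not permitted when SGLs are pre-registered.
+ *
+ * @param hw Hardware context.
+ * @param io Previously-allocated HW IO object.
+ * @param sgl Pointer to the overflow SGL memory.
+ * @param sgl_count Number of SGEs in the overflow SGL.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */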
+enum efct_hw_rtn_e
+efct_hw_io_register_sgl(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+			struct efc_dma_s *sgl,
+			u32 sgl_count)
+{
+	if (hw->sli.sgl_pre_registered) {
+		efc_log_err(hw->os,
+			     "can't use temp SGL with pre-registered SGLs\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+	io->ovfl_sgl = sgl;
+	io->ovfl_sgl_count = sgl_count;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
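+/**
+ * @brief Restore the default SGL of a HW IO object and clear any
+ * overflow SGL state.
+ *
+ * @param hw Hardware context.
+ * @param io Pointer to the HW IO object.
+ */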
+static void
+efct_hw_io_restore_sgl(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	/* Restore the default */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+
+	/* Clear the overflow SGL */
+	io->ovfl_sgl = NULL;
+	io->ovfl_sgl_count = 0;
+	io->ovfl_lsp = NULL;
+}
+
+/**
+ * @ingroup io
+ * @brief Initialize the scatter gather list entries of an IO.
+ *
+ * @param hw Hardware context.
+ * @param io Previously-allocated HW IO object.
+ * @param type Type of IO (target read, target response, and so on).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_io_init_sges(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+		     enum efct_hw_io_type_e type)
+{
+	struct sli4_sge_s	*data = NULL;
+	u32	i = 0;
+	u32	skips = 0;
+	u32 sge_flags = 0;
+
+	if (!io) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n", hw, io);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Clear / reset the scatter-gather list */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+	io->first_data_sge = 0;
+
+	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge_s));
+	io->n_sge = 0;
+	io->sge_offset = 0;
+
+	io->type = type;
+
+	data = io->sgl->virt;
+
+	/*
+	 * Some IO types have underlying hardware requirements on the order
+	 * of SGEs. Process all special entries here.
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE:
+#define EFCT_TARGET_WRITE_SKIPS	2
+		skips = EFCT_TARGET_WRITE_SKIPS;
+
+		/* populate host resident XFER_RDY buffer */
+		sge_flags = data->dw2_flags;
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		data->buffer_address_high =
+			cpu_to_le32(upper_32_bits(io->xfer_rdy.phys));
+		data->buffer_address_low  =
+			cpu_to_le32(lower_32_bits(io->xfer_rdy.phys));
+		data->buffer_length = cpu_to_le32(io->xfer_rdy.size);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+
+		skips--;
+
+		io->n_sge = 1;
+		break;
+	case EFCT_HW_IO_TARGET_READ:
+		/*
+		 * For FCP_TSEND64, the first 2 entries are SKIP SGE's
+		 */
+#define EFCT_TARGET_READ_SKIPS	2
+		skips = EFCT_TARGET_READ_SKIPS;
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		/*
+		 * No skips, etc. for FCP_TRSP64
+		 */
+		break;
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Write skip entries
+	 */
+	for (i = 0; i < skips; i++) {
+		sge_flags = data->dw2_flags;
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+	}
+
+	io->n_sge += skips;
+
+	/*
+	 * Set last
+	 */
+	sge_flags = data->dw2_flags;
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup io
+ * @brief Add a T10 PI seed scatter gather list entry.
+ *
+ * @param hw Hardware context.
+ * @param io Previously-allocated HW IO object.
+ * @param dif_info Pointer to T10 DIF fields, or NULL if no DIF.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_io_add_seed_sge(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+			struct efct_hw_dif_info_s *dif_info)
+{
+	struct sli4_sge_s	*data = NULL;
+	struct sli4_diseed_sge_s *dif_seed;
+	u32 sge_flags;
+	u16 dif_flags;
+
+	/* If no dif_info, or dif_oper is disabled, then just return success */
+	if (!dif_info ||
+	    dif_info->dif_oper == EFCT_HW_DIF_OPER_DISABLED)
+		return EFCT_HW_RTN_SUCCESS;
+
+	if (!io) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p dif_info=%p\n", hw,
+			    io, dif_info);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	/* If we are doing T10 DIF add the DIF Seed SGE */
+	memset(data, 0, sizeof(struct sli4_diseed_sge_s));
+	dif_seed = (struct sli4_diseed_sge_s *)data;
+
+	dif_seed->ref_tag_cmp = cpu_to_le32(dif_info->ref_tag_cmp);
+	dif_seed->ref_tag_repl = cpu_to_le32(dif_info->ref_tag_repl);
+	dif_seed->app_tag_repl = cpu_to_le16(dif_info->app_tag_repl);
+
+	dif_flags = 0;
+	if (dif_info->repl_app_tag)
+		dif_flags |= DISEED_SGE_RE;
+
+	if (hw->sli.if_type != SLI4_INTF_IF_TYPE_2) {
+		if (dif_info->disable_app_ref_ffff)
+			dif_flags |= DISEED_SGE_ATRT;
+
+		if (dif_info->disable_app_ffff)
+			dif_flags |= DISEED_SGE_AT;
+	}
+	dif_flags |= SLI4_SGE_TYPE_DISEED << 11;
+
+	if ((io->type == EFCT_HW_IO_TARGET_WRITE) &&
+	    hw->sli.if_type != SLI4_INTF_IF_TYPE_2 &&
+	    dif_info->dif_separate) {
+		dif_flags &= ~SLI4_SGE_TYPE_MASK;
+		dif_flags |= SLI4_SGE_TYPE_SKIP << 11;
+	}
+
+	dif_seed->dw2w1_flags = cpu_to_le16(dif_flags);
+	dif_seed->app_tag_cmp = cpu_to_le16(dif_info->app_tag_cmp);
+
+	dif_flags = 0;
+	dif_flags |= (dif_info->blk_size & DISEED_SGE_BS_MASK);
+	if (dif_info->auto_incr_ref_tag)
+		dif_flags |= DISEED_SGE_AI;
+	if (dif_info->check_app_tag)
+		dif_flags |= DISEED_SGE_ME;
+	if (dif_info->check_ref_tag)
+		dif_flags |= DISEED_SGE_RE;
+	if (dif_info->check_guard)
+		dif_flags |= DISEED_SGE_CE;
+	if (dif_info->repl_ref_tag)
+		dif_flags |= DISEED_SGE_NR;
+
+	switch (dif_info->dif_oper) {
+	case EFCT_HW_SGE_DIFOP_INNODIFOUTCRC:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_NODIF_OUT_CRC);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_NODIF_OUT_CRC);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCRCOUTNODIF:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CRC_OUT_NODIF);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CRC_OUT_NODIF);
+		break;
+	case EFCT_HW_SGE_DIFOP_INNODIFOUTCHKSUM:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_NODIF_OUT_CSUM);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_NODIF_OUT_CSUM);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCHKSUMOUTNODIF:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CSUM_OUT_NODIF);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CSUM_OUT_NODIF);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCRCOUTCRC:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CRC_OUT_CRC);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CRC_OUT_CRC);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCHKSUMOUTCHKSUM:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CSUM_OUT_CSUM);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CSUM_OUT_CSUM);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCRCOUTCHKSUM:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CRC_OUT_CSUM);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CRC_OUT_CSUM);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCHKSUMOUTCRC:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CSUM_OUT_CRC);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CSUM_OUT_CRC);
+		break;
+	case EFCT_HW_SGE_DIFOP_INRAWOUTRAW:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_RAW_OUT_RAW);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_RAW_OUT_RAW);
+		break;
+	default:
+		efc_log_err(hw->os, "unsupported DIF operation %#x\n",
+			     dif_info->dif_oper);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	dif_seed->dw3w1_flags = cpu_to_le16(dif_flags);
+	/*
+	 * Set last, clear previous last
+	 */
+	sge_flags = data->dw2_flags;
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+	if (io->n_sge) {
+		sge_flags = data[-1].dw2_flags;
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	io->n_sge++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
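+/**
+ * @brief Switch a HW IO object to its registered overflow SGL.
+ *
+ * @par Description
+ * The last SGE of the current SGL is copied to the overflow SGL and
+ * replaced with a link (LSP) SGE that chains to the overflow SGL.
+ *
+ * @param hw Hardware context.
+ * @param io Previously-allocated HW IO object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */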
+static enum efct_hw_rtn_e
+efct_hw_io_overflow_sgl(struct efct_hw_s *hw, struct efct_hw_io_s *io)
+{
+	struct sli4_lsp_sge_s *lsp;
+	u32 dw2_flags = 0;
+
+	/* fail if we're already pointing to the overflow SGL */
+	if (io->sgl == io->ovfl_sgl)
+		return EFCT_HW_RTN_ERROR;
+
+	/* fail if we don't have an overflow SGL registered */
+	if (!io->ovfl_sgl)
+		return EFCT_HW_RTN_ERROR;
+
+	/*
+	 * Overflow, we need to put a link SGE in the last location of the
+	 * current SGL, after copying the last SGE to the overflow SGL.
+	 */
+
+	((struct sli4_sge_s *)io->ovfl_sgl->virt)[0] =
+			 ((struct sli4_sge_s *)io->sgl->virt)[io->n_sge - 1];
+
+	lsp = &((struct sli4_lsp_sge_s *)io->sgl->virt)[io->n_sge - 1];
+	memset(lsp, 0, sizeof(*lsp));
+
+	lsp->buffer_address_high =
+		cpu_to_le32(upper_32_bits(io->ovfl_sgl->phys));
+	lsp->buffer_address_low  =
+		cpu_to_le32(lower_32_bits(io->ovfl_sgl->phys));
+	dw2_flags = SLI4_SGE_TYPE_LSP << SLI4_SGE_TYPE_SHIFT;
+	dw2_flags &= ~SLI4_SGE_LAST;
+	lsp->dw2_flags = cpu_to_le32(dw2_flags);
+
+	io->ovfl_lsp = lsp;
+	io->ovfl_lsp->dw3_seglen =
+		cpu_to_le32(sizeof(struct sli4_sge_s) &
+			    SLI4_LSP_SGE_SEGLEN);
+
+	/* Update the current SGL pointer, and n_sgl */
+	io->sgl = io->ovfl_sgl;
+	io->sgl_count = io->ovfl_sgl_count;
+	io->n_sge = 1;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup io
+ * @brief Add a scatter gather list entry to an IO.
+ *
+ * @param hw Hardware context.
+ * @param io Previously-allocated HW IO object.
+ * @param addr Physical address.
+ * @param length Length of memory pointed to by @c addr.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_io_add_sge(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+		   uintptr_t addr, u32 length)
+{
+	struct sli4_sge_s	*data = NULL;
+	u32 sge_flags = 0;
+
+	if (!io || !addr || !length) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p addr=%lx length=%u\n",
+			    hw, io, addr, length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (length && (io->n_sge + 1) > io->sgl_count) {
+		if (efct_hw_io_overflow_sgl(hw, io) != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "SGL full (%d)\n", io->n_sge);
+			return EFCT_HW_RTN_ERROR;
+		}
+	}
+
+	if (length > hw->sli.sge_supported_length) {
+		efc_log_err(hw->os,
+			     "length of SGE %d bigger than allowed %d\n",
+			    length, hw->sli.sge_supported_length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	sge_flags = data->dw2_flags;
+	sge_flags &= ~SLI4_SGE_TYPE_MASK;
+	sge_flags |= SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;
+	sge_flags &= ~SLI4_SGE_DATA_OFFSET_MASK;
+	sge_flags |= SLI4_SGE_DATA_OFFSET_MASK & io->sge_offset;
+
+	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
+	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
+	data->buffer_length = cpu_to_le32(length);
+
+	/*
+	 * Always assume this is the last entry and mark as such.
+	 * If this is not the first entry unset the "last SGE"
+	 * indication for the previous entry
+	 */
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	if (io->n_sge) {
+		sge_flags = data[-1].dw2_flags;
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	/* Set first_data_bde if not previously set */
+	if (io->first_data_sge == 0)
+		io->first_data_sge = io->n_sge;
+
+	io->sge_offset += length;
+	io->n_sge++;
+
+	/* Update the linked segment length (only executed after overflow has
+	 * begun)
+	 */
+	if (io->ovfl_lsp)
+		io->ovfl_lsp->dw3_seglen =
+			cpu_to_le32(io->n_sge * sizeof(struct sli4_sge_s) &
+				    SLI4_LSP_SGE_SEGLEN);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup io
+ * @brief Add a T10 DIF scatter gather list entry to an IO.
+ *
+ * @param hw Hardware context.
+ * @param io Previously-allocated HW IO object.
+ * @param addr DIF physical address.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_io_add_dif_sge(struct efct_hw_s *hw,
+		       struct efct_hw_io_s *io, uintptr_t addr)
+{
+	struct sli4_dif_sge_s	*data = NULL;
+	u32 sge_flags = 0;
+
+	if (!io || !addr) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p addr=%lx\n",
+			    hw, io, addr);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if ((io->n_sge + 1) > hw->config.n_sgl) {
+		if (efct_hw_io_overflow_sgl(hw, io) != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "SGL full (%d)\n", io->n_sge);
+			return EFCT_HW_RTN_ERROR;
+		}
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	sge_flags = data->dw2_flags;
+	sge_flags &= ~SLI4_SGE_TYPE_MASK;
+	sge_flags |= SLI4_SGE_TYPE_DIF << SLI4_SGE_TYPE_SHIFT;
+
+	if ((io->type == EFCT_HW_IO_TARGET_WRITE) &&
+	    hw->sli.if_type != SLI4_INTF_IF_TYPE_2) {
+		sge_flags &= ~SLI4_SGE_TYPE_MASK;
+		sge_flags |= SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT;
+	}
+
+	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
+	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
+
+	/*
+	 * Always assume this is the last entry and mark as such.
+	 * If this is not the first entry unset the "last SGE"
+	 * indication for the previous entry
+	 */
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+	if (io->n_sge) {
+		sge_flags = data[-1].dw2_flags;
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	io->n_sge++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup io
+ * @brief Abort all previously-started IO's.
+ *
+ * @param hw Hardware context.
+ *
+ * @return None.
+ */
+
+void
+efct_hw_io_abort_all(struct efct_hw_s *hw)
+{
+	struct efct_hw_io_s *io_to_abort	= NULL;
+	struct efct_hw_io_s *next_io		= NULL;
+
+	list_for_each_entry_safe(io_to_abort, next_io,
+				 &hw->io_inuse, list_entry) {
+		efct_hw_io_abort(hw, io_to_abort, true, NULL, NULL);
+	}
+}
+
+/**
+ * @ingroup io
+ * @brief Abort a previously-started IO.
+ *
+ * @param hw Hardware context.
+ * @param io_to_abort The IO to abort.
+ * @param send_abts Boolean to have the hardware automatically
+ * generate an ABTS.
+ * @param cb Function call upon completion of the abort (may be NULL).
+ * @param arg Argument to pass to abort completion function.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_io_abort(struct efct_hw_s *hw, struct efct_hw_io_s *io_to_abort,
+		 bool send_abts, void *cb, void *arg)
+{
+	enum sli4_abort_type_e atype = SLI_ABORT_MAX;
+	u32	id = 0, mask = 0;
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_SUCCESS;
+	struct hw_wq_callback_s *wqcb;
+	unsigned long flags = 0;
+
+	if (!io_to_abort) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n",
+			    hw, io_to_abort);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n",
+			     hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* take a reference on IO being aborted */
+	if (kref_get_unless_zero(&io_to_abort->ref) == 0) {
+		/* command no longer active */
+		efc_log_test(hw->os,
+			      "io not active xri=0x%x tag=0x%x\n",
+			     io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/* Must have a valid WQ reference */
+	if (!io_to_abort->wq) {
+		efc_log_test(hw->os, "io_to_abort xri=0x%x not active on WQ\n",
+			      io_to_abort->indicator);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/*
+	 * Validation checks complete; now check to see if already being
+	 * aborted
+	 */
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	if (io_to_abort->abort_in_progress) {
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		efc_log_debug(hw->os,
+			       "io already being aborted xri=0x%x tag=0x%x\n",
+			      io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS;
+	}
+
+	/*
+	 * This IO is not already being aborted. Set flag so we won't try to
+	 * abort it again. After all, we only have one abort_done callback.
+	 */
+	io_to_abort->abort_in_progress = true;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/*
+	 * If we got here, the possibilities are:
+	 * - host owned xri
+	 *	- io_to_abort->wq_index != U32_MAX
+	 *		- submit ABORT_WQE to same WQ
+	 * - port owned xri:
+	 *	- rxri: io_to_abort->wq_index == U32_MAX
+	 *		- submit ABORT_WQE to any WQ
+	 *	- non-rxri
+	 *		- io_to_abort->index != U32_MAX
+	 *			- submit ABORT_WQE to same WQ
+	 *		- io_to_abort->index == U32_MAX
+	 *			- submit ABORT_WQE to any WQ
+	 */
+	io_to_abort->abort_done = cb;
+	io_to_abort->abort_arg  = arg;
+
+	atype = SLI_ABORT_XRI;
+	id = io_to_abort->indicator;
+
+	/* Allocate a request tag for the abort portion of this IO */
+	wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort);
+	if (!wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+	io_to_abort->abort_reqtag = wqcb->instance_index;
+
+	/*
+	 * If the wqe is on the pending list, then set this wqe to be
+	 * aborted when the IO's wqe is removed from the list.
+	 */
+	if (io_to_abort->wq) {
+		spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags);
+		if (io_to_abort->wqe.list_entry.next) {
+			io_to_abort->wqe.abort_wqe_submit_needed = true;
+			io_to_abort->wqe.send_abts = send_abts;
+			io_to_abort->wqe.id = id;
+			io_to_abort->wqe.abort_reqtag =
+						 io_to_abort->abort_reqtag;
+			spin_unlock_irqrestore(&io_to_abort->wq->queue->lock,
+					       flags);
+			return 0;
+		}
+		spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags);
+	}
+
+	if (sli_abort_wqe(&hw->sli, io_to_abort->wqe.wqebuf,
+			  hw->sli.wqe_size, atype, send_abts, id, mask,
+			  io_to_abort->abort_reqtag, SLI4_CQ_DEFAULT)) {
+		efc_log_err(hw->os, "ABORT WQE error\n");
+		io_to_abort->abort_reqtag = U32_MAX;
+		efct_hw_reqtag_free(hw, wqcb);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+		if (!io_to_abort->wq)
+			io_to_abort->wq = efct_hw_queue_next_wq(hw,
+								io_to_abort);
+
+		/* ABORT_WQE does not actually utilize an XRI on the Port,
+		 * therefore, keep xbusy as-is to track the exchange's state,
+		 * not the ABORT_WQE's state
+		 */
+		rc = efct_hw_wq_write(io_to_abort->wq, &io_to_abort->wqe);
+		if (rc > 0)
+			/* non-negative return is success */
+			rc = 0;
+			/*
+			 * can't abort an abort so skip adding to timed wqe
+			 * list
+			 */
+	}
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		spin_lock_irqsave(&hw->io_abort_lock, flags);
+		io_to_abort->abort_in_progress = false;
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+	}
+	return rc;
+}
+
+/**
+ * @brief Initialize the reqtag pool.
+ *
+ * @par Description
+ * The WQ request tag pool is initialized.
+ *
+ * @param hw Pointer to HW object.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_reqtag_init(struct efct_hw_s *hw)
+{
+	if (!hw->wq_reqtag_pool) {
+		hw->wq_reqtag_pool = efct_pool_alloc(hw->os,
+					sizeof(struct hw_wq_callback_s),
+					65536);
+		if (!hw->wq_reqtag_pool) {
+			efc_log_err(hw->os,
+				     "efct_pool_alloc struct hw_wq_callback_s fail\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+	efct_hw_reqtag_reset(hw);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @brief Allocate a WQ request tag.
+ *
+ * Allocate and populate a WQ request tag from the WQ request tag pool.
+ *
+ * @param hw Pointer to HW object.
+ * @param callback Callback function.
+ * @param arg Pointer to callback argument.
+ *
+ * @return Returns pointer to allocated WQ request tag, or NULL if object
+ * cannot be allocated.
+ */
+struct hw_wq_callback_s *
+efct_hw_reqtag_alloc(struct efct_hw_s *hw,
+		     void (*callback)(void *arg, u8 *cqe, int status),
+		     void *arg)
+{
+	struct hw_wq_callback_s *wqcb = NULL;
+
+	if (!callback)
+		return wqcb;
+
+	wqcb = efct_pool_get(hw->wq_reqtag_pool);
+	if (wqcb) {
+		wqcb->callback = callback;
+		wqcb->arg = arg;
+	}
+	return wqcb;
+}
+
+/**
+ * @brief Free a WQ request tag.
+ *
+ * Free the passed in WQ request tag.
+ *
+ * @param hw Pointer to HW object.
+ * @param wqcb Pointer to WQ request tag object to free.
+ *
+ * @return None.
+ */
+void
+efct_hw_reqtag_free(struct efct_hw_s *hw, struct hw_wq_callback_s *wqcb)
+{
+	if (!wqcb->callback)
+		efc_log_err(hw->os, "WQCB is already freed\n");
+
+	wqcb->callback = NULL;
+	wqcb->arg = NULL;
+	efct_pool_put(hw->wq_reqtag_pool, wqcb);
+}
+
+/**
+ * @brief Return WQ request tag by index.
+ *
+ * @par Description
+ * Return pointer to WQ request tag object given an index.
+ *
+ * @param hw Pointer to HW object.
+ * @param instance_index Index of WQ request tag to return.
+ *
+ * @return Pointer to WQ request tag, or NULL.
+ */
+struct hw_wq_callback_s *
+efct_hw_reqtag_get_instance(struct efct_hw_s *hw, u32 instance_index)
+{
+	struct hw_wq_callback_s *wqcb;
+
+	wqcb = efct_pool_get_instance(hw->wq_reqtag_pool, instance_index);
+	if (!wqcb)
+		efc_log_err(hw->os, "wqcb for instance %d is null\n",
+			     instance_index);
+
+	return wqcb;
+}
+
+/**
+ * @brief Reset the WQ request tag pool.
+ *
+ * @par Description
+ * Reset the WQ request tag pool, returning all to the free list.
+ *
+ * @param hw Pointer to HW object.
+ *
+ * @return None.
+ */
+void
+efct_hw_reqtag_reset(struct efct_hw_s *hw)
+{
+	struct hw_wq_callback_s *wqcb;
+	u32 i;
+
+	/* Remove all from freelist */
+	while (efct_pool_get(hw->wq_reqtag_pool))
+		;
+
+	/* Put them all back */
+	for (i = 0;
+	     ((wqcb = efct_pool_get_instance(hw->wq_reqtag_pool, i)) != NULL);
+	     i++) {
+		wqcb->instance_index = i;
+		wqcb->callback = NULL;
+		wqcb->arg = NULL;
+		efct_pool_put(hw->wq_reqtag_pool, wqcb);
+	}
+}
+
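+/**
+ * @brief Program the T10 DIF seed value into the adapter.
+ *
+ * @par Description
+ * Issues a polled COMMON_SET_FEATURES mailbox command carrying the DIF
+ * seed from hw->config.dif_seed.
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */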
+static enum efct_hw_rtn_e
+efct_hw_set_dif_seed(struct efct_hw_s *hw)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_dif_seed_s seed_param;
+
+	memset(&seed_param, 0, sizeof(seed_param));
+	seed_param.seed = cpu_to_le16(hw->config.dif_seed);
+
+	/* send set_features command */
+	if (!sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+					SLI4_SET_FEATURES_DIF_SEED,
+					4,
+					(u32 *)&seed_param)) {
+		rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+		if (rc)
+			efc_log_err(hw->os,
+				     "efct_hw_command returns %d\n", rc);
+		else
+			efc_log_debug(hw->os, "DIF seed set to 0x%x\n",
+				       hw->config.dif_seed);
+	} else {
+		efc_log_err(hw->os,
+			     "sli_cmd_common_set_features failed\n");
+		rc = EFCT_HW_RTN_ERROR;
+	}
+	return rc;
+}
+
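+/**
+ * @brief Watchdog timer callback; rearms the adapter watchdog.
+ *
+ * @param t Pointer to the expired timer.
+ */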
+static void
+efct_hw_watchdog_timer_cb(struct timer_list *t)
+{
+	struct efct_hw_s *hw = from_timer(hw, t, watchdog_timer);
+
+	efct_hw_config_watchdog_timer(hw);
+}
+
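+/**
+ * @brief Completion callback for the watchdog configuration command.
+ *
+ * @par Description
+ * On success, schedules the next heartbeat 500 ms before the timeout
+ * expires, or deletes the timer if the timeout is zero.
+ *
+ * @param hw Hardware context.
+ * @param status Completion status.
+ * @param mqe Pointer to the mailbox command buffer (freed here).
+ * @param arg Callback argument (unused).
+ */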
+static void
+efct_hw_cb_cfg_watchdog(struct efct_hw_s *hw, int status, u8 *mqe,
+			void  *arg)
+{
+	u16 timeout = hw->watchdog_timeout;
+
+	if (status != 0) {
+		efc_log_err(hw->os, "config watchdog timer failed, rc = %d\n",
+			     status);
+	} else {
+		if (timeout != 0) {
+			/*
+			 * Schedule the callback 500 ms before the timeout
+			 * to keep the heartbeat alive.
+			 */
+			timer_setup(&hw->watchdog_timer,
+				    &efct_hw_watchdog_timer_cb, 0);
+
+			mod_timer(&hw->watchdog_timer,
+				  jiffies +
+				  msecs_to_jiffies(timeout * 1000 - 500));
+		} else {
+			del_timer(&hw->watchdog_timer);
+		}
+	}
+
+	kfree(mqe);
+}
+
+/**
+ * @brief Set configuration parameters for watchdog timer feature.
+ *
+ * @param hw Hardware context. The timeout, in seconds, is taken from
+ * hw->watchdog_timeout.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS on success.
+ */
+static enum efct_hw_rtn_e
+efct_hw_config_watchdog_timer(struct efct_hw_s *hw)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+	u8 *buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+
+	if (!buf)
+		return EFCT_HW_RTN_ERROR;
+
+	sli4_cmd_lowlevel_set_watchdog(&hw->sli, buf, SLI4_BMBX_SIZE,
+				       hw->watchdog_timeout);
+	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT, efct_hw_cb_cfg_watchdog,
+			     NULL);
+	if (rc) {
+		kfree(buf);
+		efc_log_err(hw->os, "config watchdog timer failed, rc = %d\n",
+			     rc);
+	}
+	return rc;
+}
+
+/**
+ * @brief Enable the SLI port health check feature.
+ *
+ * @param hw Hardware context.
+ * @param query If 1, query the current enabled/disabled state of the
+ * health check feature.
+ * @param enable If 1, enable the health check; if 0, disable it.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS on success.
+ */
+static enum efct_hw_rtn_e
+efct_hw_config_sli_port_health_check(struct efct_hw_s *hw, u8 query,
+				     u8 enable)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_health_check_s param;
+	u32	health_check_flag = 0;
+
+	memset(&param, 0, sizeof(param));
+
+	if (enable)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_ENABLE;
+
+	if (query)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_QUERY;
+
+	param.health_check_dword = cpu_to_le32(health_check_flag);
+
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK,
+				    sizeof(param),
+				    &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_err(hw->os, "efct_hw_command returns %d\n", rc);
+	else
+		efc_log_test(hw->os, "SLI Port Health Check is enabled\n");
+
+	return rc;
+}
+
+/**
+ * @brief Set the FDT transfer hint feature.
+ *
+ * @param hw Hardware context.
+ * @param fdt_xfer_hint Size, in bytes, at which read requests are segmented.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS on success.
+ */
+static enum efct_hw_rtn_e
+efct_hw_config_set_fdt_xfer_hint(struct efct_hw_s *hw, u32 fdt_xfer_hint)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint_s param;
+
+	memset(&param, 0, sizeof(param));
+	param.fdt_xfer_hint = cpu_to_le32(fdt_xfer_hint);
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    SLI4_SET_FEATURES_SET_FTD_XFER_HINT,
+				    sizeof(param),
+				    &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_warn(hw->os, "set FDT hint %d failed: %d\n",
+			      fdt_xfer_hint, rc);
+	else
+		efc_log_info(hw->os, "Set FTD transfer hint to %d\n",
+			      le32_to_cpu(param.fdt_xfer_hint));
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 161f9001a5c6..d913c0169c44 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1029,5 +1029,51 @@ void efct_hw_rx_free(struct efct_hw_s *hw);
 extern enum efct_hw_rtn_e
 efct_hw_command(struct efct_hw_s *hw, u8 *cmd, u32 opts, void *cb,
 		void *arg);
+struct efct_hw_io_s *efct_hw_io_alloc(struct efct_hw_s *hw);
+int efct_hw_io_free(struct efct_hw_s *hw, struct efct_hw_io_s *io);
+u8 efct_hw_io_inuse(struct efct_hw_s *hw, struct efct_hw_io_s *io);
+extern enum efct_hw_rtn_e
+efct_hw_io_send(struct efct_hw_s *hw, enum efct_hw_io_type_e type,
+		struct efct_hw_io_s *io, u32 len,
+		union efct_hw_io_param_u *iparam,
+		struct efc_remote_node_s *rnode, void *cb, void *arg);
+extern enum efct_hw_rtn_e
+efct_hw_io_register_sgl(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+			struct efc_dma_s *sgl,
+			u32 sgl_count);
+extern enum efct_hw_rtn_e
+efct_hw_io_init_sges(struct efct_hw_s *hw,
+		     struct efct_hw_io_s *io, enum efct_hw_io_type_e type);
+extern enum efct_hw_rtn_e
+efct_hw_io_add_seed_sge(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+			struct efct_hw_dif_info_s *dif_info);
+extern enum efct_hw_rtn_e
+efct_hw_io_add_sge(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+		   uintptr_t addr, u32 length);
+extern enum efct_hw_rtn_e
+efct_hw_io_add_dif_sge(struct efct_hw_s *hw, struct efct_hw_io_s *io,
+		       uintptr_t addr);
+extern enum efct_hw_rtn_e
+efct_hw_io_abort(struct efct_hw_s *hw, struct efct_hw_io_s *io_to_abort,
+		 bool send_abts, void *cb, void *arg);
+extern u32
+efct_hw_io_get_count(struct efct_hw_s *hw,
+		     enum efct_hw_io_count_type_e io_count_type);
+extern struct efct_hw_io_s
+*efct_hw_io_lookup(struct efct_hw_s *hw, u32 indicator);
+void efct_hw_io_abort_all(struct efct_hw_s *hw);
+void efct_hw_io_free_internal(struct kref *arg);
+
+/* HW WQ request tag API */
+enum efct_hw_rtn_e efct_hw_reqtag_init(struct efct_hw_s *hw);
+extern struct hw_wq_callback_s
+*efct_hw_reqtag_alloc(struct efct_hw_s *hw,
+			void (*callback)(void *arg, u8 *cqe,
+					 int status), void *arg);
+extern void
+efct_hw_reqtag_free(struct efct_hw_s *hw, struct hw_wq_callback_s *wqcb);
+extern struct hw_wq_callback_s
+*efct_hw_reqtag_get_instance(struct efct_hw_s *hw, u32 instance_index);
+void efct_hw_reqtag_reset(struct efct_hw_s *hw);
 
 #endif /* __EFCT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 20/32] elx: efct: Hardware queues processing
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (18 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 19/32] elx: efct: Hardware IO and SGL initialization James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 21/32] elx: efct: Unsolicited FC frame processing routines James Smart
                   ` (12 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for EQ, CQ, WQ and RQ processing.
Routines for IO object pool allocation and deallocation.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c        | 628 +++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h        |  36 ++
 drivers/scsi/elx/efct/efct_hw_queues.c | 247 +++++++++++++
 drivers/scsi/elx/efct/efct_io.c        | 288 +++++++++++++++
 drivers/scsi/elx/efct/efct_io.h        | 219 ++++++++++++
 5 files changed, 1418 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index ae0f49e5d751..aab66f5d7908 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -49,6 +49,14 @@ efct_hw_config_sli_port_health_check(struct efct_hw_s *hw, u8 query,
 				     u8 enable);
 static enum efct_hw_rtn_e
 efct_hw_set_dif_seed(struct efct_hw_s *hw);
+static void
+efct_hw_queue_hash_add(struct efct_queue_hash_s *, u16, u16);
+static int
+efct_hw_flush(struct efct_hw_s *);
+static void
+efct_hw_wq_process_io(void *arg, u8 *cqe, int status);
+static void
+efct_hw_wq_process_abort(void *arg, u8 *cqe, int status);
 
 static enum efct_hw_rtn_e
 efct_hw_link_event_init(struct efct_hw_s *hw)
@@ -3273,3 +3281,623 @@ efct_hw_config_set_fdt_xfer_hint(struct efct_hw_s *hw, u32 fdt_xfer_hint)
 
 	return rc;
 }
+
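+/**
+ * @brief Check whether an IO type originates the exchange.
+ *
+ * @param io_type HW IO type.
+ *
+ * @return Returns 1 for CT and ELS request IOs, or 0 otherwise.
+ */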
+static u8 efct_hw_iotype_is_originator(u16 io_type)
+{
+	switch (io_type) {
+	case EFCT_HW_FC_CT:
+	case EFCT_HW_ELS_REQ:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+/**
+ * @brief Update the queue hash with the ID and index.
+ *
+ * @param hash Pointer to hash table.
+ * @param id ID that was created.
+ * @param index The index into the hash object.
+ */
+static void
+efct_hw_queue_hash_add(struct efct_queue_hash_s *hash,
+		       u16 id, u16 index)
+{
+	u32	hash_index = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the number of queues, we
+	 * never have to worry about an infinite loop.
+	 */
+	while (hash[hash_index].in_use)
+		hash_index = (hash_index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/* not used, claim the entry */
+	hash[hash_index].id = id;
+	hash[hash_index].in_use = true;
+	hash[hash_index].index = index;
+}
+
+/**
+ * @brief Find index given queue ID.
+ *
+ * @param hash Pointer to hash table.
+ * @param id ID to find.
+ *
+ * @return Returns the index into the HW cq array or -1 if not found.
+ */
+int
+efct_hw_queue_hash_find(struct efct_queue_hash_s *hash, u16 id)
+{
+	int	rc = -1;
+	int	index = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the maximum number of queues,
+	 * we never have to worry about an infinite loop. We will always find
+	 * an unused entry.
+	 */
+	do {
+		if (hash[index].in_use &&
+		    hash[index].id == id)
+			rc = hash[index].index;
+		else
+			index = (index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+	} while (rc == -1 && hash[index].in_use);
+
+	return rc;
+}
+
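+/**
+ * @ingroup interrupt
+ * @brief Process the EQ associated with an interrupt vector.
+ *
+ * @param hw Hardware context.
+ * @param vector Interrupt vector / EQ index.
+ * @param max_isr_time_msec Maximum time in msec to stay in this function.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */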
+int
+efct_hw_process(struct efct_hw_s *hw, u32 vector,
+		u32 max_isr_time_msec)
+{
+	struct hw_eq_s *eq;
+	int rc = 0;
+
+	/*
+	 * The caller should disable interrupts if they wish to prevent us
+	 * from processing during a shutdown. The following states are defined:
+	 *   EFCT_HW_STATE_UNINITIALIZED - No queues allocated
+	 *   EFCT_HW_STATE_QUEUES_ALLOCATED - The state after a chip reset,
+	 *                                    queues are cleared.
+	 *   EFCT_HW_STATE_ACTIVE - Chip and queues are operational
+	 *   EFCT_HW_STATE_RESET_IN_PROGRESS - reset, we still want completions
+	 *   EFCT_HW_STATE_TEARDOWN_IN_PROGRESS - We still want mailbox
+	 *                                        completions.
+	 */
+	if (hw->state == EFCT_HW_STATE_UNINITIALIZED)
+		return 0;
+
+	/* Get pointer to struct hw_eq_s */
+	eq = hw->hw_eq[vector];
+	if (!eq)
+		return 0;
+
+	eq->use_count++;
+
+	rc = efct_hw_eq_process(hw, eq, max_isr_time_msec);
+
+	return rc;
+}
+
+/**
+ * @ingroup interrupt
+ * @brief Process events associated with an EQ.
+ *
+ * @par Description
+ * Loop termination:
+ * @n @n Without a mechanism to terminate the completion processing loop, it
+ * is possible under some workload conditions for the loop to never terminate
+ * (or at least take longer than the OS is happy to have an interrupt handler
+ * or kernel thread context hold a CPU without yielding).
+ * @n @n The approach taken here is to periodically check how much time
+ * we have been in this
+ * processing loop, and if we exceed a predetermined time (multiple seconds),
+ * the loop is terminated, and efct_hw_process() returns.
+ *
+ * @param hw Hardware context.
+ * @param eq Pointer to HW EQ object.
+ * @param max_isr_time_msec Maximum time in msec to stay in this function.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+efct_hw_eq_process(struct efct_hw_s *hw, struct hw_eq_s *eq,
+		   u32 max_isr_time_msec)
+{
+	u8		eqe[sizeof(struct sli4_eqe_s)] = { 0 };
+	u32	tcheck_count;
+	time_t		tstart;
+	time_t		telapsed;
+	bool		done = false;
+
+	tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!done && !sli_eq_read(&hw->sli, eq->queue, eqe)) {
+		u16	cq_id = 0;
+		int		rc;
+
+		rc = sli_eq_parse(&hw->sli, eqe, &cq_id);
+		if (unlikely(rc)) {
+			if (rc > 0) {
+				u32 i;
+
+				/*
+				 * Received a sentinel EQE indicating the
+				 * EQ is full. Process all CQs
+				 */
+				for (i = 0; i < hw->cq_count; i++)
+					efct_hw_cq_process(hw, hw->hw_cq[i]);
+				continue;
+			} else {
+				return rc;
+			}
+		} else {
+			int index;
+
+			index  = efct_hw_queue_hash_find(hw->cq_hash, cq_id);
+
+			if (likely(index >= 0))
+				efct_hw_cq_process(hw, hw->hw_cq[index]);
+			else
+				efc_log_err(hw->os, "bad CQ_ID %#06x\n",
+					     cq_id);
+		}
+
+		if (eq->queue->n_posted > eq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, eq->queue, false);
+
+		if (tcheck_count && (--tcheck_count == 0)) {
+			tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+			telapsed = jiffies_to_msecs(jiffies) - tstart;
+			if (telapsed >= max_isr_time_msec)
+				done = true;
+		}
+	}
+	sli_queue_eq_arm(&hw->sli, eq->queue, true);
+
+	return 0;
+}
+
+/**
+ * @brief Process entries on the given completion queue.
+ *
+ * @param hw Hardware context.
+ * @param cq Pointer to the HW completion queue object.
+ *
+ * @return None.
+ */
+void
+efct_hw_cq_process(struct efct_hw_s *hw, struct hw_cq_s *cq)
+{
+	u8		cqe[sizeof(struct sli4_mcqe_s)];
+	u16	rid = U16_MAX;
+	enum sli4_qentry_e	ctype;		/* completion type */
+	int		status;
+	u32	n_processed = 0;
+	u32	tstart, telapsed;
+
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!sli_cq_read(&hw->sli, cq->queue, cqe)) {
+		status = sli_cq_parse(&hw->sli, cq->queue,
+				      cqe, &ctype, &rid);
+		/*
+		 * The sign of status is significant. If status is:
+		 * == 0 : call completed correctly and
+		 * the CQE indicated success
+		 * > 0 : call completed correctly and
+		 * the CQE indicated an error
+		 * < 0 : call failed and no information is available about the
+		 * CQE
+		 */
+		if (status < 0) {
+			if (status == -2)
+				/*
+				 * Notification that an entry was consumed,
+				 * but not completed
+				 */
+				continue;
+
+			break;
+		}
+
+		switch (ctype) {
+		case SLI_QENTRY_ASYNC:
+			sli_cqe_async(&hw->sli, cqe);
+			break;
+		case SLI_QENTRY_MQ:
+			/*
+			 * Process MQ entry. Note there is no way to determine
+			 * the MQ_ID from the completion entry.
+			 */
+			efct_hw_mq_process(hw, status, hw->mq);
+			break;
+		case SLI_QENTRY_WQ:
+			efct_hw_wq_process(hw, cq, cqe, status, rid);
+			break;
+		case SLI_QENTRY_WQ_RELEASE: {
+			u32 wq_id = rid;
+			int index;
+			struct hw_wq_s *wq = NULL;
+
+			index = efct_hw_queue_hash_find(hw->wq_hash, wq_id);
+
+			if (likely(index >= 0)) {
+				wq = hw->hw_wq[index];
+			} else {
+				efc_log_err(hw->os, "bad WQ_ID %#06x\n", wq_id);
+				break;
+			}
+			/* Submit any HW IOs that are on the WQ pending list */
+			hw_wq_submit_pending(wq, wq->wqec_set_count);
+
+			break;
+		}
+
+		case SLI_QENTRY_RQ:
+			efct_hw_rqpair_process_rq(hw, cq, cqe);
+			break;
+		case SLI_QENTRY_XABT: {
+			efct_hw_xabt_process(hw, cq, cqe, rid);
+			break;
+		}
+		default:
+			efc_log_test(hw->os,
+				      "unhandled ctype=%#x rid=%#x\n",
+				     ctype, rid);
+			break;
+		}
+
+		n_processed++;
+		if (n_processed == cq->queue->proc_limit)
+			break;
+
+		if (cq->queue->n_posted >= cq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, cq->queue, false);
+	}
+
+	sli_queue_arm(&hw->sli, cq->queue, true);
+
+	if (n_processed > cq->queue->max_num_processed)
+		cq->queue->max_num_processed = n_processed;
+	telapsed = jiffies_to_msecs(jiffies) - tstart;
+	if (telapsed > cq->queue->max_process_time)
+		cq->queue->max_process_time = telapsed;
+}
+
+/**
+ * @brief Process WQ completion queue entries.
+ *
+ * @param hw Hardware context.
+ * @param cq Pointer to the HW completion queue object.
+ * @param cqe Pointer to WQ completion queue.
+ * @param status Completion status.
+ * @param rid Resource ID (IO tag).
+ *
+ * @return none
+ */
+void
+efct_hw_wq_process(struct efct_hw_s *hw, struct hw_cq_s *cq,
+		   u8 *cqe, int status, u16 rid)
+{
+	struct hw_wq_callback_s *wqcb;
+
+	if (rid == EFCT_HW_REQUE_XRI_REGTAG) {
+		if (status)
+			efc_log_err(hw->os, "reque xri failed, status = %d\n",
+				     status);
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, rid);
+	if (!wqcb) {
+		efc_log_err(hw->os, "invalid request tag: x%x\n", rid);
+		return;
+	}
+
+	if (!wqcb->callback) {
+		efc_log_err(hw->os, "wqcb callback is NULL\n");
+		return;
+	}
+
+	(*wqcb->callback)(wqcb->arg, cqe, status);
+}
+
+/**
+ * @brief Process WQ completions for IO requests
+ *
+ * @param arg Generic callback argument
+ * @param cqe Pointer to completion queue entry
+ * @param status Completion status
+ *
+ * @par Description
+ * Note: Regarding io->reqtag, the reqtag is assigned once when HW IOs are
+ * initialized in efct_hw_setup_io(), and doesn't need to be returned to the
+ * hw->wq_reqtag_pool.
+ *
+ * @return None.
+ */
+static void
+efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io_s *io = arg;
+	struct efct_hw_s *hw = io->hw;
+	struct sli4_fc_wcqe_s *wcqe = (void *)cqe;
+	u32	len = 0;
+	u32 ext = 0;
+
+	efct_hw_remove_io_timed_wqe(hw, io);
+
+	/* clear xbusy flag if WCQE[XB] is clear */
+	if (io->xbusy && (wcqe->flags & SLI4_WCQE_XB) == 0)
+		io->xbusy = false;
+
+	/* get extended CQE status */
+	switch (io->type) {
+	case EFCT_HW_BLS_ACC:
+	case EFCT_HW_BLS_ACC_SID:
+		break;
+	case EFCT_HW_ELS_REQ:
+		sli_fc_els_did(&hw->sli, cqe, &ext);
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_ELS_RSP:
+	case EFCT_HW_ELS_RSP_SID:
+	case EFCT_HW_FC_CT_RSP:
+		break;
+	case EFCT_HW_FC_CT:
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_WRITE:
+		len = sli_fc_io_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_READ:
+		len = sli_fc_io_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		break;
+	case EFCT_HW_IO_DNRX_REQUEUE:
+		/* release the count for re-posting the buffer */
+		/* efct_hw_io_free(hw, io); */
+		break;
+	default:
+		efc_log_test(hw->os, "unhandled io type %#x for XRI 0x%x\n",
+			      io->type, io->indicator);
+		break;
+	}
+	if (status) {
+		ext = sli_fc_ext_status(&hw->sli, cqe);
+		/*
+		 * If we're not an originator IO, and XB is set, then issue
+		 * abort for the IO from within the HW
+		 */
+		if ((!efct_hw_iotype_is_originator(io->type)) &&
+		    wcqe->flags & SLI4_WCQE_XB) {
+			enum efct_hw_rtn_e rc;
+
+			efc_log_debug(hw->os, "aborting xri=%#x tag=%#x\n",
+				       io->indicator, io->reqtag);
+
+			/*
+			 * Because targets may send a response when the IO
+			 * completes using the same XRI, we must wait for the
+			 * XRI_ABORTED CQE to issue the IO callback
+			 */
+			rc = efct_hw_io_abort(hw, io, false, NULL, NULL);
+			if (rc == EFCT_HW_RTN_SUCCESS) {
+				/*
+				 * latch status to return after abort is
+				 * complete
+				 */
+				io->status_saved = true;
+				io->saved_status = status;
+				io->saved_ext = ext;
+				io->saved_len = len;
+				goto exit_efct_hw_wq_process_io;
+			} else if (rc == EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+				/*
+				 * Already being aborted by someone else (ABTS
+				 * perhaps). Just fall thru and return original
+				 * error.
+				 */
+				efc_log_debug(hw->os, "%s%#x tag=%#x\n",
+					       "abort in progress xri=",
+					      io->indicator, io->reqtag);
+
+			} else {
+				/* Failed to abort for some other reason, log
+				 * error
+				 */
+				efc_log_test(hw->os, "%s%#x tag=%#x rc=%d\n",
+					      "Failed to abort xri=",
+					     io->indicator, io->reqtag, rc);
+			}
+		}
+	}
+
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+		void *arg = io->arg;
+
+		io->done = NULL;
+
+		if (io->status_saved) {
+			/* use latched status if exists */
+			status = io->saved_status;
+			len = io->saved_len;
+			ext = io->saved_ext;
+			io->status_saved = false;
+		}
+
+		/* Restore default SGL */
+		efct_hw_io_restore_sgl(hw, io);
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+exit_efct_hw_wq_process_io:
+	return;
+}
+
+/**
+ * @brief Process WQ completions for abort requests.
+ *
+ * @param arg Generic callback argument.
+ * @param cqe Pointer to completion queue entry.
+ * @param status Completion status.
+ *
+ * @return None.
+ */
+static void
+efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io_s *io = arg;
+	struct efct_hw_s *hw = io->hw;
+	u32 ext = 0;
+	u32 len = 0;
+	struct hw_wq_callback_s *wqcb;
+	unsigned long flags = 0;
+
+	/*
+	 * For IOs that were aborted internally, we may need to issue the
+	 * callback here depending on whether an XRI_ABORTED CQE is expected
+	 * or not. If the status is Local Reject/No XRI, then issue the
+	 * callback now.
+	 */
+	ext = sli_fc_ext_status(&hw->sli, cqe);
+	if (status == SLI4_FC_WCQE_STATUS_LOCAL_REJECT &&
+	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI &&
+		io->done) {
+		efct_hw_done_t done = io->done;
+		void		*arg = io->arg;
+
+		io->done = NULL;
+
+		/*
+		 * Use the latched status, as it is always saved for an
+		 * internal abort. Note: we won't have both a done and an
+		 * abort_done function, so don't worry about clobbering the
+		 * len, status and ext fields.
+		 */
+		status = io->saved_status;
+		len = io->saved_len;
+		ext = io->saved_ext;
+		io->status_saved = false;
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+	if (io->abort_done) {
+		efct_hw_done_t done = io->abort_done;
+		void *arg = io->abort_arg;
+
+		io->abort_done = NULL;
+
+		done(io, io->rnode, len, status, ext, arg);
+	}
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	/* clear abort bit to indicate abort is complete */
+	io->abort_in_progress = false;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/* Free the WQ callback */
+	if (io->abort_reqtag == U32_MAX) {
+		efc_log_err(hw->os, "HW IO already freed\n");
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, io->abort_reqtag);
+	efct_hw_reqtag_free(hw, wqcb);
+
+	/*
+	 * Call efct_hw_io_free() because this releases the WQ reservation as
+	 * well as doing the refcount put. Don't duplicate the code here.
+	 */
+	(void)efct_hw_io_free(hw, io);
+}
+
+/**
+ * @brief Process XABT completions
+ *
+ * @param hw Hardware context.
+ * @param cq Pointer to the HW completion queue object.
+ * @param cqe Pointer to WQ completion queue.
+ * @param rid Resource ID (IO tag).
+ *
+ *
+ * @return None.
+ */
+void
+efct_hw_xabt_process(struct efct_hw_s *hw, struct hw_cq_s *cq,
+		     u8 *cqe, u16 rid)
+{
+	/* search IOs wait free list */
+	struct efct_hw_io_s *io = NULL;
+	unsigned long flags = 0;
+
+	io = efct_hw_io_lookup(hw, rid);
+	if (!io) {
+		/* IO lookup failure should never happen */
+		efc_log_err(hw->os,
+			     "Error: xabt io lookup failed rid=%#x\n", rid);
+		return;
+	}
+
+	if (!io->xbusy)
+		efc_log_debug(hw->os, "xabt io not busy rid=%#x\n", rid);
+	else
+		/* mark IO as no longer busy */
+		io->xbusy = false;
+
+	/*
+	 * For IOs that were aborted internally, we need to issue any pending
+	 * callback here.
+	 */
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+		void		*arg = io->arg;
+
+		/*
+		 * Use latched status as this is always saved for an internal
+		 * abort
+		 */
+		int status = io->saved_status;
+		u32 len = io->saved_len;
+		u32 ext = io->saved_ext;
+
+		io->done = NULL;
+		io->status_saved = false;
+
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	if (io->state == EFCT_HW_IO_STATE_INUSE ||
+	    io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+		/* if on wait_free list, caller has already freed IO;
+		 * remove from wait_free list and add to free list.
+		 * if on in-use list, already marked as no longer busy;
+		 * just leave there and wait for caller to free.
+		 */
+		if (io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+			io->state = EFCT_HW_IO_STATE_FREE;
+			list_del(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+static int
+efct_hw_flush(struct efct_hw_s *hw)
+{
+	u32	i = 0;
+
+	/* Process any remaining completions */
+	for (i = 0; i < hw->eq_count; i++)
+		efct_hw_process(hw, i, ~0);
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index d913c0169c44..8a487df2338d 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1076,4 +1076,40 @@ extern struct hw_wq_callback_s
 *efct_hw_reqtag_get_instance(struct efct_hw_s *hw, u32 instance_index);
 void efct_hw_reqtag_reset(struct efct_hw_s *hw);
 
+/* RQ completion handlers for RQ pair mode */
+extern int
+efct_hw_rqpair_process_rq(struct efct_hw_s *hw,
+			  struct hw_cq_s *cq, u8 *cqe);
+extern enum efct_hw_rtn_e
+efct_hw_rqpair_sequence_free(struct efct_hw_s *hw,
+			     struct efc_hw_sequence_s *seq);
+static inline void
+efct_hw_sequence_copy(struct efc_hw_sequence_s *dst,
+		      struct efc_hw_sequence_s *src)
+{
+	/* Copy src to dst, then zero out the linked list link */
+	*dst = *src;
+}
+
+static inline enum efct_hw_rtn_e
+efct_hw_sequence_free(struct efct_hw_s *hw, struct efc_hw_sequence_s *seq)
+{
+	/* Only RQ pair mode is supported */
+	return efct_hw_rqpair_sequence_free(hw, seq);
+}
+extern int
+efct_hw_eq_process(struct efct_hw_s *hw, struct hw_eq_s *eq,
+		   u32 max_isr_time_msec);
+void efct_hw_cq_process(struct efct_hw_s *hw, struct hw_cq_s *cq);
+extern void
+efct_hw_wq_process(struct efct_hw_s *hw, struct hw_cq_s *cq,
+		   u8 *cqe, int status, u16 rid);
+extern void
+efct_hw_xabt_process(struct efct_hw_s *hw, struct hw_cq_s *cq,
+		     u8 *cqe, u16 rid);
+extern int
+efct_hw_process(struct efct_hw_s *hw, u32 vector, u32 max_isr_time_msec);
+extern int
+efct_hw_queue_hash_find(struct efct_queue_hash_s *hash, u16 id);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
index 5196aa75553c..97d64e225f43 100644
--- a/drivers/scsi/elx/efct/efct_hw_queues.c
+++ b/drivers/scsi/elx/efct/efct_hw_queues.c
@@ -1715,3 +1715,250 @@ efct_hw_qtop_free(struct efct_hw_qtop_s *qtop)
 		kfree(qtop);
 	}
 }
+
+/**
+ * @brief Process receive queue completions for RQ Pair mode.
+ *
+ * @par Description
+ * RQ completions are processed. In RQ pair mode, a single header and single
+ * payload buffer are received, and passed to the function that has registered
+ * for unsolicited callbacks.
+ *
+ * @param hw Hardware context.
+ * @param cq Pointer to HW completion queue.
+ * @param cqe Completion queue entry.
+ *
+ * @return Returns 0 for success, or a negative error code value for failure.
+ */
+int
+efct_hw_rqpair_process_rq(struct efct_hw_s *hw, struct hw_cq_s *cq,
+			  u8 *cqe)
+{
+	u16 rq_id;
+	u32 index;
+	int rqindex;
+	int	 rq_status;
+	u32 h_len;
+	u32 p_len;
+	struct efc_hw_sequence_s *seq;
+	struct hw_rq_s *rq;
+
+	rq_status = sli_fc_rqe_rqid_and_index(&hw->sli, cqe,
+					      &rq_id, &index);
+	if (rq_status != 0) {
+		switch (rq_status) {
+		case SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED:
+		case SLI4_FC_ASYNC_RQ_DMA_FAILURE:
+			/* just get RQ buffer then return to chip */
+			rqindex = efct_hw_rqpair_find(hw, rq_id);
+			if (rqindex < 0) {
+				efc_log_test(hw->os,
+					      "status=%#x: lookup fail id=%#x\n",
+					     rq_status, rq_id);
+				break;
+			}
+
+			/* get RQ buffer */
+			seq = efct_hw_rqpair_get(hw, rqindex, index);
+
+			/* return to chip */
+			if (efct_hw_rqpair_sequence_free(hw, seq)) {
+				efc_log_test(hw->os,
+					     "status=%#x, failed to return buffer to RQ\n",
+					     rq_status);
+				break;
+			}
+			break;
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED:
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC:
+			/*
+			 * Since the RQ buffers were not consumed, they
+			 * cannot be returned to the chip.
+			 */
+			efc_log_debug(hw->os, "Warning: RCQE status=%#x\n",
+				      rq_status);
+			/* fall through */
+		default:
+			break;
+		}
+		return -1;
+	}
+
+	rqindex = efct_hw_rqpair_find(hw, rq_id);
+	if (rqindex < 0) {
+		efc_log_test(hw->os, "Error: rq_id lookup failed for id=%#x\n",
+			      rq_id);
+		return -1;
+	}
+
+	rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	rq->use_count++;
+
+	seq = efct_hw_rqpair_get(hw, rqindex, index);
+	if (WARN_ON(!seq))
+		return -1;
+
+	seq->hw = hw;
+	seq->auto_xrdy = 0;
+	seq->out_of_xris = 0;
+	seq->xri = 0;
+	seq->hio = NULL;
+
+	sli_fc_rqe_length(&hw->sli, cqe, &h_len, &p_len);
+	seq->header->dma.len = h_len;
+	seq->payload->dma.len = p_len;
+	seq->fcfi = sli_fc_rqe_fcfi(&hw->sli, cqe);
+	seq->hw_priv = cq->eq;
+
+	efct_unsolicited_cb(hw->os, seq);
+
+	return 0;
+}
+
+/**
+ * @brief Return pointer to RQ buffer entry.
+ *
+ * @par Description
+ * Returns a pointer to the RQ buffer entry given by @c rqindex and @c bufindex.
+ *
+ * @param hw Hardware context.
+ * @param rqindex Index of the RQ that is being processed.
+ * @param bufindex Index into the RQ that is being processed.
+ *
+ * @return Pointer to the sequence structure, or NULL otherwise.
+ */
+static struct efc_hw_sequence_s *
+efct_hw_rqpair_get(struct efct_hw_s *hw, u16 rqindex, u16 bufindex)
+{
+	struct sli4_queue_s *rq_hdr = &hw->rq[rqindex];
+	struct efc_hw_sequence_s *seq = NULL;
+	struct hw_rq_s *rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	unsigned long flags = 0;
+
+	if (bufindex >= rq_hdr->length) {
+		efc_log_err(hw->os,
+				"RQidx %d bufidx %d exceed ring len %d for id %d\n",
+				rqindex, bufindex, rq_hdr->length, rq_hdr->id);
+		return NULL;
+	}
+
+	/* rq_hdr lock also covers rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	seq = rq->rq_tracker[bufindex];
+	rq->rq_tracker[bufindex] = NULL;
+
+	if (!seq) {
+		efc_log_err(hw->os,
+			     "RQbuf NULL, rqidx %d, bufidx %d, cur q idx = %d\n",
+			     rqindex, bufindex, rq_hdr->index);
+	}
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return seq;
+}
+
+/**
+ * @brief Post an RQ buffer to a queue and update the verification structures
+ *
+ * @param hw Hardware context.
+ * @param seq Pointer to sequence object.
+ *
+ * @return Returns 0 on success, or a non-zero value otherwise.
+ */
+static int
+efct_hw_rqpair_put(struct efct_hw_s *hw, struct efc_hw_sequence_s *seq)
+{
+	struct sli4_queue_s *rq_hdr = &hw->rq[seq->header->rqindex];
+	struct sli4_queue_s *rq_payload = &hw->rq[seq->payload->rqindex];
+	u32 hw_rq_index = hw->hw_rq_lookup[seq->header->rqindex];
+	struct hw_rq_s *rq = hw->hw_rq[hw_rq_index];
+	u32     phys_hdr[2];
+	u32     phys_payload[2];
+	int      qindex_hdr;
+	int      qindex_payload;
+	unsigned long flags = 0;
+
+	/* Update the RQ verification lookup tables */
+	phys_hdr[0] = upper_32_bits(seq->header->dma.phys);
+	phys_hdr[1] = lower_32_bits(seq->header->dma.phys);
+	phys_payload[0] = upper_32_bits(seq->payload->dma.phys);
+	phys_payload[1] = lower_32_bits(seq->payload->dma.phys);
+
+	/* rq_hdr lock also covers payload / header->rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	/*
+	 * Note: The header must be posted last for buffer pair mode because
+	 *       posting on the header queue posts the payload queue as well.
+	 *       We do not ring the payload queue independently in RQ pair mode.
+	 */
+	qindex_payload = sli_rq_write(&hw->sli, rq_payload,
+				      (void *)phys_payload);
+	qindex_hdr = sli_rq_write(&hw->sli, rq_hdr, (void *)phys_hdr);
+	if (qindex_hdr < 0 ||
+	    qindex_payload < 0) {
+		efc_log_err(hw->os, "RQ_ID=%#x write failed\n", rq_hdr->id);
+		spin_unlock_irqrestore(&rq_hdr->lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* ensure the indexes are the same */
+	WARN_ON(qindex_hdr != qindex_payload);
+
+	/* Update the lookup table */
+	if (!rq->rq_tracker[qindex_hdr]) {
+		rq->rq_tracker[qindex_hdr] = seq;
+	} else {
+		efc_log_test(hw->os,
+			      "expected rq_tracker[%d][%d] buffer to be NULL\n",
+			     hw_rq_index, qindex_hdr);
+	}
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @brief Return RQ buffers (while in RQ pair mode).
+ *
+ * @par Description
+ * The header and payload buffers are returned to the Receive Queue.
+ *
+ * @param hw Hardware context.
+ * @param seq Header/payload sequence buffers.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS on success, or an error code value on
+ * failure.
+ */
+
+enum efct_hw_rtn_e
+efct_hw_rqpair_sequence_free(struct efct_hw_s *hw,
+			     struct efc_hw_sequence_s *seq)
+{
+	enum efct_hw_rtn_e   rc = EFCT_HW_RTN_SUCCESS;
+
+	/*
+	 * Post the data buffer first. Because in RQ pair mode, ringing the
+	 * doorbell of the header ring will post the data buffer as well.
+	 */
+	if (efct_hw_rqpair_put(hw, seq)) {
+		efc_log_err(hw->os, "error writing buffers\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+/**
+ * @brief Find the RQ index of RQ_ID.
+ *
+ * @param hw Hardware context.
+ * @param rq_id RQ ID to find.
+ *
+ * @return Returns the RQ index, or -1 if not found
+ */
+static inline int
+efct_hw_rqpair_find(struct efct_hw_s *hw, u16 rq_id)
+{
+	return efct_hw_queue_hash_find(hw->rq_hash, rq_id);
+}
diff --git a/drivers/scsi/elx/efct/efct_io.c b/drivers/scsi/elx/efct/efct_io.c
new file mode 100644
index 000000000000..f61ee0fdd616
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.c
@@ -0,0 +1,288 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_utils.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+
+/**
+ * @brief IO pool.
+ *
+ * Structure encapsulating a pool of IO objects.
+ *
+ */
+
+struct efct_io_pool_s {
+	struct efct_s *efct;			/* Pointer to device object */
+	spinlock_t lock;		/* IO pool lock */
+	u32 io_num_ios;		/* Total IOs allocated */
+	struct efct_pool_s *pool;
+};
+
+/**
+ * @brief Create a pool of IO objects.
+ *
+ * @par Description
+ * This function allocates memory in fixed-size chunks called "slabs". It
+ * calculates the number of IO objects that fit within each slab and the
+ * number of slabs required to satisfy the number of IOs requested. Each
+ * slab is allocated, and every IO object within it is added to the free
+ * list. Individual command, response, and SGL DMA buffers are allocated
+ * for each IO.
+ *
+ *           "Slabs"
+ *      +----------------+
+ *      |                |
+ *   +----------------+  |
+ *   |    IO          |  |
+ *   +----------------+  |
+ *   |    ...         |  |
+ *   +----------------+__+
+ *   |    IO          |
+ *   +----------------+
+ *
+ * @param efct Driver instance's software context.
+ * @param num_io Number of IO contexts to allocate.
+ * @param num_sgl Number of SGL entries to allocate for each IO.
+ *
+ * @return Returns a pointer to a new efct_io_pool_s on success,
+ *         or NULL on failure.
+ */
+
+struct efct_io_pool_s *
+efct_io_pool_create(struct efct_s *efct, u32 num_io, u32 num_sgl)
+{
+	u32 i = 0;
+	struct efct_io_pool_s *io_pool;
+
+	/* Allocate the IO pool */
+	io_pool = kzalloc(sizeof(*io_pool), GFP_KERNEL);
+	if (!io_pool)
+		return NULL;
+
+	io_pool->efct = efct;
+	io_pool->io_num_ios = num_io;
+
+	/* initialize IO pool lock */
+	spin_lock_init(&io_pool->lock);
+
+	io_pool->pool = efct_pool_alloc(efct, sizeof(struct efct_io_s),
+					io_pool->io_num_ios);
+	if (!io_pool->pool) {
+		kfree(io_pool);
+		return NULL;
+	}
+
+	for (i = 0; i < io_pool->io_num_ios; i++) {
+		struct efct_io_s *io = efct_pool_get_instance(io_pool->pool, i);
+
+		io->tag = i;
+		io->instance_index = i;
+		io->efct = efct;
+
+		/* Allocate a response buffer */
+		io->rspbuf.size = SCSI_RSP_BUF_LENGTH;
+		io->rspbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						     io->rspbuf.size,
+						     &io->rspbuf.phys, GFP_DMA);
+		if (!io->rspbuf.virt) {
+			efc_log_err(efct, "dma_alloc rspbuf failed\n");
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		/* Allocate SGL */
+		io->sgl = kzalloc(sizeof(*io->sgl) * num_sgl, GFP_ATOMIC);
+		if (!io->sgl) {
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		io->sgl_allocated = num_sgl;
+		io->sgl_count = 0;
+
+		/* Make IO backend call to initialize IO */
+		efct_scsi_tgt_io_init(io);
+	}
+
+	return io_pool;
+}
+
+/**
+ * @brief Free IO objects pool
+ *
+ * @par Description
+ * The pool of IO objects is freed.
+ *
+ * @param io_pool Pointer to IO pool object.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_io_pool_free(struct efct_io_pool_s *io_pool)
+{
+	struct efct_s *efct;
+	u32 i;
+	struct efct_io_s *io;
+
+	if (io_pool) {
+		efct = io_pool->efct;
+
+		for (i = 0; i < io_pool->io_num_ios; i++) {
+			io = efct_pool_get_instance(io_pool->pool, i);
+			if (!io)
+				continue;
+
+			efct_scsi_tgt_io_exit(io);
+			kfree(io->sgl);
+			dma_free_coherent(&efct->pcidev->dev,
+					  io->cmdbuf.size, io->cmdbuf.virt,
+					  io->cmdbuf.phys);
+			memset(&io->cmdbuf, 0, sizeof(struct efc_dma_s));
+			dma_free_coherent(&efct->pcidev->dev,
+					  io->rspbuf.size, io->rspbuf.virt,
+					  io->rspbuf.phys);
+			memset(&io->rspbuf, 0, sizeof(struct efc_dma_s));
+		}
+
+		if (io_pool->pool)
+			efct_pool_free(io_pool->pool);
+
+		kfree(io_pool);
+		efct->xport->io_pool = NULL;
+	}
+
+	return 0;
+}
+
+u32 efct_io_pool_allocated(struct efct_io_pool_s *io_pool)
+{
+	return io_pool->io_num_ios;
+}
+
+/**
+ * @ingroup io_alloc
+ * @brief Allocate an object used to track an IO.
+ *
+ * @param io_pool Pointer to the IO pool.
+ *
+ * @return Returns the pointer to a new object, or NULL if none available.
+ */
+struct efct_io_s *
+efct_io_pool_io_alloc(struct efct_io_pool_s *io_pool)
+{
+	struct efct_io_s *io = NULL;
+	struct efct_s *efct;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+	io = efct_pool_get(io_pool->pool);
+	if (io) {
+		spin_unlock_irqrestore(&io_pool->lock, flags);
+
+		io->io_type = EFCT_IO_TYPE_MAX;
+		io->hio_type = EFCT_HW_IO_MAX;
+		io->hio = NULL;
+		io->transferred = 0;
+		io->efct = efct;
+		io->timeout = 0;
+		io->sgl_count = 0;
+		io->tgt_task_tag = 0;
+		io->init_task_tag = 0;
+		io->hw_tag = 0;
+		io->display_name = "pending";
+		io->seq_init = 0;
+		io->els_req_free = false;
+		io->io_free = 0;
+		io->release = NULL;
+		atomic_add_return(1, &efct->xport->io_active_count);
+		atomic_add_return(1, &efct->xport->io_total_alloc);
+	} else {
+		spin_unlock_irqrestore(&io_pool->lock, flags);
+	}
+	return io;
+}
+
+/**
+ * @ingroup io_alloc
+ * @brief Free an object used to track an IO.
+ *
+ * @param io_pool Pointer to IO pool object.
+ * @param io Pointer to the IO object.
+ */
+void
+efct_io_pool_io_free(struct efct_io_pool_s *io_pool, struct efct_io_s *io)
+{
+	struct efct_s *efct;
+	struct efct_hw_io_s *hio = NULL;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+	hio = io->hio;
+	io->hio = NULL;
+	io->io_free = 1;
+	efct_pool_put_head(io_pool->pool, io);
+	spin_unlock_irqrestore(&io_pool->lock, flags);
+
+	if (hio)
+		efct_hw_io_free(&efct->hw, hio);
+
+	atomic_sub_return(1, &efct->xport->io_active_count);
+	atomic_add_return(1, &efct->xport->io_total_free);
+}
+
+/**
+ * @ingroup io_alloc
+ * @brief Find an I/O given its node, OX_ID, and RX_ID.
+ *
+ * @param efct Driver instance's software context.
+ * @param node Pointer to node.
+ * @param ox_id OX_ID to find.
+ * @param rx_id RX_ID to find (0xffff for unassigned).
+ *
+ * @return Returns a pointer to the matching IO with a reference held, or
+ * NULL if not found.
+ */
+struct efct_io_s *
+efct_io_find_tgt_io(struct efct_s *efct, struct efc_node_s *node,
+		    u16 ox_id, u16 rx_id)
+{
+	struct efct_io_s *io = NULL;
+	struct efct_io_s *found = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry(io, &node->active_ios, list_entry) {
+		if ((io->cmd_tgt && io->init_task_tag == ox_id) &&
+		    (rx_id == 0xffff || io->tgt_task_tag == rx_id)) {
+			/* only return the IO if a reference can be taken */
+			if (kref_get_unless_zero(&io->ref))
+				found = io;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return found;
+}
+
+/**
+ * @ingroup io_alloc
+ * @brief Return IO context given the instance index.
+ *
+ * @par Description
+ * Returns a pointer to the IO context given by the instance index.
+ *
+ * @param efct Pointer to driver structure.
+ * @param index IO instance index to return.
+ *
+ * @return Returns a pointer to the IO context, or NULL if not found.
+ */
+struct efct_io_s *
+efct_io_get_instance(struct efct_s *efct, u32 index)
+{
+	struct efct_xport_s *xport = efct->xport;
+	struct efct_io_pool_s *io_pool = xport->io_pool;
+
+	return efct_pool_get_instance(io_pool->pool, index);
+}
diff --git a/drivers/scsi/elx/efct/efct_io.h b/drivers/scsi/elx/efct/efct_io.h
new file mode 100644
index 000000000000..4a4278433e25
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.h
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_IO_H__)
+#define __EFCT_IO_H__
+
+#define io_error_log(io, fmt, ...)  \
+	do { \
+		if (EFCT_LOG_ENABLE_IO_ERRORS(io->efct)) \
+			efc_log_warn(io->efct, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+/**
+ * @brief FCP IO context
+ *
+ * This structure is used for transport and backend IO requests and responses.
+ */
+
+#define SCSI_CMD_BUF_LENGTH	48
+#define SCSI_RSP_BUF_LENGTH	(FCP_RESP_WITH_EXT + SCSI_SENSE_BUFFERSIZE)
+#define EFCT_NUM_SCSI_IOS	8192
+
+#include "efct_lio.h"
+/**
+ * @brief EFCT IO types
+ */
+enum efct_io_type_e {
+	EFCT_IO_TYPE_IO = 0,
+	EFCT_IO_TYPE_ELS,
+	EFCT_IO_TYPE_CT,
+	EFCT_IO_TYPE_CT_RESP,
+	EFCT_IO_TYPE_BLS_RESP,
+	EFCT_IO_TYPE_ABORT,
+
+	EFCT_IO_TYPE_MAX,		/* must be last */
+};
+
+enum efct_els_state_e {
+	EFCT_ELS_REQUEST = 0,
+	EFCT_ELS_REQUEST_DELAYED,
+	EFCT_ELS_REQUEST_DELAY_ABORT,
+	EFCT_ELS_REQ_ABORT,
+	EFCT_ELS_REQ_ABORTED,
+	EFCT_ELS_ABORT_IO_COMPL,
+};
+
+struct efct_io_s {
+	struct list_head list_entry;
+	struct list_head io_pending_link;
+	/* reference counter and callback function */
+	struct kref ref;
+	void (*release)(struct kref *arg);
+	/* pointer back to efct */
+	struct efct_s *efct;
+	/* unique instance index value */
+	u32 instance_index;
+	/* display name */
+	const char *display_name;
+	/* pointer to node */
+	struct efc_node_s *node;
+	/* (io_pool->io_free_list) free list link */
+	u32 init_task_tag;
+	/* target task tag (RX_ID) - for back-end and SCSI logging */
+	u32 tgt_task_tag;
+	/* HW layer unique IO id - for back-end and SCSI logging */
+	u32 hw_tag;
+	/* unique IO identifier */
+	u32 tag;
+	/* SGL */
+	struct efct_scsi_sgl_s *sgl;
+	/* Number of allocated SGEs */
+	u32 sgl_allocated;
+	/* Number of SGEs in this SGL */
+	u32 sgl_count;
+	/* backend target private IO data */
+	struct efct_scsi_tgt_io_s tgt_io;
+	/* expected data transfer length, based on FC header */
+	u32 exp_xfer_len;
+
+	/* Declarations private to HW/SLI */
+	void *hw_priv;			/* HW private context */
+
+	/* Declarations private to FC Transport */
+
+	/* indicates what this struct efct_io_s structure is used for */
+	enum efct_io_type_e io_type;
+	/* pointer back to dslab allocation object */
+	void *dslab_item;
+	struct efct_hw_io_s *hio;		/* HW IO context */
+	size_t transferred;		/* Number of bytes transferred so far */
+
+	/* set if auto_trsp was set */
+	bool auto_resp;
+	/* set if low latency request */
+	bool low_latency;
+	/* selected WQ steering request */
+	u8 wq_steering;
+	/* selected WQ class if steering is class */
+	u8 wq_class;
+	/* transfer size for current request */
+	u64 xfer_req;
+	/* target callback function */
+	efct_scsi_io_cb_t scsi_tgt_cb;
+	/* target callback function argument */
+	void *scsi_tgt_cb_arg;
+	/* abort callback function */
+	efct_scsi_io_cb_t abort_cb;
+	/* abort callback function argument */
+	void *abort_cb_arg;
+	/* BLS callback function */
+	efct_scsi_io_cb_t bls_cb;
+	/* BLS callback function argument */
+	void *bls_cb_arg;
+	/* TMF command being processed */
+	enum efct_scsi_tmf_cmd_e tmf_cmd;
+	/* rx_id from the ABTS that initiated the command abort */
+	u16 abort_rx_id;
+
+	/* True if this is a Target command */
+	bool cmd_tgt;
+	/* when aborting, indicates ABTS is to be sent */
+	bool send_abts;
+	/* True if this is an Initiator command */
+	bool cmd_ini;
+	/* True if local node has sequence initiative */
+	bool seq_init;
+	/* iparams for hw io send call */
+	union efct_hw_io_param_u iparam;
+	/* HW formatted DIF parameters */
+	struct efct_hw_dif_info_s hw_dif;
+	/* DIF info saved for DIF error recovery */
+	struct efct_scsi_dif_info_s scsi_dif_info;
+	/* HW IO type */
+	enum efct_hw_io_type_e hio_type;
+	/* wire length */
+	u64 wire_len;
+	/* saved HW callback */
+	void *hw_cb;
+	/* Overflow SGL */
+	struct efc_dma_s ovfl_sgl;
+
+	/* for ELS requests/responses */
+	/* True if ELS is pending */
+	bool els_pend;
+	/* True if ELS is active */
+	bool els_active;
+	/* ELS request payload buffer */
+	struct efc_dma_s els_req;
+	/* ELS response payload buffer */
+	struct efc_dma_s els_rsp;
+	/* this ELS is to be freed */
+	bool els_req_free;
+	/* Retries remaining */
+	u32 els_retries_remaining;
+	void (*els_callback)(struct efc_node_s *node,
+			     struct efc_node_cb_s *cbdata, void *cbarg);
+	void *els_callback_arg;
+	/* timeout */
+	u32 els_timeout_sec;
+
+	/* delay timer */
+	struct timer_list delay_timer;
+
+	/* for abort handling */
+	/* pointer to IO to abort */
+	struct efct_io_s *io_to_abort;
+
+	enum efct_els_state_e	state;
+	/* Protects els cmds */
+	spinlock_t	els_lock;
+
+	/* SCSI Command buffer, used for CDB (initiator) */
+	struct efc_dma_s cmdbuf;
+	/* SCSI Response buffer (i+t) */
+	struct efc_dma_s rspbuf;
+	/* Timeout value in seconds for this IO */
+	u32  timeout;
+	/* CS_CTL priority for this IO */
+	u8   cs_ctl;
+	/* Is IO object in the free list? */
+	u8	  io_free;
+	u32  app_id;
+};
+
+/**
+ * @brief common IO callback argument
+ *
+ * Callback argument used as common I/O callback argument
+ */
+
+struct efct_io_cb_arg_s {
+	int status;		/* completion status */
+	int ext_status;	/* extended completion status */
+	void *app;		/* application argument */
+};
+
+struct efct_io_pool_s *
+efct_io_pool_create(struct efct_s *efct, u32 num_io, u32 num_sgl);
+extern int
+efct_io_pool_free(struct efct_io_pool_s *io_pool);
+extern u32
+efct_io_pool_allocated(struct efct_io_pool_s *io_pool);
+
+extern struct efct_io_s *
+efct_io_pool_io_alloc(struct efct_io_pool_s *io_pool);
+extern void
+efct_io_pool_io_free(struct efct_io_pool_s *io_pool, struct efct_io_s *io);
+extern struct efct_io_s *
+efct_io_find_tgt_io(struct efct_s *efct, struct efc_node_s *node,
+		    u16 ox_id, u16 rx_id);
+#endif /* __EFCT_IO_H__ */
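
A quick usage sketch of the IO pool interface above, for review convenience
only. The call sites, error handling, and the efct/num_sgl variables are
illustrative assumptions; only the efct_io_pool_* names and
EFCT_NUM_SCSI_IOS come from efct_io.h:

    struct efct_io_pool_s *pool;
    struct efct_io_s *io;

    /* one-time setup: back the pool with EFCT_NUM_SCSI_IOS IO contexts */
    pool = efct_io_pool_create(efct, EFCT_NUM_SCSI_IOS, num_sgl);
    if (!pool)
        return -ENOMEM;

    /* per command: take an IO tracking object from the pool ... */
    io = efct_io_pool_io_alloc(pool);
    if (io) {
        /* ... use it for the exchange, then return it */
        efct_io_pool_io_free(pool, io);
    }

    /* teardown: release the pool and its per-IO DMA buffers */
    efct_io_pool_free(pool);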
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 21/32] elx: efct: Unsolicited FC frame processing routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (19 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 20/32] elx: efct: Hardware queues processing James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 22/32] elx: efct: Extended link Service IO handling James Smart
                   ` (11 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to handle unsolicited FC frames.
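
As a brief illustration of the frame hold/accept pair added here (the
surrounding FCFI/domain registration step is assumed for illustration only
and is not part of this patch):

    /* while the domain is being (re)registered, park new unsolicited
     * frames on the xport pending list instead of dispatching them
     */
    efct_domain_hold_frames(efc, domain);

    /* ... assumed: domain attach / FCFI registration completes ... */

    /* drop the hold; efct_domain_accept_frames() then calls
     * efct_domain_process_pending() to replay the queued frames to the
     * discovery engine in arrival order
     */
    efct_domain_accept_frames(efc, domain);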

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c    |    2 +
 drivers/scsi/elx/efct/efct_unsol.c | 1156 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_unsol.h |   49 ++
 3 files changed, 1207 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index aab66f5d7908..9ce31326ce38 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -6,6 +6,8 @@
 
 #include "efct_driver.h"
 #include "efct_hw.h"
+#include "efct_hw_queues.h"
+#include "efct_unsol.h"
 
 #define EFCT_HW_MQ_DEPTH		128
 #define EFCT_HW_WQ_TIMER_PERIOD_MS	500
diff --git a/drivers/scsi/elx/efct/efct_unsol.c b/drivers/scsi/elx/efct/efct_unsol.c
new file mode 100644
index 000000000000..3b711f2d526e
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.c
@@ -0,0 +1,1156 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+#include "efct_unsol.h"
+
+#define frame_printf(efct, hdr, fmt, ...) \
+	do { \
+		char s_id_text[16]; \
+		efc_node_fcid_display(ntoh24((hdr)->fh_s_id), \
+			s_id_text, sizeof(s_id_text)); \
+		efc_log_debug(efct, "[%06x.%s] %02x/%04x/%04x: " fmt, \
+			ntoh24((hdr)->fh_d_id), s_id_text, \
+			(hdr)->fh_r_ctl, be16_to_cpu((hdr)->fh_ox_id), \
+			be16_to_cpu((hdr)->fh_rx_id), ##__VA_ARGS__); \
+	} while (0)
+
+static int
+efct_unsol_process(struct efct_s *efct, struct efc_hw_sequence_s *seq);
+static int
+efct_fc_tmf_rejected_cb(struct efct_io_s *io,
+			enum efct_scsi_io_status_e scsi_status,
+		       u32 flags, void *arg);
+static struct efc_hw_sequence_s *
+efct_frame_next(struct list_head *pend_list, spinlock_t *list_lock);
+static bool efct_node_frames_held(void *arg);
+static bool efct_domain_frames_held(void *arg);
+static int
+efct_purge_pending(struct efct_s *efct,
+		   struct list_head *pend_list, spinlock_t *list_lock);
+static int efct_sframe_send_task_set_full_or_busy(struct efc_node_s *node,
+						  struct efc_hw_sequence_s *s);
+
+/**
+ * @ingroup unsol
+ * @brief Handle unsolicited FC frames.
+ *
+ * <h3 class="desc">Description</h3>
+ * This function is called from the HW with unsolicited FC
+ * frames (FCP, ELS, BLS, etc.).
+ *
+ * @param arg Application-specified callback data.
+ * @param seq Header/payload sequence buffers.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence_s *seq)
+{
+	struct efct_s *efct = arg;
+	int rc;
+
+	rc = efct_unsol_process(efct, seq);
+
+	if (rc)
+		efct_hw_sequence_free(&efct->hw, seq);
+
+	return 0;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Handle unsolicited FC frames.
+ *
+ * <h3 class="desc">Description</h3>
+ * This function is called from efct_unsolicited_cb()
+ *
+ * @param efct Pointer to the efct structure.
+ * @param seq Header/payload sequence buffers.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+static int
+efct_unsol_process(struct efct_s *efct, struct efc_hw_sequence_s *seq)
+{
+	struct efct_xport_fcfi_s *xport_fcfi = NULL;
+	struct efc_domain_s *domain;
+	struct efct_hw_s *hw = &efct->hw;
+	unsigned long flags = 0;
+
+	xport_fcfi = &efct->xport->fcfi;
+
+	/* If the transport FCFI entry is NULL, then drop the frame */
+	if (!xport_fcfi) {
+		efc_log_test(efct,
+			      "FCFI %d is not valid, dropping frame\n",
+			seq->fcfi);
+
+		efct_hw_sequence_free(&efct->hw, seq);
+		return 0;
+	}
+
+	domain = hw->domain;
+
+	/*
+	 * If we are holding frames or the domain is not yet registered or
+	 * there's already frames on the pending list,
+	 * then add the new frame to pending list
+	 */
+	if (!domain ||
+	    xport_fcfi->hold_frames ||
+	    !list_empty(&xport_fcfi->pend_frames)) {
+		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &xport_fcfi->pend_frames);
+		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
+
+		if (domain) {
+			/* immediately process pending frames */
+			efct_domain_process_pending(domain);
+		}
+	} else {
+		/*
+		 * We are not holding frames and pending list is empty,
+		 * just process frame. A non-zero return means the frame
+		 * was not handled - so cleanup
+		 */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+	}
+	return 0;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Process pending frames queued to the given node.
+ *
+ * <h3 class="desc">Description</h3>
+ * Frames that are queued for the \c node are dispatched and returned
+ * to the RQ.
+ *
+ * @param node Node of the queued frames that are to be dispatched.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efct_process_node_pending(struct efc_node_s *node)
+{
+	struct efct_s *efct = node->efc->base;
+	struct efc_hw_sequence_s *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (efct_node_frames_held(node))
+			break;
+
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		if (!list_empty(&node->pend_frames)) {
+			seq = list_first_entry(&node->pend_frames,
+					       struct efc_hw_sequence_s,
+					       list_entry);
+			list_del(&seq->list_entry);
+		}
+		if (!seq) {
+			pend_frames_processed = node->pend_frames_processed;
+			node->pend_frames_processed = 0;
+			spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+			break;
+		}
+		node->pend_frames_processed++;
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		if (efc_node_dispatch_frame(node, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+
+		/* reset so a stale frame is never re-dispatched */
+		seq = NULL;
+	}
+
+	if (pend_frames_processed != 0)
+		efc_log_debug(efct, "%u node frames held and processed\n",
+			       pend_frames_processed);
+
+	return 0;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Process pending frames queued to the given domain.
+ *
+ * <h3 class="desc">Description</h3>
+ * Frames that are queued for the \c domain are dispatched and
+ * returned to the RQ.
+ *
+ * @param domain Domain of the queued frames that are to be
+ *		 dispatched.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efct_domain_process_pending(struct efc_domain_s *domain)
+{
+	struct efct_s *efct = domain->efc->base;
+	struct efct_xport_fcfi_s *xport_fcfi;
+	struct efc_hw_sequence_s *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	xport_fcfi = &efct->xport->fcfi;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (efct_domain_frames_held(domain))
+			break;
+
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
+		if (!list_empty(&xport_fcfi->pend_frames)) {
+			seq = list_first_entry(&xport_fcfi->pend_frames,
+					       struct efc_hw_sequence_s,
+					       list_entry);
+			list_del(&seq->list_entry);
+		}
+		if (!seq) {
+			pend_frames_processed =
+				xport_fcfi->pend_frames_processed;
+			xport_fcfi->pend_frames_processed = 0;
+			spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock,
+					       flags);
+			break;
+		}
+		xport_fcfi->pend_frames_processed++;
+		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+
+		seq = NULL;
+	}
+	if (pend_frames_processed != 0)
+		efc_log_debug(efct, "%u domain frames held and processed\n",
+			       pend_frames_processed);
+	return 0;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Purge given pending list
+ *
+ * <h3 class="desc">Description</h3>
+ * Frames that are queued on the given pending list are
+ * discarded and returned to the RQ.
+ *
+ * @param efct Pointer to efct object.
+ * @param pend_list Pending list to be purged.
+ * @param list_lock Lock that protects pending list.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+static int
+efct_purge_pending(struct efct_s *efct, struct list_head *pend_list,
+		   spinlock_t *list_lock)
+{
+	struct efc_hw_sequence_s *frame;
+
+	for (;;) {
+		frame = efct_frame_next(pend_list, list_lock);
+		if (!frame)
+			break;
+
+		frame_printf(efct,
+			     (struct fc_frame_header *)frame->header->dma.virt,
+			     "Discarding held frame\n");
+		efct_hw_sequence_free(&efct->hw, frame);
+	}
+
+	return 0;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Purge node's pending (queued) frames.
+ *
+ * <h3 class="desc">Description</h3>
+ * Frames that are queued for the \c node are discarded and returned
+ * to the RQ.
+ *
+ * @param node Node of the queued frames that are to be discarded.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+int
+efct_node_purge_pending(struct efc_lport *efc, struct efc_node_s *node)
+{
+	struct efct_s *efct = efc->base;
+
+	return efct_purge_pending(efct, &node->pend_frames,
+				&node->pend_frames_lock);
+}
+
+/**
+ * @ingroup unsol
+ * @brief Purge xport's pending (queued) frames.
+ *
+ * <h3 class="desc">Description</h3>
+ * Frames that are queued for the \c xport are discarded and
+ * returned to the RQ.
+ *
+ * @param domain Pointer to domain object.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+int
+efct_domain_purge_pending(struct efc_domain_s *domain)
+{
+	struct efct_s *efct = domain->efc->base;
+	struct efct_xport_fcfi_s *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	return efct_purge_pending(efct,
+				 &xport_fcfi->pend_frames,
+				 &xport_fcfi->pend_frames_lock);
+}
+
+/**
+ * @ingroup unsol
+ * @brief Check if node's pending frames are held.
+ *
+ * @param arg Node for which the pending frame hold condition is
+ * checked.
+ *
+ * @return Returns 1 if node is holding pending frames, or 0
+ * if not.
+ */
+
+static bool efct_node_frames_held(void *arg)
+{
+	struct efc_node_s *node = (struct efc_node_s *)arg;
+
+	return node->hold_frames;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Check if domain's pending frames are held.
+ *
+ * @param arg Domain for which the pending frame hold condition is
+ * checked.
+ *
+ * @return Returns 1 if domain is holding pending frames, or 0
+ * if not.
+ */
+
+static bool efct_domain_frames_held(void *arg)
+{
+	struct efc_domain_s *domain = (struct efc_domain_s *)arg;
+	struct efct_s *efct = domain->efc->base;
+	struct efct_xport_fcfi_s *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	return xport_fcfi->hold_frames;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Globally (at xport level) hold unsolicited frames.
+ *
+ * <h3 class="desc">Description</h3>
+ * This function places a hold on processing unsolicited FC
+ * frames queued to the xport pending list.
+ *
+ * @param domain Pointer to domain object.
+ *
+ * @return Returns None.
+ */
+
+void
+efct_domain_hold_frames(struct efc_lport *efc, struct efc_domain_s *domain)
+{
+	struct efct_s *efct = domain->efc->base;
+	struct efct_xport_fcfi_s *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	if (!xport_fcfi->hold_frames) {
+		efc_log_debug(efct, "hold frames set for FCFI %d\n",
+			       domain->fcf_indicator);
+		xport_fcfi->hold_frames = true;
+	}
+}
+
+/**
+ * @ingroup unsol
+ * @brief Clear hold on unsolicited frames.
+ *
+ * <h3 class="desc">Description</h3>
+ * This function clears the hold on processing unsolicited FC
+ * frames queued to the domain pending list.
+ *
+ * @param domain Pointer to domain object.
+ *
+ * @return Returns None.
+ */
+
+void
+efct_domain_accept_frames(struct efc_lport *efc, struct efc_domain_s *domain)
+{
+	struct efct_s *efct = domain->efc->base;
+	struct efct_xport_fcfi_s *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	if (xport_fcfi->hold_frames) {
+		efc_log_debug(efct, "hold frames cleared for FCFI %d\n",
+			       domain->fcf_indicator);
+	}
+	xport_fcfi->hold_frames = false;
+	efct_domain_process_pending(domain);
+}
+
+/**
+ * @ingroup unsol
+ * @brief Dispatch unsolicited FCP frames (RQ Pair).
+ *
+ * <h3 class="desc">Description</h3>
+ * Dispatch unsolicited FCP frames (called from the device node state machine).
+ *
+ * @param io Pointer to the IO context.
+ * @param task_management_flags Task management flags from the FCP_CMND frame.
+ * @param node Node that originated the frame.
+ * @param lun 32-bit LUN from FCP_CMND frame.
+ *
+ * @return Returns None.
+ */
+
+static void
+efct_dispatch_unsolicited_tmf(struct efct_io_s *io,
+			      u8 task_management_flags,
+			      struct efc_node_s *node, u32 lun)
+{
+	u32 i;
+	struct {
+		u32 mask;
+		enum efct_scsi_tmf_cmd_e cmd;
+	} tmflist[] = {
+	{FCP_TMF_ABT_TASK_SET, EFCT_SCSI_TMF_ABORT_TASK_SET},
+	{FCP_TMF_CLR_TASK_SET, EFCT_SCSI_TMF_CLEAR_TASK_SET},
+	{FCP_TMF_LUN_RESET, EFCT_SCSI_TMF_LOGICAL_UNIT_RESET},
+	{FCP_TMF_TGT_RESET, EFCT_SCSI_TMF_TARGET_RESET},
+	{FCP_TMF_CLR_ACA, EFCT_SCSI_TMF_CLEAR_ACA} };
+
+	io->exp_xfer_len = 0; /* no data transfer is expected for a TMF */
+
+	for (i = 0; i < ARRAY_SIZE(tmflist); i++) {
+		if (tmflist[i].mask & task_management_flags) {
+			io->tmf_cmd = tmflist[i].cmd;
+			efct_scsi_recv_tmf(io, lun, tmflist[i].cmd, NULL, 0);
+			break;
+		}
+	}
+	if (i == ARRAY_SIZE(tmflist)) {
+		/* Not handled */
+		node_printf(node, "TMF x%x rejected\n", task_management_flags);
+		efct_scsi_send_tmf_resp(io, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+					NULL, efct_fc_tmf_rejected_cb, NULL);
+	}
+}
+
+static int
+efct_validate_fcp_cmd(struct efct_s *efct, struct efc_hw_sequence_s *seq)
+{
+	/*
+	 * If we received less than FCP_CMND_IU bytes, assume that the frame is
+	 * corrupted in some way and drop it.
+	 * This was seen when jamming the FCTL
+	 * fill bytes field.
+	 */
+	if (seq->payload->dma.len < sizeof(struct fcp_cmnd)) {
+		struct fc_frame_header	*fchdr = seq->header->dma.virt;
+
+		efc_log_debug(efct,
+			"drop ox_id %04x with payload (%zd) less than (%zd)\n",
+				    be16_to_cpu(fchdr->fh_ox_id),
+				    seq->payload->dma.len,
+				    sizeof(struct fcp_cmnd));
+		return -1;
+	}
+	return 0;
+}
+
+static void
+efct_populate_io_fcp_cmd(struct efct_io_s *io, struct fcp_cmnd *cmnd,
+			 struct fc_frame_header *fchdr, bool sit)
+{
+	io->init_task_tag = be16_to_cpu(fchdr->fh_ox_id);
+	/* note, tgt_task_tag, hw_tag  set when HW io is allocated */
+	io->exp_xfer_len = be32_to_cpu(cmnd->fc_dl);
+	io->transferred = 0;
+
+	/* The upper 7 bits of CS_CTL is the frame priority thru the SAN.
+	 * Our assertion here is, the priority given to a frame containing
+	 * the FCP cmd should be the priority given to ALL frames contained
+	 * in that IO. Thus we need to save the incoming CS_CTL here.
+	 */
+	if (ntoh24(fchdr->fh_f_ctl) & FC_FC_RES_B17)
+		io->cs_ctl = fchdr->fh_cs_ctl;
+	else
+		io->cs_ctl = 0;
+
+	io->seq_init = sit;
+}
+
+static u32
+efct_get_flags_fcp_cmd(struct fcp_cmnd *cmnd)
+{
+	u32 flags = 0;
+
+	switch (cmnd->fc_pri_ta & FCP_PTA_MASK) {
+	case FCP_PTA_SIMPLE:
+		flags |= EFCT_SCSI_CMD_SIMPLE;
+		break;
+	case FCP_PTA_HEADQ:
+		flags |= EFCT_SCSI_CMD_HEAD_OF_QUEUE;
+		break;
+	case FCP_PTA_ORDERED:
+		flags |= EFCT_SCSI_CMD_ORDERED;
+		break;
+	case FCP_PTA_ACA:
+		flags |= EFCT_SCSI_CMD_ACA;
+		break;
+	}
+	if (cmnd->fc_flags & FCP_CFL_WRDATA)
+		flags |= EFCT_SCSI_CMD_DIR_IN;
+	if (cmnd->fc_flags & FCP_CFL_RDDATA)
+		flags |= EFCT_SCSI_CMD_DIR_OUT;
+
+	return flags;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Dispatch unsolicited FCP_CMND frame.
+ *
+ * <h3 class="desc">Description</h3>
+ * Dispatch unsolicited FCP_CMND frame. RQ Pair mode - always
+ * used for RQ Pair mode since first burst is not supported.
+ *
+ * @param node Node that originated the frame.
+ * @param seq Header/payload sequence buffers.
+ *
+ * @return Returns 0 if frame processed and RX buffers cleaned
+ * up appropriately, -1 if frame not handled and RX buffers need
+ * to be returned.
+ */
+int
+efct_dispatch_fcp_cmd(struct efc_node_s *node, struct efc_hw_sequence_s *seq)
+{
+	struct efc_lport *efc = node->efc;
+	struct efct_s *efct = efc->base;
+	struct fc_frame_header *fchdr = seq->header->dma.virt;
+	struct fcp_cmnd	*cmnd = NULL;
+	struct efct_io_s *io = NULL;
+	u32 lun = U32_MAX;
+	int rc = 0;
+
+	if (!seq->payload) {
+		efc_log_err(efct, "Sequence payload is NULL.\n");
+		return -1;
+	}
+
+	cmnd = seq->payload->dma.virt;
+
+	/* perform FCP_CMND validation check(s) */
+	if (efct_validate_fcp_cmd(efct, seq))
+		return -1;
+
+	lun = scsilun_to_int(&cmnd->fc_lun);
+	if (lun == U32_MAX)
+		return -1;
+
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
+	if (!io) {
+		u32 send_frame_capable;
+
+		/* If we have SEND_FRAME capability, then use it to send
+		 * task set full or busy
+		 */
+		rc = efct_hw_get(&efct->hw, EFCT_HW_SEND_FRAME_CAPABLE,
+				 &send_frame_capable);
+		if (!rc && send_frame_capable) {
+			rc = efct_sframe_send_task_set_full_or_busy(node, seq);
+			if (rc)
+				efc_log_test(efct,
+					      "efct_sframe_task_full_or_busy failed: %d\n",
+					rc);
+			return rc;
+		}
+
+		efc_log_err(efct, "IO allocation failed ox_id %04x\n",
+			     be16_to_cpu(fchdr->fh_ox_id));
+		return -1;
+	}
+	io->hw_priv = seq->hw_priv;
+
+	io->app_id = 0;
+
+	/* RQ pair, if we got here, SIT=1 */
+	efct_populate_io_fcp_cmd(io, cmnd, fchdr, true);
+
+	if (cmnd->fc_tm_flags) {
+		efct_dispatch_unsolicited_tmf(io,
+					      cmnd->fc_tm_flags,
+					      node, lun);
+	} else {
+		u32 flags = efct_get_flags_fcp_cmd(cmnd);
+
+		if (cmnd->fc_flags & FCP_CFL_LEN_MASK) {
+			efc_log_err(efct, "Additional CDB not supported\n");
+			/* free the allocated IO before dropping the frame */
+			efct_scsi_io_free(io);
+			return -1;
+		}
+		/*
+		 * Can return failure for things like task set full and UAs,
+		 * no need to treat as a dropped frame if rc != 0
+		 */
+		efct_scsi_recv_cmd(io, lun, cmnd->fc_cdb,
+				   sizeof(cmnd->fc_cdb), flags);
+	}
+
+	/* successfully processed, now return RX buffer to the chip */
+	efct_hw_sequence_free(&efct->hw, seq);
+	return 0;
+}
+
+/**
+ * @ingroup unsol
+ * @brief Handle the callback for the TMF FUNCTION_REJECTED response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Handle the callback of a send TMF FUNCTION_REJECTED response request.
+ *
+ * @param io Pointer to the IO context.
+ * @param scsi_status Status of the response.
+ * @param flags Callback flags.
+ * @param arg Callback argument.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+static int
+efct_fc_tmf_rejected_cb(struct efct_io_s *io,
+			enum efct_scsi_io_status_e scsi_status,
+		       u32 flags, void *arg)
+{
+	efct_scsi_io_free(io);
+	return 0;
+}
+
+/**
+ * @brief Return next FC frame on node->pend_frames list
+ *
+ * The next FC frame on the node->pend_frames list is returned, or NULL
+ * if the list is empty.
+ *
+ * @param pend_list Pending list to be purged.
+ * @param list_lock Lock that protects pending list.
+ *
+ * @return Returns pointer to the next FC frame, or
+ * NULL if the pending frame list
+ * is empty.
+ */
+static struct efc_hw_sequence_s *
+efct_frame_next(struct list_head *pend_list, spinlock_t *list_lock)
+{
+	struct efc_hw_sequence_s *frame = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(list_lock, flags);
+
+	if (!list_empty(pend_list)) {
+		frame = list_first_entry(pend_list,
+					 struct efc_hw_sequence_s, list_entry);
+		list_del(&frame->list_entry);
+	}
+
+	spin_unlock_irqrestore(list_lock, flags);
+	return frame;
+}
+
+/**
+ * @brief Process send fcp response frame callback
+ *
+ * The function is called when the send FCP
+ * response posting has completed. Regardless
+ * of the outcome, the sequence is freed.
+ *
+ * @param arg Pointer to originator frame sequence.
+ * @param cqe Pointer to completion queue entry.
+ * @param status Status of operation.
+ *
+ * @return None.
+ */
+static void
+efct_sframe_common_send_cb(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_send_frame_context_s *ctx = arg;
+	struct efct_hw_s *hw = ctx->hw;
+
+	/* Free WQ completion callback */
+	efct_hw_reqtag_free(hw, ctx->wqcb);
+
+	/* Free sequence */
+	efct_hw_sequence_free(hw, ctx->seq);
+}
+
+/**
+ * @brief Send a frame, common code
+ *
+ * A frame is sent using SEND_FRAME, the R_CTL/F_CTL/TYPE may be specified,
+ * the payload is sent as a single frame.
+ *
+ * Memory resources are allocated from RQ buffers contained in the
+ * passed in sequence data.
+ *
+ * @param node Pointer to node object.
+ * @param seq Pointer to sequence object.
+ * @param r_ctl R_CTL value to place in FC header.
+ * @param info INFO value to place in FC header.
+ * @param f_ctl F_CTL value to place in FC header.
+ * @param type TYPE value to place in FC header.
+ * @param payload Pointer to payload data
+ * @param payload_len Length of payload in bytes.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+static int
+efct_sframe_common_send(struct efc_node_s *node,
+			struct efc_hw_sequence_s *seq,
+			enum fc_rctl r_ctl, u32 f_ctl,
+			u8 type, void *payload, u32 payload_len)
+{
+	struct efct_s *efct = node->efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+	enum efct_hw_rtn_e rc = 0;
+	struct fc_frame_header *req_hdr = seq->header->dma.virt;
+	struct fc_frame_header hdr;
+	struct efct_hw_send_frame_context_s *ctx;
+
+	u32 heap_size = seq->payload->dma.size;
+	uintptr_t heap_phys_base = seq->payload->dma.phys;
+	u8 *heap_virt_base = seq->payload->dma.virt;
+	u32 heap_offset = 0;
+
+	/* Build the FC header reusing the RQ header DMA buffer */
+	memset(&hdr, 0, sizeof(hdr));
+	hdr.fh_r_ctl = r_ctl;
+	/* send it back to whomever sent it to us */
+	memcpy(hdr.fh_d_id, req_hdr->fh_s_id, sizeof(hdr.fh_d_id));
+	memcpy(hdr.fh_s_id, req_hdr->fh_d_id, sizeof(hdr.fh_s_id));
+	hdr.fh_type = type;
+	hton24(hdr.fh_f_ctl, f_ctl);
+	hdr.fh_ox_id = req_hdr->fh_ox_id;
+	hdr.fh_rx_id = req_hdr->fh_rx_id;
+	hdr.fh_cs_ctl = 0;
+	hdr.fh_df_ctl = 0;
+	hdr.fh_seq_cnt = 0;
+	hdr.fh_parm_offset = 0;
+
+	/*
+	 * send_frame_seq_id is an atomic; let it increment and store only
+	 * the low 8 bits of the pre-increment value in fh_seq_id.
+	 */
+	hdr.fh_seq_id = (u8)atomic_add_return(1, &hw->send_frame_seq_id);
+	hdr.fh_seq_id--;
+
+	/* Allocate and fill in the send frame request context */
+	ctx = (void *)(heap_virt_base + heap_offset);
+	heap_offset += sizeof(*ctx);
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return -1;
+	}
+
+	memset(ctx, 0, sizeof(*ctx));
+
+	/* Save sequence */
+	ctx->seq = seq;
+
+	/* Allocate a response payload DMA buffer from the heap */
+	ctx->payload.phys = heap_phys_base + heap_offset;
+	ctx->payload.virt = heap_virt_base + heap_offset;
+	ctx->payload.size = payload_len;
+	ctx->payload.len = payload_len;
+	heap_offset += payload_len;
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return -1;
+	}
+
+	/* Copy the payload in */
+	memcpy(ctx->payload.virt, payload, payload_len);
+
+	/* Send */
+	rc = efct_hw_send_frame(&efct->hw, (void *)&hdr, FC_SOF_N3,
+				FC_EOF_T, &ctx->payload, ctx,
+				efct_sframe_common_send_cb, ctx);
+	if (rc)
+		efc_log_test(efct, "efct_hw_send_frame failed: %d\n", rc);
+
+	return rc ? -1 : 0;
+}
+
+/**
+ * @brief Send FCP response using SEND_FRAME
+ *
+ * The FCP response is send using the SEND_FRAME function.
+ *
+ * @param node Pointer to node object.
+ * @param seq Pointer to inbound sequence.
+ * @param rsp Pointer to response data.
+ * @param rsp_len Length of response data, in bytes.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+static int
+efct_sframe_send_fcp_rsp(struct efc_node_s *node,
+			 struct efc_hw_sequence_s *seq,
+			 void *rsp, u32 rsp_len)
+{
+	return efct_sframe_common_send(node, seq,
+				      FC_RCTL_DD_CMD_STATUS,
+				      FC_FC_EX_CTX |
+				      FC_FC_LAST_SEQ |
+				      FC_FC_END_SEQ |
+				      FC_FC_SEQ_INIT,
+				      FC_TYPE_FCP,
+				      rsp, rsp_len);
+}
+
+/**
+ * @brief Send task set full response
+ *
+ * Return a task set full or busy response using send frame.
+ *
+ * @param node Pointer to node object.
+ * @param seq Pointer to originator frame sequence.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+static int
+efct_sframe_send_task_set_full_or_busy(struct efc_node_s *node,
+				       struct efc_hw_sequence_s *seq)
+{
+	struct fcp_resp_with_ext fcprsp;
+	struct fcp_cmnd *fcpcmd = seq->payload->dma.virt;
+	int rc = 0;
+	unsigned long flags = 0;
+	struct efct_s *efct = node->efc->base;
+
+	/* construct task set full or busy response */
+	memset(&fcprsp, 0, sizeof(fcprsp));
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	fcprsp.resp.fr_status = list_empty(&node->active_ios) ?
+				SAM_STAT_BUSY : SAM_STAT_TASK_SET_FULL;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	/* nothing was transferred; report the full fc_dl as the residual */
+	fcprsp.ext.fr_resid = fcpcmd->fc_dl;
+
+	/* send it using send_frame */
+	rc = efct_sframe_send_fcp_rsp(node, seq, &fcprsp, sizeof(fcprsp));
+	if (rc)
+		efc_log_test(efct,
+			      "efct_sframe_send_fcp_rsp failed: %d\n",
+			rc);
+
+	return rc;
+}
+
+/**
+ * @brief Send BA_ACC using sent frame
+ *
+ * A BA_ACC is sent using SEND_FRAME
+ *
+ * @param node Pointer to node object.
+ * @param seq Pointer to originator frame sequence.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_sframe_send_bls_acc(struct efc_node_s *node,
+			 struct efc_hw_sequence_s *seq)
+{
+	struct fc_frame_header *behdr = seq->header->dma.virt;
+	u16 ox_id = be16_to_cpu(behdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(behdr->fh_rx_id);
+	struct fc_ba_acc acc = {0};
+
+	acc.ba_ox_id = cpu_to_be16(ox_id);
+	acc.ba_rx_id = cpu_to_be16(rx_id);
+	acc.ba_low_seq_cnt = U16_MAX;
+	acc.ba_high_seq_cnt = U16_MAX;
+
+	return efct_sframe_common_send(node, seq,
+				      FC_RCTL_BA_ACC,
+				      FC_FC_EX_CTX |
+				      FC_FC_LAST_SEQ |
+				      FC_FC_END_SEQ,
+				      FC_TYPE_BLS,
+				      &acc, sizeof(acc));
+}
+
+void
+efct_node_io_cleanup(struct efc_lport *efc, struct efc_node_s *node, bool force)
+{
+	struct efct_io_s *io;
+	struct efct_io_s *next;
+	unsigned long flags = 0;
+	struct efct_s *efct = efc->base;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(io, next, &node->active_ios, list_entry) {
+		list_del(&io->list_entry);
+		efct_io_pool_io_free(efct->xport->io_pool, io);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+void
+efct_node_els_cleanup(struct efc_lport *efc, struct efc_node_s *node,
+		      bool force)
+{
+	struct efct_io_s *els;
+	struct efct_io_s *els_next;
+	struct efct_io_s *ls_acc_io;
+	unsigned long flags = 0;
+	struct efct_s *efct = efc->base;
+
+	/* first cleanup ELS's that are pending (not yet active) */
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(els, els_next, &node->els_io_pend_list,
+				 list_entry) {
+		/*
+		 * skip the ELS IO for which a response
+		 * will be sent after shutdown
+		 */
+		if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE &&
+		    els == node->ls_acc_io) {
+			continue;
+		}
+		/*
+		 * can't call efct_els_io_free()
+		 * because lock is held; cleanup manually
+		 */
+		node_printf(node, "Freeing pending els %s\n",
+			    els->display_name);
+		list_del(&els->list_entry);
+
+		dma_free_coherent(&efct->pcidev->dev,
+				  els->els_rsp.size, els->els_rsp.virt,
+				  els->els_rsp.phys);
+		dma_free_coherent(&efct->pcidev->dev,
+				  els->els_req.size, els->els_req.virt,
+				  els->els_req.phys);
+
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	ls_acc_io = node->ls_acc_io;
+
+	if (node->ls_acc_io && ls_acc_io->hio) {
+		/*
+		 * if there's an IO that will result in an LS_ACC after
+		 * shutdown and its HW IO is non-NULL, it better be an
+		 * implicit logout in vanilla sequence coalescing. In this
+		 * case, force the LS_ACC to go out on another XRI (hio)
+		 * since the previous will have been aborted by the UNREG_RPI
+		 */
+		node_printf(node,
+			    "invalidating ls_acc_io due to implicit logo\n");
+
+		/*
+		 * No need to abort because the unreg_rpi
+		 * takes care of it, just free
+		 */
+		efct_hw_io_free(&efct->hw, ls_acc_io->hio);
+
+		/* NULL out hio to force the LS_ACC to grab a new XRI */
+		ls_acc_io->hio = NULL;
+	}
+}
+
+void
+efct_node_abort_all_els(struct efc_lport *efc, struct efc_node_s *node)
+{
+	struct efct_io_s *els;
+	struct efct_io_s *els_next;
+	struct efc_node_cb_s cbdata;
+	struct efct_s *efct = efc->base;
+	unsigned long flags = 0;
+
+	memset(&cbdata, 0, sizeof(struct efc_node_cb_s));
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(els, els_next, &node->els_io_active_list,
+				 list_entry) {
+		if (els->els_req_free)
+			continue;
+		efc_log_debug(efct, "[%s] initiate ELS abort %s\n",
+			       node->display_name, els->display_name);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		efct_els_abort(els, &cbdata);
+		spin_lock_irqsave(&node->active_ios_lock, flags);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+/**
+ * @brief Process the ABTS.
+ *
+ * <h3 class="desc">Description</h3>
+ * Common code to process a received ABTS. If an active IO can be found
+ * that matches the OX_ID of the ABTS request, a call is made to the
+ * backend. Otherwise, a BA_RJT is returned to the initiator.
+ *
+ * @param io Pointer to a SCSI IO object.
+ * @param hdr Pointer to the FC header.
+ *
+ * @return Returns 0 on success, or a negative error value on failure.
+ */
+
+static int
+efct_process_abts(struct efct_io_s *io, struct fc_frame_header *hdr)
+{
+	struct efc_node_s *node = io->node;
+	struct efct_s *efct = io->efct;
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	struct efct_io_s *abortio;
+
+	/* Find IO and attempt to take a reference on it */
+	abortio = efct_io_find_tgt_io(efct, node, ox_id, rx_id);
+
+	if (abortio) {
+		/* Got a reference on the IO. Hold it until backend
+		 * is notified below
+		 */
+		node_printf(node, "Abort request: ox_id [%04x] rx_id [%04x]\n",
+			    ox_id, rx_id);
+
+		/*
+		 * Save the ox_id for the ABTS as the init_task_tag in our
+		 * manufactured
+		 * TMF IO object
+		 */
+		io->display_name = "abts";
+		io->init_task_tag = ox_id;
+		/* don't set tgt_task_tag, don't want to confuse with XRI */
+
+		/*
+		 * Save the rx_id from the ABTS as it is
+		 * needed for the BLS response,
+		 * regardless of the IO context's rx_id
+		 */
+		io->abort_rx_id = rx_id;
+
+		/* Call target server command abort */
+		io->tmf_cmd = EFCT_SCSI_TMF_ABORT_TASK;
+		efct_scsi_recv_tmf(io, abortio->tgt_io.lun,
+				   EFCT_SCSI_TMF_ABORT_TASK, abortio, 0);
+
+		/*
+		 * Backend will have taken an additional
+		 * reference on the IO if needed;
+		 * done with current reference.
+		 */
+		kref_put(&abortio->ref, abortio->release);
+	} else {
+		/*
+		 * Either the IO was not found or it was freed between
+		 * finding it and attempting to take the reference.
+		 */
+		node_printf(node,
+			    "Abort request: ox_id [%04x], IO not found (exists=%d)\n",
+			    ox_id, (abortio != NULL));
+
+		/* Send a BA_RJT */
+		efct_bls_send_rjt_hdr(io, hdr);
+	}
+	return 0;
+}
+
+/**
+ * @ingroup node_common
+ * @brief Dispatch a ABTS frame (RQ Pair/sequence coalescing).
+ *
+ * <h3 class="desc">Description</h3>
+ * An ABTS frame is dispatched to the node state machine. This
+ * function is used for both RQ Pair and sequence coalescing.
+ *
+ * @param node Node that originated the frame.
+ * @param seq Header/payload sequence buffers
+ *
+ * @return Returns 0 if frame processed and RX buffers cleaned
+ * up appropriately, -1 if frame not handled and RX buffers need
+ * to be returned.
+ */
+
+int
+efct_node_recv_abts_frame(struct efc_lport *efc, struct efc_node_s *node,
+			  struct efc_hw_sequence_s *seq)
+{
+	struct efct_s *efct = efc->base;
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	struct efct_io_s *io = NULL;
+
+	node->abort_cnt++;
+
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
+	if (io) {
+		io->hw_priv = seq->hw_priv;
+		/* If we got this far, SIT=1 */
+		io->seq_init = 1;
+
+		/* fill out generic fields */
+		io->efct = efct;
+		io->node = node;
+		io->cmd_tgt = true;
+
+		efct_process_abts(io, seq->header->dma.virt);
+	} else {
+		node_printf(node,
+			    "SCSI IO allocation failed for ABTS received: s_id %06x d_id %06x ox_id %04x rx_id %04x\n",
+			    ntoh24(hdr->fh_s_id), ntoh24(hdr->fh_d_id),
+			    be16_to_cpu(hdr->fh_ox_id),
+			    be16_to_cpu(hdr->fh_rx_id));
+	}
+
+	/* ABTS processed, return RX buffer to the chip */
+	efct_hw_sequence_free(&efct->hw, seq->header->dma.virt);
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_unsol.h b/drivers/scsi/elx/efct/efct_unsol.h
new file mode 100644
index 000000000000..5c2cba9e4a47
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__OSC_UNSOL_H__)
+#define __OSC_UNSOL_H__
+
+extern int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence_s *seq);
+extern int
+efct_node_purge_pending(struct efc_lport *efc, struct efc_node_s *node);
+extern int
+efct_process_node_pending(struct efc_node_s *node);
+extern int
+efct_domain_process_pending(struct efc_domain_s *domain);
+extern int
+efct_domain_purge_pending(struct efc_domain_s *domain);
+extern int
+efct_dispatch_unsolicited_bls(struct efc_node_s *node,
+			      struct efc_hw_sequence_s *seq);
+extern void
+efct_domain_hold_frames(struct efc_lport *efc, struct efc_domain_s *domain);
+extern void
+efct_domain_accept_frames(struct efc_lport *efc, struct efc_domain_s *domain);
+extern void
+efct_seq_coalesce_cleanup(struct efct_hw_io_s *io, u8 count);
+extern int
+efct_sframe_send_bls_acc(struct efc_node_s *node,
+			 struct efc_hw_sequence_s *seq);
+extern int
+efct_dispatch_fcp_cmd(struct efc_node_s *node, struct efc_hw_sequence_s *seq);
+
+extern int
+efct_node_recv_abts_frame(struct efc_lport *efc, struct efc_node_s *node,
+			  struct efc_hw_sequence_s *seq);
+extern void
+efct_node_els_cleanup(struct efc_lport *efc, struct efc_node_s *node,
+		      bool force);
+
+extern void
+efct_node_io_cleanup(struct efc_lport *efc, struct efc_node_s *node,
+		     bool force);
+
+void
+efct_node_abort_all_els(struct efc_lport *efc, struct efc_node_s *node);
+
+#endif /* __OSC_UNSOL_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 22/32] elx: efct: Extended link Service IO handling
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (20 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 21/32] elx: efct: Unsolicited FC frame processing routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 23/32] elx: efct: SCSI IO handling routines James Smart
                   ` (10 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Functions to build and send ELS/CT/BLS commands and responses.
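
For reference, a discovery-layer caller is expected to drive these
routines through the two dispatch entry points added here; the calls
below are illustrative only (the timeout and retry counts are arbitrary):

	/* originate a PLOGI to the remote node */
	efct_els_req_send(efc, node, ELS_PLOGI, 10, 3);

	/* accept a received PRLI on its originator exchange ID */
	efct_els_resp_send(efc, node, ELS_PRLI, ox_id);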

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_els.c | 2676 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_els.h |  139 ++
 2 files changed, 2815 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_els.c
 create mode 100644 drivers/scsi/elx/efct/efct_els.h

diff --git a/drivers/scsi/elx/efct/efct_els.c b/drivers/scsi/elx/efct/efct_els.c
new file mode 100644
index 000000000000..5aef991712c2
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_els.c
@@ -0,0 +1,2676 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Functions to build and send ELS/CT/BLS commands and responses.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+
+#define ELS_IOFMT "[i:%04x t:%04x h:%04x]"
+
+#define node_els_trace()  \
+	do { \
+		if (EFCT_LOG_ENABLE_ELS_TRACE(efct)) \
+			efc_log_info(efct, "[%s] %-20s\n", \
+				node->display_name, __func__); \
+	} while (0)
+
+#define els_io_printf(els, fmt, ...) \
+	efc_log_debug((struct efct_s *)els->node->efc->base,\
+		      "[%s]" ELS_IOFMT " %-8s " fmt, \
+		      els->node->display_name,\
+		      els->init_task_tag, els->tgt_task_tag, els->hw_tag,\
+		      els->display_name, ##__VA_ARGS__)
+
+static int
+efct_els_send(struct efct_io_s *els,
+	      u32 reqlen, u32 timeout_sec,
+	efct_hw_srrs_cb_t cb);
+static int
+efct_els_send_rsp(struct efct_io_s *els, u32 rsplen);
+static int
+efct_els_acc_cb(struct efct_hw_io_s *hio, struct efc_remote_node_s *rnode,
+		u32 length, int status,
+		u32 ext_status, void *arg);
+static struct efct_io_s *
+efct_bls_send_acc(struct efct_io_s *, u32 s_id,
+		  u16 ox_id, u16 rx_id);
+static int
+efct_bls_send_acc_cb(struct efct_hw_io_s *, struct efc_remote_node_s *rnode,
+		     u32 length, int status,
+		u32 ext_status, void *app);
+static struct efct_io_s *
+efct_bls_send_rjt(struct efct_io_s *, u32 s_id,
+		  u16 ox_id, u16 rx_id);
+static int
+efct_bls_send_rjt_cb(struct efct_hw_io_s *, struct efc_remote_node_s *rnode,
+		     u32 length, int status,
+		u32 ext_status, void *app);
+
+static int
+efct_els_req_cb(struct efct_hw_io_s *hio, struct efc_remote_node_s *rnode,
+		u32 length, int status, u32 ext_status, void *arg);
+static struct efct_io_s *
+efct_els_abort_io(struct efct_io_s *els, bool send_abts);
+static void
+efct_els_delay_timer_cb(struct timer_list *t);
+
+static void
+efct_els_retry(struct efct_io_s *els);
+
+static void
+efct_els_abort_cleanup(struct efct_io_s *els);
+
+#define EFCT_ELS_RSP_LEN		1024
+#define EFCT_ELS_GID_PT_RSP_LEN	8096 /* Enough for 2K remote target nodes */
+
+void *
+efct_els_req_send(struct efc_lport *efc, struct efc_node_s *node, u32 cmd,
+		  u32 timeout_sec, u32 retries)
+{
+	struct efct_s *efct = efc->base;
+
+	switch (cmd) {
+	case ELS_PLOGI:
+		efc_log_debug(efct, "send efct_send_plogi\n");
+		efct_send_plogi(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_FLOGI:
+		efc_log_debug(efct, "send efct_send_flogi\n");
+		efct_send_flogi(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_LOGO:
+		efc_log_debug(efct, "send efct_send_logo\n");
+		efct_send_logo(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_PRLI:
+		efc_log_debug(efct, "send efct_send_prli\n");
+		efct_send_prli(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_ADISC:
+		efc_log_debug(efct, "send efct_send_adisc\n");
+		efct_send_adisc(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_SCR:
+		efc_log_debug(efct, "send efct_send_scr\n");
+		efct_send_scr(node, timeout_sec, retries, NULL, NULL);
+		break;
+	default:
+		efc_log_debug(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+void *
+efct_els_resp_send(struct efc_lport *efc, struct efc_node_s *node,
+		   u32 cmd, u16 ox_id)
+{
+	struct efct_s *efct = efc->base;
+
+	switch (cmd) {
+	case ELS_PLOGI:
+		efct_send_plogi_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_FLOGI:
+		efct_send_flogi_acc(node, ox_id, 0, NULL, NULL);
+		break;
+	case ELS_LOGO:
+		efct_send_logo_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PRLI:
+		efct_send_prli_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PRLO:
+		efct_send_prlo_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_ADISC:
+		efct_send_adisc_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_LS_ACC:
+		efct_send_ls_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PDISC:
+	case ELS_FDISC:
+	case ELS_RSCN:
+	case ELS_SCR:
+		efct_send_ls_rjt(efc, node, ox_id, ELS_RJT_UNAB,
+				 ELS_EXPL_NONE, 0);
+		break;
+	default:
+		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Allocate an IO structure for an ELS IO context.
+ *
+ * <h3 class="desc">Description</h3>
+ * Allocate an IO for an ELS context.
+ * Uses EFCT_ELS_RSP_LEN as response size.
+ *
+ * @param node node to associate ELS IO with
+ * @param reqlen Length of ELS request
+ * @param role Role of ELS (originator/responder)
+ *
+ * @return pointer to IO structure allocated
+ */
+
+struct efct_io_s *
+efct_els_io_alloc(struct efc_node_s *node, u32 reqlen,
+		  enum efct_els_role_e role)
+{
+	return efct_els_io_alloc_size(node, reqlen, EFCT_ELS_RSP_LEN, role);
+}
+
+/**
+ * @ingroup els_api
+ * @brief Allocate an IO structure for an ELS IO context.
+ *
+ * <h3 class="desc">Description</h3>
+ * Allocate an IO for an ELS context, allowing the
+ * caller to specify the size of the response.
+ *
+ * @param node node to associate ELS IO with
+ * @param reqlen Length of ELS request
+ * @param rsplen Length of ELS response
+ * @param role Role of ELS (originator/responder)
+ *
+ * @return pointer to IO structure allocated
+ */
+
+struct efct_io_s *
+efct_els_io_alloc_size(struct efc_node_s *node, u32 reqlen,
+		       u32 rsplen, enum efct_els_role_e role)
+{
+	struct efct_s *efct;
+	struct efct_xport_s *xport;
+	struct efct_io_s *els;
+	unsigned long flags = 0;
+
+	efct = node->efc->base;
+
+	xport = efct->xport;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+
+	if (!node->io_alloc_enabled) {
+		efc_log_debug(efct,
+			       "called with io_alloc_enabled = FALSE\n");
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	els = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!els) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	/* initialize refcount */
+	kref_init(&els->ref);
+	els->release = _efct_els_io_free;
+
+	switch (role) {
+	case EFCT_ELS_ROLE_ORIGINATOR:
+		els->cmd_ini = true;
+		els->cmd_tgt = false;
+		break;
+	case EFCT_ELS_ROLE_RESPONDER:
+		els->cmd_ini = false;
+		els->cmd_tgt = true;
+		break;
+	}
+
+	/* IO should not have an associated HW IO yet.
+	 * Assigned below.
+	 */
+	if (els->hio) {
+		efc_log_err(efct,
+			     "assertion failed.  HIO is not null\n");
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	/* populate generic io fields */
+	els->efct = efct;
+	els->node = node;
+
+	/* set type and ELS-specific fields */
+	els->io_type = EFCT_IO_TYPE_ELS;
+	els->display_name = "pending";
+
+	/* now allocate DMA for request and response */
+	els->els_req.size = reqlen;
+	els->els_req.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					       els->els_req.size,
+					       &els->els_req.phys,
+					       GFP_DMA);
+	if (els->els_req.virt) {
+		els->els_rsp.size = rsplen;
+		els->els_rsp.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       els->els_rsp.size,
+						       &els->els_rsp.phys,
+						       GFP_DMA);
+		if (!els->els_rsp.virt) {
+			efc_log_err(efct, "dma_alloc rsp\n");
+			dma_free_coherent(&efct->pcidev->dev,
+					  els->els_req.size,
+				els->els_req.virt, els->els_req.phys);
+			efct_io_pool_io_free(efct->xport->io_pool, els);
+			els = NULL;
+		}
+	} else {
+		efc_log_err(efct, "dma_alloc req\n");
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+		els = NULL;
+	}
+
+	if (els) {
+		/* initialize fields */
+		els->els_retries_remaining =
+					EFCT_FC_ELS_DEFAULT_RETRIES;
+		els->els_pend = false;
+		els->els_active = false;
+
+		/* add els structure to ELS IO list */
+		INIT_LIST_HEAD(&els->list_entry);
+		list_add_tail(&els->list_entry,
+			      &node->els_io_pend_list);
+		els->els_pend = true;
+	}
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Free IO structure for an ELS IO context.
+ *
+ * <h3 class="desc">Description</h3> Free IO for an ELS
+ * IO context
+ *
+ * @param els ELS IO structure for which IO is allocated
+ *
+ * @return None
+ */
+
+void
+efct_els_io_free(struct efct_io_s *els)
+{
+	kref_put(&els->ref, els->release);
+}
+
+/**
+ * @ingroup els_api
+ * @brief Free IO structure for an ELS IO context.
+ *
+ * <h3 class="desc">Description</h3> Free IO for an ELS
+ * IO context
+ *
+ * @param arg ELS IO structure for which IO is allocated
+ *
+ * @return None
+ */
+
+void
+_efct_els_io_free(struct kref *arg)
+{
+	struct efct_io_s *els = container_of(arg, struct efct_io_s, ref);
+	struct efct_s *efct;
+	struct efc_node_s *node;
+	bool send_empty_event = false;
+	unsigned long flags = 0;
+
+	node = els->node;
+	efct = node->efc->base;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	if (els->els_active) {
+		/* if active, remove from active list and check empty */
+		list_del(&els->list_entry);
+		/* Send list empty event if the IO allocator is disabled
+		 * and the list is empty.
+		 * If node->io_alloc_enabled was not checked, the event
+		 * would be posted continually.
+		 */
+		send_empty_event = (!node->io_alloc_enabled) &&
+				   list_empty(&node->els_io_active_list);
+		els->els_active = false;
+	} else if (els->els_pend) {
+		/* if pending, remove from pending list;
+		 * node shutdown isn't gated off the pending list
+		 * (only the active list), so no need to check if
+		 * the pending list is empty
+		 */
+		list_del(&els->list_entry);
+		els->els_pend = false;
+	} else {
+		efc_log_err(efct,
+			    "assertion fail: neither els_pend nor els_active set\n");
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return;
+	}
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	/* free ELS request and response buffers */
+	dma_free_coherent(&efct->pcidev->dev, els->els_rsp.size,
+			  els->els_rsp.virt, els->els_rsp.phys);
+	dma_free_coherent(&efct->pcidev->dev, els->els_req.size,
+			  els->els_req.virt, els->els_req.phys);
+
+	efct_io_pool_io_free(efct->xport->io_pool, els);
+
+	if (send_empty_event)
+		efc_scsi_io_list_empty(node->efc, node);
+
+	efct_scsi_check_pending(efct);
+}
+
+/**
+ * @ingroup els_api
+ * @brief Make ELS IO active
+ *
+ * @param els Pointer to the IO context to make active.
+ *
+ * @return None.
+ */
+
+static void
+efct_els_make_active(struct efct_io_s *els)
+{
+	struct efc_node_s *node = els->node;
+	unsigned long flags = 0;
+
+	/* move ELS from pending list to active list */
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	if (els->els_pend) {
+		if (els->els_active) {
+			efc_log_err(node->efc,
+				    "assertion fail: both els_pend and els_active set\n");
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return;
+		}
+		/* remove from pending list */
+		list_del(&els->list_entry);
+		els->els_pend = false;
+
+		/* add els structure to ELS IO active list */
+		INIT_LIST_HEAD(&els->list_entry);
+		list_add_tail(&els->list_entry, &node->els_io_active_list);
+		els->els_active = true;
+	} else {
+		/* must be retrying; make sure it's already active */
+		if (!els->els_active)
+			efc_log_err(node->efc,
+				    "assertion fail: neither els_pend nor els_active set\n");
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
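+/**
+ * @brief Dispatch an ELS request and clean up on a send failure.
+ *
+ * <h3 class="desc">Description</h3>
+ * Helper around efct_els_send(): sends the request held in the \c els IO
+ * context using efct_els_req_cb as the completion callback. If the send
+ * fails, the IO is cleaned up with the EFC_HW_SRRS_ELS_REQ_FAIL event.
+ *
+ * @param node Node to which the ELS request is sent.
+ * @param els Pointer to the ELS IO context.
+ *
+ * @return None.
+ */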
+static void efct_els_send_req(struct efc_node_s *node, struct efct_io_s *els)
+{
+	int rc = 0;
+	struct efct_s *efct;
+
+	efct = node->efc->base;
+	rc = efct_els_send(els, els->els_req.size,
+			   els->els_timeout_sec, efct_els_req_cb);
+
+	if (rc) {
+		struct efc_node_cb_s cbdata;
+
+		cbdata.status = INT_MAX;
+		cbdata.ext_status = INT_MAX;
+		cbdata.els_rsp = els->els_rsp;
+		efc_log_err(efct, "efct_els_send failed: %d\n", rc);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+				    &cbdata);
+	}
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send the ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * The command, given by the \c els IO context,
+ * is sent to the node that the IO was
+ * configured with, using efct_hw_srrs_send().
+ * Upon completion,the \c cb callback is invoked,
+ * with the application-specific argument set to
+ * the \c els IO context.
+ *
+ * @param els Pointer to the IO context.
+ * @param reqlen Byte count in the payload to send.
+ * @param timeout_sec Command timeout, in seconds (0 -> 2*R_A_TOV).
+ * @param cb Completion callback.
+ *
+ * @return Returns 0 on success; or a negative error code value on failure.
+ */
+
+static int efct_els_send(struct efct_io_s *els, u32 reqlen,
+			 u32 timeout_sec, efct_hw_srrs_cb_t cb)
+{
+	struct efc_node_s *node = els->node;
+
+	/* update ELS request counter */
+	node->els_req_cnt++;
+
+	/* move ELS from pending list to active list */
+	efct_els_make_active(els);
+
+	els->wire_len = reqlen;
+	return efct_scsi_io_dispatch(els, cb);
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send the ELS response.
+ *
+ * <h3 class="desc">Description</h3>
+ * The ELS response, given by the \c els IO context, is sent to the node
+ * that the IO was configured with, using efct_hw_srrs_send().
+ *
+ * @param els Pointer to the IO context.
+ * @param rsplen Byte count in the payload to send.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+static int
+efct_els_send_rsp(struct efct_io_s *els, u32 rsplen)
+{
+	struct efc_node_s *node = els->node;
+
+	/* increment ELS completion counter */
+	node->els_cmpl_cnt++;
+
+	/* move ELS from pending list to active list */
+	efct_els_make_active(els);
+
+	els->wire_len = rsplen;
+	return efct_scsi_io_dispatch(els, efct_els_acc_cb);
+}
+
+/**
+ * @ingroup els_api
+ * @brief Handle ELS IO request completions.
+ *
+ * <h3 class="desc">Description</h3>
+ * This callback is used for several ELS send operations.
+ *
+ * @param hio Pointer to the HW IO context that completed.
+ * @param rnode Pointer to the remote node.
+ * @param length Length of the returned payload data.
+ * @param status Status of the completion.
+ * @param ext_status Extended status of the completion.
+ * @param arg Application-specific argument
+ * (generally a pointer to the ELS IO context).
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+static int
+efct_els_req_cb(struct efct_hw_io_s *hio, struct efc_remote_node_s *rnode,
+		u32 length, int status, u32 ext_status, void *arg)
+{
+	struct efct_io_s *els;
+	struct efc_node_s *node;
+	struct efct_s *efct;
+	struct efc_node_cb_s cbdata;
+	u32 reason_code;
+
+	els = arg;
+	node = els->node;
+	efct = node->efc->base;
+
+	if (status != 0)
+		els_io_printf(els, "status x%x ext x%x\n", status, ext_status);
+
+	/* set the response len element of els->rsp */
+	els->els_rsp.len = length;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->els_rsp;
+
+	/* FW returns the number of bytes received on the link in
+	 * the WCQE, not the amount placed in the buffer; use this info to
+	 * check if there was an overrun.
+	 */
+	if (length > els->els_rsp.size) {
+		efc_log_warn(efct,
+			      "ELS response returned len=%d > buflen=%zu\n",
+			     length, els->els_rsp.size);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
+		return 0;
+	}
+
+	/* Post event to ELS IO object */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_OK, &cbdata);
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		reason_code = (ext_status >> 16) & 0xff;
+
+		/* delay and retry if reason code is Logical Busy */
+		switch (reason_code) {
+		case ELS_RJT_BUSY:
+			els->node->els_req_cnt--;
+			els_io_printf(els,
+				      "LS_RJT Logical Busy response, delay and retry\n");
+			timer_setup(&els->delay_timer,
+				    efct_els_delay_timer_cb, 0);
+			mod_timer(&els->delay_timer,
+				  jiffies + msecs_to_jiffies(5000));
+			els->state = EFCT_ELS_REQUEST_DELAYED;
+			break;
+		default:
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_RJT,
+					    &cbdata);
+			break;
+		}
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+		switch (ext_status) {
+		case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT:
+			efct_els_retry(els);
+			break;
+
+		case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
+			if (els->state == EFCT_ELS_ABORT_IO_COMPL) {
+				/* completion for ELS that was aborted */
+				efct_els_abort_cleanup(els);
+			} else {
+				/* completion for ELS received first,
+				 * transition to wait for abort cmpl
+				 */
+				els->state = EFCT_ELS_REQ_ABORTED;
+			}
+
+			break;
+		default:
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    &cbdata);
+			break;
+		}
+		break;
+	default:	/* Other error */
+		efc_log_warn(efct,
+			     "els req failed status x%x, ext_status x%x\n",
+			     status, ext_status);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
+		break;
+	}
+
+	return 0;
+}
+
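+/**
+ * @ingroup els_api
+ * @brief Abort an in-progress ELS request without sending an ABTS.
+ *
+ * <h3 class="desc">Description</h3>
+ * Marks the \c els IO as done retrying. If the request is outstanding,
+ * an internal abort is issued via efct_els_abort_io(); if the request is
+ * in the delayed-retry state, the delay timer is rescheduled to fire
+ * immediately so the abort is handled from the timer callback.
+ *
+ * @param els Pointer to the ELS IO context to abort.
+ * @param arg Node callback data used if cleanup is required.
+ *
+ * @return None.
+ */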
+void
+efct_els_abort(struct efct_io_s *els, struct efc_node_cb_s *arg)
+{
+	struct efct_io_s *io = NULL;
+	struct efc_node_s *node;
+	struct efct_s *efct;
+
+	node = els->node;
+	efct = node->efc->base;
+
+	/* request to abort this ELS without an ABTS */
+	els_io_printf(els, "ELS abort requested\n");
+	/* Set retries to zero, we are done */
+	els->els_retries_remaining = 0;
+	if (els->state == EFCT_ELS_REQUEST) {
+		els->state = EFCT_ELS_REQ_ABORT;
+		io = efct_els_abort_io(els, false);
+		if (!io) {
+			efc_log_err(efct, "efct_els_abort_io failed\n");
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    arg);
+		}
+
+	} else if (els->state == EFCT_ELS_REQUEST_DELAYED) {
+		/* mod/resched the timer for a short duration */
+		mod_timer(&els->delay_timer,
+			  jiffies + msecs_to_jiffies(1));
+
+		els->state = EFCT_ELS_REQUEST_DELAY_ABORT;
+	}
+}
+
+/**
+ * @ingroup els_api
+ * @brief Handle ELS IO accept/response completions.
+ *
+ * <h3 class="desc">Description</h3>
+ * This callback is used for several ELS send operations.
+ *
+ * @param hio Pointer to the HW IO context that completed.
+ * @param rnode Pointer to the remote node.
+ * @param length Length of the returned payload data.
+ * @param status Status of the completion.
+ * @param ext_status Extended status of the completion.
+ * @param arg Application-specific argument
+ *	(generally a pointer to the ELS IO context).
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+static int
+efct_els_acc_cb(struct efct_hw_io_s *hio, struct efc_remote_node_s *rnode,
+		u32 length, int status, u32 ext_status, void *arg)
+{
+	struct efct_io_s *els;
+	struct efc_node_s *node;
+	struct efct_s *efct;
+	struct efc_node_cb_s cbdata;
+
+	els = arg;
+	node = els->node;
+	efct = node->efc->base;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->els_rsp;
+
+	/* Post node event */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_OK, &cbdata);
+		break;
+
+	default:	/* Other error */
+		efc_log_warn(efct,
+			     "[%s] %-8s failed status x%x, ext_status x%x\n",
+			     node->display_name, els->display_name,
+			     status, ext_status);
+		efc_log_warn(efct,
+			     "els acc complete: failed status x%x, ext_status x%x\n",
+			     status, ext_status);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_FAIL, &cbdata);
+		break;
+	}
+
+	return 0;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Format and send a PLOGI ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a PLOGI payload using the domain SLI port service parameters,
+ * and send to the \c node.
+ *
+ * @param node Node to which the PLOGI is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_plogi(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries,
+	      void (*cb)(struct efc_node_s *node,
+			 struct efc_node_cb_s *cbdata, void *arg), void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_flogi  *plogi;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "plogi";
+
+		/* Build PLOGI request */
+		plogi = els->els_req.virt;
+
+		memcpy(plogi, node->sport->service_params, sizeof(*plogi));
+
+		plogi->fl_cmd = ELS_PLOGI;
+		memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Format and send a FLOGI ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an FLOGI payload, and send to the \c node.
+ *
+ * @param node Node to which the FLOGI is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before
+ * reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_flogi(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct;
+	struct fc_els_flogi  *flogi;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "flogi";
+
+		/* Build FLOGI request */
+		flogi = els->els_req.virt;
+
+		memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+		flogi->fl_cmd = ELS_FLOGI;
+		memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Format and send a FDISC ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an FDISC payload, and send to the \c node.
+ *
+ * @param node Node to which the FDISC is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_fdisc(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct;
+	struct fc_els_flogi *fdisc;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*fdisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "fdisc";
+
+		/* Build FDISC request */
+		fdisc = els->els_req.virt;
+
+		memcpy(fdisc, node->sport->service_params, sizeof(*fdisc));
+		fdisc->fl_cmd = ELS_FDISC;
+		memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a PRLI ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a PRLI ELS command, and send to the \c node.
+ *
+ * @param node Node to which the PRLI is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_prli(struct efc_node_s *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct_s *efct = node->efc->base;
+	struct efct_io_s *els;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "prli";
+
+		/* Build PRLI request */
+		pp = els->els_req.virt;
+
+		memset(pp, 0, sizeof(*pp));
+
+		pp->prli.prli_cmd = ELS_PRLI;
+		pp->prli.prli_spp_len = 16;
+		pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+		pp->spp.spp_type = FC_TYPE_FCP;
+		pp->spp.spp_type_ext = 0;
+		pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR;
+		pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+				       (node->sport->enable_ini ?
+				       FCP_SPPF_INIT_FCN : 0) |
+				       (node->sport->enable_tgt ?
+				       FCP_SPPF_TARG_FCN : 0));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a PRLO ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a PRLO ELS command, and send to the \c node.
+ *
+ * @param node Node to which the PRLO is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_prlo(struct efc_node_s *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct_s *efct = node->efc->base;
+	struct efct_io_s *els;
+	struct {
+		struct fc_els_prlo prlo;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "prlo";
+
+		/* Build PRLO request */
+		pp = els->els_req.virt;
+
+		memset(pp, 0, sizeof(*pp));
+		pp->prlo.prlo_cmd = ELS_PRLO;
+		pp->prlo.prlo_obs = 0x10;
+		pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
+
+		pp->spp.spp_type = FC_TYPE_FCP;
+		pp->spp.spp_type_ext = 0;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a LOGO ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Format a LOGO, and send to the \c node.
+ *
+ * @param node Node to which the LOGO is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_logo(struct efc_node_s *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct;
+	struct fc_els_logo *logo;
+	struct fc_els_flogi  *sparams;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->sport->service_params;
+
+	els = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "logo";
+
+		/* Build LOGO request */
+
+		logo = els->els_req.virt;
+
+		memset(logo, 0, sizeof(*logo));
+		logo->fl_cmd = ELS_LOGO;
+		hton24(logo->fl_n_port_id, node->rnode.sport->fc_id);
+		logo->fl_n_port_wwn = sparams->fl_wwpn;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send an ADISC ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an ADISC ELS command, and send to the \c node.
+ *
+ * @param node Node to which the ADISC is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_adisc(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+	struct efc_sli_port_s *sport = node->sport;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->sport->service_params;
+
+	els = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "adisc";
+
+		/* Build ADISC request */
+
+		adisc = els->els_req.virt;
+
+		memset(adisc, 0, sizeof(*adisc));
+		adisc->adisc_cmd = ELS_ADISC;
+		hton24(adisc->adisc_hard_addr, sport->fc_id);
+		adisc->adisc_wwpn = sparams->fl_wwpn;
+		adisc->adisc_wwnn = sparams->fl_wwnn;
+		hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a PDISC ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a PDISC ELS command, and send to the \c node.
+ *
+ * @param node Node to which the PDISC is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_pdisc(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_flogi  *pdisc;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pdisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "pdisc";
+
+		pdisc = els->els_req.virt;
+
+		memcpy(pdisc, node->sport->service_params, sizeof(*pdisc));
+
+		pdisc->fl_cmd = ELS_PDISC;
+		memset(pdisc->_fl_resvd, 0, sizeof(pdisc->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send an SCR ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Format an SCR, and send to the \c node.
+ *
+ * @param node Node to which the SCR is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function
+ * @param cbarg Callback function arg
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_scr(struct efc_node_s *node, u32 timeout_sec, u32 retries,
+	      els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_scr *req;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "scr";
+
+		req = els->els_req.virt;
+
+		memset(req, 0, sizeof(*req));
+		req->scr_cmd = ELS_SCR;
+		req->scr_reg_func = ELS_SCRF_FULL;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send an RRQ ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Format an RRQ, and send to the \c node.
+ *
+ * @param node Node to which the RRQ is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function
+ * @param cbarg Callback function arg
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_rrq(struct efc_node_s *node, u32 timeout_sec, u32 retries,
+	      els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_scr *req;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rrq";
+
+		req = els->els_req.virt;
+
+		memset(req, 0, sizeof(*req));
+		req->scr_cmd = ELS_RRQ;
+		req->scr_reg_func = ELS_SCRF_FULL;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send an RSCN ELS command.
+ *
+ * <h3 class="desc">Description</h3>
+ * Format an RSCN, and send to the \c node.
+ *
+ * @param node Node to which the RSCN is sent.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param port_ids Pointer to port IDs
+ * @param port_ids_count Count of port IDs
+ * @param cb Callback function
+ * @param cbarg Callback function arg
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+struct efct_io_s *
+efct_send_rscn(struct efc_node_s *node, u32 timeout_sec, u32 retries,
+	       void *port_ids, u32 port_ids_count, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_rscn *req;
+	struct fc_els_rscn_page *rscn_page;
+	u32 length = sizeof(*rscn_page) * port_ids_count;
+
+	length += sizeof(*req);
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, length, EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rscn";
+
+		req = els->els_req.virt;
+
+		req->rscn_cmd = ELS_RSCN;
+		req->rscn_page_len = sizeof(struct fc_els_rscn_page);
+		req->rscn_plen = cpu_to_be16(length);
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		/* copy in the payload */
+		rscn_page = els->els_req.virt + sizeof(*req);
+		memcpy(rscn_page, port_ids,
+		       port_ids_count * sizeof(*rscn_page));
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @brief Send an LS_RJT ELS response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Send an LS_RJT ELS response.
+ *
+ * @param efc Pointer to the EFC lport.
+ * @param node Node to which the LS_RJT is sent.
+ * @param ox_id Originator exchange ID being responded to.
+ * @param reason_code Reason code value for LS_RJT.
+ * @param reason_code_expl Reason code explanation value for LS_RJT.
+ * @param vendor_unique Vendor-unique value for LS_RJT.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+void *
+efct_send_ls_rjt(struct efc_lport *efc, struct efc_node_s *node,
+		 u32 ox_id, u32 reason_code,
+		u32 reason_code_expl, u32 vendor_unique)
+{
+	struct efct_io_s *io = NULL;
+	int rc;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_ls_rjt *rjt;
+
+	io = efct_els_io_alloc(node, sizeof(*rjt), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	node_els_trace();
+
+	io->els_callback = NULL;
+	io->els_callback_arg = NULL;
+	io->display_name = "ls_rjt";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	rjt = io->els_req.virt;
+	memset(rjt, 0, sizeof(*rjt));
+
+	rjt->er_cmd = ELS_LS_RJT;
+	rjt->er_reason = reason_code;
+	rjt->er_explan = reason_code_expl;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*rjt));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a PLOGI accept response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a PLOGI LS_ACC, and send to the \c node,
+ * using the originator exchange ID ox_id.
+ *
+ * @param node Node to which the PLOGI accept is sent.
+ * @param ox_id Originator exchange ID being responded to.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+struct efct_io_s *
+efct_send_plogi_acc(struct efc_node_s *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_s *efct = node->efc->base;
+	struct efct_io_s *io = NULL;
+	struct fc_els_flogi  *plogi;
+	struct fc_els_flogi  *req = (struct fc_els_flogi *)node->service_params;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "plog_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	plogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
+	plogi->fl_cmd = ELS_LS_ACC;
+	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+	/* Set the broadcast support bit if the initiator requested it */
+	if (req->fl_csp.sp_features & cpu_to_be16(FC_SP_FT_BCAST))
+		plogi->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_BCAST);
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*plogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send an FLOGI accept response for point-to-point negotiation.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an FLOGI accept response, and send to the \c node using
+ * the originator exchange id \c ox_id. The \c s_id is used for the
+ * response frame source FC ID.
+ *
+ * @param efc Pointer to the EFC lport.
+ * @param node Node to which the FLOGI accept is sent.
+ * @param ox_id Originator exchange ID for the response.
+ * @param s_id Source FC ID to be used in the response frame.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+void *
+efct_send_flogi_p2p_acc(struct efc_lport *efc, struct efc_node_s *node,
+			u32 ox_id, u32 s_id)
+{
+	struct efct_io_s *io = NULL;
+	int rc;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_flogi  *flogi;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = NULL;
+	io->els_callback_arg = NULL;
+	io->display_name = "flogi_p2p_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els_sid.ox_id = ox_id;
+	io->iparam.els_sid.s_id = s_id;
+
+	flogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+	flogi->fl_cmd = ELS_LS_ACC;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
+
+	io->hio_type = EFCT_HW_ELS_RSP_SID;
+	rc = efct_els_send_rsp(io, sizeof(*flogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
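+/**
+ * @ingroup els_api
+ * @brief Send an FLOGI accept response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an FLOGI LS_ACC using the port's service parameters, and
+ * send to the \c node using the originator exchange ID \c ox_id. When
+ * \c is_fport is set, the F_PORT and Multiple N_PORT_ID Assignment bits
+ * are set in the common service parameters.
+ *
+ * @param node Node to which the FLOGI accept is sent.
+ * @param ox_id Originator exchange ID.
+ * @param is_fport Nonzero to respond as an F_port.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */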
+struct efct_io_s *
+efct_send_flogi_acc(struct efc_node_s *node, u32 ox_id, u32 is_fport,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_s *efct = node->efc->base;
+	struct efct_io_s *io = NULL;
+	struct fc_els_flogi  *flogi;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "flogi_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els_sid.ox_id = ox_id;
+	io->iparam.els_sid.s_id = io->node->sport->fc_id;
+
+	flogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+
+	/* Set F_port */
+	if (is_fport) {
+		/* Set F_PORT and Multiple N_PORT_ID Assignment */
+		flogi->fl_csp.sp_r_a_tov |= cpu_to_be32(3U << 28);
+	}
+
+	flogi->fl_cmd = ELS_LS_ACC;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
+
+	io->hio_type = EFCT_HW_ELS_RSP_SID;
+	rc = efct_els_send_rsp(io, sizeof(*flogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a PRLI accept response
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a PRLI LS_ACC response, and send to the \c node,
+ * using the originator ox_id exchange ID.
+ *
+ * @param node Node to which the PRLI accept is sent.
+ * @param ox_id Originator exchange ID.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *efct_send_prli_acc(struct efc_node_s *node,
+				     u32 ox_id, els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_s *efct = node->efc->base;
+	struct efct_io_s *io = NULL;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "prli_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	pp = io->els_req.virt;
+	memset(pp, 0, sizeof(*pp));
+
+	pp->prli.prli_cmd = ELS_LS_ACC;
+	pp->prli.prli_spp_len = 0x10;
+	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR | FC_SPP_RESP_ACK;
+
+	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+					(node->sport->enable_ini ?
+					 FCP_SPPF_INIT_FCN : 0) |
+					(node->sport->enable_tgt ?
+					 FCP_SPPF_TARG_FCN : 0));
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*pp));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a PRLO accept response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a PRLO LS_ACC response, and send to the \c node,
+ * using the originator exchange ID \c ox_id.
+ *
+ * @param node Node to which the PRLO accept is sent.
+ * @param ox_id Originator exchange ID.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_prlo_acc(struct efc_node_s *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_s *efct = node->efc->base;
+	struct efct_io_s *io = NULL;
+	struct {
+		struct fc_els_prlo prlo;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "prlo_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	pp = io->els_req.virt;
+	memset(pp, 0, sizeof(*pp));
+	pp->prlo.prlo_cmd = ELS_LS_ACC;
+	pp->prlo.prlo_obs = 0x10;
+	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
+
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_RESP_ACK;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*pp));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a generic LS_ACC response without a payload.
+ *
+ * <h3 class="desc">Description</h3>
+ * A generic LS_ACC response is sent to the \c node using the
+ * originator exchange ID ox_id.
+ *
+ * @param node Node to which the LS_ACC is sent.
+ * @param ox_id Originator exchange id.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+struct efct_io_s *
+efct_send_ls_acc(struct efc_node_s *node, u32 ox_id, els_cb_t cb,
+		 void *cbarg)
+{
+	int rc;
+	struct efct_s *efct = node->efc->base;
+	struct efct_io_s *io = NULL;
+	struct fc_els_ls_acc *acc;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*acc), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "ls_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	acc = io->els_req.virt;
+	memset(acc, 0, sizeof(*acc));
+
+	acc->la_cmd = ELS_LS_ACC;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*acc));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a LOGO accept response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a LOGO LS_ACC response, and send to the \c node,
+ * using the originator exchange ID \c ox_id.
+ *
+ * @param node Node to which the LOGO accept is sent.
+ * @param ox_id Originator exchange ID.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+struct efct_io_s *
+efct_send_logo_acc(struct efc_node_s *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_io_s *io = NULL;
+	struct efct_s *efct = node->efc->base;
+	struct fc_els_ls_acc *logo;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "logo_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	logo = io->els_req.virt;
+	memset(logo, 0, sizeof(*logo));
+
+	logo->la_cmd = ELS_LS_ACC;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*logo));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send an ADISC accept response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an ADISC LS_ACC, and send to the \c node, using the originator
+ * exchange ID \c ox_id.
+ *
+ * @param node Node to which the ADISC accept is sent.
+ * @param ox_id Originator exchange ID.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_send_adisc_acc(struct efc_node_s *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_io_s *io = NULL;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+	struct efct_s *efct;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "adisc_acc";
+	io->init_task_tag = ox_id;
+
+	/* Go ahead and send the ELS_ACC */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	sparams = (struct fc_els_flogi  *)node->sport->service_params;
+	adisc = io->els_req.virt;
+	memset(adisc, 0, sizeof(*adisc));
+	adisc->adisc_cmd = ELS_LS_ACC;
+	adisc->adisc_wwpn = sparams->fl_wwpn;
+	adisc->adisc_wwnn = sparams->fl_wwnn;
+	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*adisc));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
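+/**
+ * @ingroup els_api
+ * @brief Dispatch a name server CT request by command code.
+ *
+ * <h3 class="desc">Description</h3>
+ * Issues an RFT_ID, RFF_ID, or GID_PT name server request to the
+ * \c node, depending on \c cmd; unhandled commands are logged and ignored.
+ *
+ * @param efc Pointer to the EFC lport.
+ * @param node Node to which the CT request is sent.
+ * @param cmd Command code selecting the request to send.
+ * @param timeout_sec Command timeout, in seconds.
+ * @param retries Number of times to retry errors before reporting a failure.
+ *
+ * @return Returns NULL.
+ */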
+void *
+efct_els_send_ct(struct efc_lport *efc, struct efc_node_s *node, u32 cmd,
+		 u32 timeout_sec, u32 retries)
+{
+	struct efct_s *efct = efc->base;
+
+	switch (cmd) {
+	case FC_RCTL_ELS_REQ:
+		efc_log_debug(efct, "send efct_ns_send_rftid\n");
+		efct_ns_send_rftid(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case FC_NS_RFF_ID:
+		efc_log_debug(efct, "send efct_ns_send_rffid\n");
+		efct_ns_send_rffid(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case FC_NS_GID_PT:
+		efc_log_debug(efct, "send efct_ns_send_gidpt\n");
+		efct_ns_send_gidpt(node, timeout_sec, retries, NULL, NULL);
+		break;
+		break;
+	default:
+		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
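+/**
+ * @brief Fill in a common FC CT request header.
+ *
+ * <h3 class="desc">Description</h3>
+ * Initializes the directory-service CT header with the given command
+ * code and the maximum response size expressed in words.
+ *
+ * @param hdr Pointer to the CT header to initialize.
+ * @param cmd CT command code.
+ * @param max_size Maximum response payload size, in bytes.
+ *
+ * @return None.
+ */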
+static inline void fcct_build_req_header(struct fc_ct_hdr  *hdr,
+					 u16 cmd, u16 max_size)
+{
+	hdr->ct_rev = FC_CT_REV;
+	hdr->ct_fs_type = FC_FST_DIR;
+	hdr->ct_fs_subtype = FC_NS_SUBTYPE;
+	hdr->ct_options = 0;
+	hdr->ct_cmd = cpu_to_be16(cmd);
+	/* words */
+	hdr->ct_mr_size = cpu_to_be16(max_size / (sizeof(u32)));
+	hdr->ct_reason = 0;
+	hdr->ct_explan = 0;
+	hdr->ct_vendor = 0;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a RFTID CT request.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an RFTID CT request, and send to the \c node.
+ *
+ * @param node Node to which the RFTID request is sent.
+ * @param timeout_sec Time, in seconds, to wait before timing out the ELS.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+struct efct_io_s *
+efct_ns_send_rftid(struct efc_node_s *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_rft_id *rftid;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*ct) + sizeof(*rftid),
+				EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+		els->iparam.fc_ct.type = FC_TYPE_CT;
+		els->iparam.fc_ct.df_ctl = 0;
+		els->iparam.fc_ct.timeout = timeout_sec;
+
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rftid";
+
+		ct = els->els_req.virt;
+		memset(ct, 0, sizeof(*ct));
+		fcct_build_req_header(ct, FC_NS_RFT_ID, sizeof(*rftid));
+
+		rftid = els->els_req.virt + sizeof(*ct);
+		memset(rftid, 0, sizeof(*rftid));
+		hton24(rftid->fr_fid.fp_fid, node->rnode.sport->fc_id);
+		rftid->fr_fts.ff_type_map[FC_TYPE_FCP / FC_NS_BPW] =
+			cpu_to_be32(1 << (FC_TYPE_FCP % FC_NS_BPW));
+
+		els->hio_type = EFCT_HW_FC_CT;
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a RFFID CT request.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct an RFFID CT request, and send to the \c node.
+ *
+ * @param node Node to which the RFFID request is sent.
+ * @param timeout_sec Time, in seconds, to wait before timing out the ELS.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+struct efct_io_s *
+efct_ns_send_rffid(struct efc_node_s *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els;
+	struct efct_s *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_rff_id *rffid;
+	u32 size = 0;
+
+	node_els_trace();
+
+	size = sizeof(*ct) + sizeof(*rffid);
+
+	els = efct_els_io_alloc(node, size, EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+		els->iparam.fc_ct.type = FC_TYPE_CT;
+		els->iparam.fc_ct.df_ctl = 0;
+		els->iparam.fc_ct.timeout = timeout_sec;
+
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rffid";
+		ct = els->els_req.virt;
+
+		memset(ct, 0, sizeof(*ct));
+		fcct_build_req_header(ct, FC_NS_RFF_ID, sizeof(*rffid));
+
+		rffid = els->els_req.virt + sizeof(*ct);
+		memset(rffid, 0, sizeof(*rffid));
+
+		hton24(rffid->fr_fid.fp_fid, node->rnode.sport->fc_id);
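+		/* Advertise FCP initiator/target features of this sport */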
+		if (node->sport->enable_ini)
+			rffid->fr_feat |= FCP_FEAT_INIT;
+		if (node->sport->enable_tgt)
+			rffid->fr_feat |= FCP_FEAT_TARG;
+		rffid->fr_type = FC_TYPE_FCP;
+
+		els->hio_type = EFCT_HW_FC_CT;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a GIDPT CT request.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a GIDPT CT request, and send to the \c node.
+ *
+ * @param node Node to which the GIDPT request is sent.
+ * @param timeout_sec Time, in seconds, to wait before timing out the ELS.
+ * @param retries Number of times to retry errors before reporting a failure.
+ * @param cb Callback function.
+ * @param cbarg Callback function argument.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_ns_send_gidpt(struct efc_node_s *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io_s *els = NULL;
+	struct efct_s *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_gid_pt *gidpt;
+	u32 size = 0;
+
+	node_els_trace();
+
+	size = sizeof(*ct) + sizeof(*gidpt);
+	els = efct_els_io_alloc_size(node, size,
+				     EFCT_ELS_GID_PT_RSP_LEN,
+				   EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+
+	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->iparam.fc_ct.type = FC_TYPE_CT;
+	els->iparam.fc_ct.df_ctl = 0;
+	els->iparam.fc_ct.timeout = timeout_sec;
+
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "gidpt";
+
+	ct = els->els_req.virt;
+
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(ct, FC_NS_GID_PT, sizeof(*gidpt));
+
+	gidpt = els->els_req.virt + sizeof(*ct);
+	memset(gidpt, 0, sizeof(*gidpt));
+	gidpt->fn_pt_type = FC_TYPE_FCP;
+
+	els->hio_type = EFCT_HW_FC_CT;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a BA_ACC given the request's FC header
+ *
+ * <h3 class="desc">Description</h3>
+ * Using the S_ID/D_ID from the request's FC header, generate a BA_ACC.
+ *
+ * @param efc Pointer to the libefc port object.
+ * @param node Node to which the BA_ACC is sent.
+ * @param hdr Pointer to the FC header of the request being accepted.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+void *
+efct_bls_send_acc_hdr(struct efc_lport *efc, struct efc_node_s *node,
+		      struct fc_frame_header *hdr)
+{
+	struct efct_io_s *io = NULL;
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	u32 d_id = ntoh24(hdr->fh_d_id);
+
+	io = efct_scsi_io_alloc(node, EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return io;
+	}
+
+	return efct_bls_send_acc(io, d_id, ox_id, rx_id);
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a BA_RJT given the request's FC header
+ *
+ * <h3 class="desc">Description</h3>
+ * Using the S_ID/D_ID from the request's FC header, generate a BA_RJT.
+ *
+ * @param io Pointer to a SCSI IO object.
+ * @param hdr Pointer to the FC header.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+struct efct_io_s *
+efct_bls_send_rjt_hdr(struct efct_io_s *io, struct fc_frame_header *hdr)
+{
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	u32 d_id = ntoh24(hdr->fh_d_id);
+
+	return efct_bls_send_rjt(io, d_id, ox_id, rx_id);
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a BLS BA_RJT response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a BLS BA_RJT response, and send to the \c node.
+ *
+ * @param io Pointer to a SCSI IO object.
+ * @param s_id S_ID to use for the response.
+ * If U32_MAX, then use our SLI port (sport) S_ID.
+ * @param ox_id Originator exchange ID.
+ * @param rx_id Responder exchange ID.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+static struct efct_io_s *
+efct_bls_send_rjt(struct efct_io_s *io, u32 s_id,
+		  u16 ox_id, u16 rx_id)
+{
+	struct efc_node_s *node = io->node;
+	int rc;
+	struct fc_ba_rjt *acc;
+	struct efct_s *efct;
+
+	efct = node->efc->base;
+
+	if (node->rnode.sport->fc_id == s_id)
+		s_id = U32_MAX;
+
+	/* fill out generic fields */
+	io->efct = efct;
+	io->node = node;
+	io->cmd_tgt = true;
+
+	/* fill out BLS Response-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "ba_rjt";
+	io->hio_type = EFCT_HW_BLS_RJT;
+	io->init_task_tag = ox_id;
+
+	/* fill out iparam fields */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls_sid.ox_id = ox_id;
+	io->iparam.bls_sid.rx_id = rx_id;
+
+	acc = (void *)io->iparam.bls_sid.payload;
+
+	memset(io->iparam.bls_sid.payload, 0,
+	       sizeof(io->iparam.bls_sid.payload));
+	acc->br_reason = ELS_RJT_UNAB;
+	acc->br_explan = ELS_EXPL_NONE;
+
+	rc = efct_scsi_io_dispatch(io, efct_bls_send_rjt_cb);
+	if (rc) {
+		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
+		efct_scsi_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+/**
+ * @ingroup els_api
+ * @brief Send a BLS BA_ACC response.
+ *
+ * <h3 class="desc">Description</h3>
+ * Construct a BLS BA_ACC response, and send to the \c node.
+ *
+ * @param io Pointer to a SCSI IO object.
+ * @param s_id S_ID to use for the response.
+ * If U32_MAX, then use our SLI port (sport) S_ID.
+ * @param ox_id Originator exchange ID.
+ * @param rx_id Responder exchange ID.
+ *
+ * @return Returns pointer to IO object, or NULL if error.
+ */
+
+static struct efct_io_s *
+efct_bls_send_acc(struct efct_io_s *io, u32 s_id,
+		  u16 ox_id, u16 rx_id)
+{
+	struct efc_node_s *node = io->node;
+	int rc;
+	struct fc_ba_acc *acc;
+	struct efct_s *efct;
+
+	efct = node->efc->base;
+
+	if (node->rnode.sport->fc_id == s_id)
+		s_id = U32_MAX;
+
+	/* fill out generic fields */
+	io->efct = efct;
+	io->node = node;
+	io->cmd_tgt = true;
+
+	/* fill out BLS Response-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "ba_acc";
+	io->hio_type = EFCT_HW_BLS_ACC_SID;
+	io->init_task_tag = ox_id;
+
+	/* fill out iparam fields */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls_sid.s_id = s_id;
+	io->iparam.bls_sid.ox_id = ox_id;
+	io->iparam.bls_sid.rx_id = rx_id;
+
+	acc = (void *)io->iparam.bls_sid.payload;
+
+	memset(io->iparam.bls_sid.payload, 0,
+	       sizeof(io->iparam.bls_sid.payload));
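+	/* A high SEQ_CNT of 0xffff indicates the BA_ACC covers
+	 * the entire exchange
+	 */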
+	acc->ba_ox_id = io->iparam.bls_sid.ox_id;
+	acc->ba_rx_id = io->iparam.bls_sid.rx_id;
+	acc->ba_high_seq_cnt = U16_MAX;
+
+	rc = efct_scsi_io_dispatch(io, efct_bls_send_acc_cb);
+	if (rc) {
+		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
+		efct_scsi_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+/**
+ * @brief Handle the BLS accept completion.
+ *
+ * <h3 class="desc">Description</h3>
+ * Upon completion of sending a BA_ACC, this callback is invoked by the HW.
+ *
+ * @param hio Pointer to the HW IO object.
+ * @param rnode Pointer to the HW remote node.
+ * @param length Length of the response payload, in bytes.
+ * @param status Completion status.
+ * @param ext_status Extended completion status.
+ * @param app Callback private argument.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+static int efct_bls_send_acc_cb(struct efct_hw_io_s *hio,
+				struct efc_remote_node_s *rnode, u32 length,
+		int status, u32 ext_status, void *app)
+{
+	struct efct_io_s *io = app;
+
+	efct_scsi_io_free(io);
+	return 0;
+}
+
+/**
+ * @brief Handle the BLS reject completion.
+ *
+ * <h3 class="desc">Description</h3>
+ * Upon completion of sending a BA_RJT, this callback is invoked by the HW.
+ *
+ * @param hio Pointer to the HW IO object.
+ * @param rnode Pointer to the HW remote node.
+ * @param length Length of the response payload, in bytes.
+ * @param status Completion status.
+ * @param ext_status Extended completion status.
+ * @param app Callback private argument.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+static int efct_bls_send_rjt_cb(struct efct_hw_io_s *hio,
+				struct efc_remote_node_s *rnode, u32 length,
+		int status, u32 ext_status, void *app)
+{
+	struct efct_io_s *io = app;
+
+	efct_scsi_io_free(io);
+	return 0;
+}
+
+/**
+ * @brief ELS abort callback.
+ *
+ * <h3 class="desc">Description</h3>
+ * This callback is invoked by the HW when an ELS IO is aborted.
+ *
+ * @param hio Pointer to the HW IO object.
+ * @param rnode Pointer to the HW remote node.
+ * @param length Length of the response payload, in bytes.
+ * @param status Completion status.
+ * @param ext_status Extended completion status.
+ * @param app Callback private argument.
+ *
+ * @return Returns 0 on success; or a negative error value on failure.
+ */
+
+static int
+efct_els_abort_cb(struct efct_hw_io_s *hio, struct efc_remote_node_s *rnode,
+		  u32 length, int status, u32 ext_status,
+		 void *app)
+{
+	struct efct_io_s *els;
+	struct efct_io_s *abort_io = NULL; /* IO structure used to abort ELS */
+	struct efct_s *efct;
+
+	abort_io = app;
+	els = abort_io->io_to_abort;
+
+	if (!els || !els->node || !els->node->efc)
+		return -1;
+
+	efct = els->node->efc->base;
+
+	if (status != 0)
+		efc_log_warn(efct, "status x%x ext x%x\n", status, ext_status);
+
+	/* now free the abort IO */
+	efct_io_pool_io_free(efct->xport->io_pool, abort_io);
+
+	/* send completion event to indicate abort process is complete
+	 * Note: The ELS SM will already be receiving
+	 * ELS_REQ_OK/FAIL/RJT/ABORTED
+	 */
+	if (els->state == EFCT_ELS_REQ_ABORTED) {
+		/* completion for ELS that was aborted */
+		efct_els_abort_cleanup(els);
+	} else {
+		/* completion for abort was received first,
+		 * transition to wait for req cmpl
+		 */
+		els->state = EFCT_ELS_ABORT_IO_COMPL;
+	}
+
+	/* done with ELS IO to abort */
+	kref_put(&els->ref, els->release);
+	return 0;
+}
+
+/**
+ * @brief Abort an ELS IO.
+ *
+ * <h3 class="desc">Description</h3>
+ * The ELS IO is aborted by making a HW abort IO request,
+ * optionally requesting that an ABTS is sent.
+ *
+ * \b Note: This function allocates a HW IO, and associates the HW IO
+ * with the ELS IO that it is aborting. It does not associate
+ * the HW IO with the node directly, like for ELS requests. The
+ * abort completion is propagated up to the node once the
+ * original WQE and the abort WQE are complete (the original WQE
+ * completion is not propagated up to node).
+ *
+ * @param els Pointer to the ELS IO.
+ * @param send_abts Boolean to indicate if hardware will
+ *	automatically generate an ABTS.
+ *
+ * @return Returns pointer to Abort IO object, or NULL if error.
+ */
+
+static struct efct_io_s *
+efct_els_abort_io(struct efct_io_s *els, bool send_abts)
+{
+	struct efct_s *efct;
+	struct efct_xport_s *xport;
+	int rc;
+	struct efct_io_s *abort_io = NULL;
+
+	efct = els->node->efc->base;
+	xport = efct->xport;
+
+	/* take a reference on IO being aborted */
+	if ((kref_get_unless_zero(&els->ref) == 0)) {
+		/* command no longer active */
+		efc_log_debug(efct, "els no longer active\n");
+		return NULL;
+	}
+
+	/* allocate IO structure to send abort */
+	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!abort_io) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+	} else {
+		/* set generic fields */
+		abort_io->efct = efct;
+		abort_io->node = els->node;
+		abort_io->cmd_ini = true;
+
+		/* set type and ABORT-specific fields */
+		abort_io->io_type = EFCT_IO_TYPE_ABORT;
+		abort_io->display_name = "abort_els";
+		abort_io->io_to_abort = els;
+		abort_io->send_abts = send_abts;
+
+		/* now dispatch IO */
+		rc = efct_scsi_io_dispatch_abort(abort_io, efct_els_abort_cb);
+		if (rc) {
+			efc_log_err(efct,
+				     "efct_scsi_io_dispatch failed: %d\n", rc);
+			efct_io_pool_io_free(efct->xport->io_pool, abort_io);
+			abort_io = NULL;
+		}
+	}
+
+	/* if something failed, put reference on ELS to abort */
+	if (!abort_io)
+		kref_put(&els->ref, els->release);
+	return abort_io;
+}
+
+/**
+ * @brief Cleanup an ELS IO
+ *
+ * <h3 class="desc">Description</h3>
+ * Cleans up an ELS IO by posting the requested event to the
+ * owning node object; invoking the callback, if one is
+ * provided; and then freeing the ELS IO object.
+ *
+ * @param els Pointer to the ELS IO.
+ * @param node_evt Node SM event to post.
+ * @param arg Node SM event argument.
+ *
+ * @return None.
+ */
+
+void
+efct_els_io_cleanup(struct efct_io_s *els,
+		    enum efc_hw_node_els_event_e node_evt, void *arg)
+{
+	/* don't want further events that could come; e.g. abort requests
+	 * from the node state machine; thus, disable state machine
+	 */
+	els->els_req_free = true;
+	efc_node_post_els_resp(els->node, node_evt, arg);
+
+	/* If this IO has a callback, invoke it */
+	if (els->els_callback) {
+		(*els->els_callback)(els->node, arg,
+				    els->els_callback_arg);
+	}
+	efct_els_io_free(els);
+}
+
+/**
+ * @brief cleanup ELS after abort
+ *
+ * @param els ELS IO to cleanup
+ *
+ * @return Returns None.
+ */
+
+static void
+efct_els_abort_cleanup(struct efct_io_s *els)
+{
+	/* handle event for ABORT_WQE
+	 * whatever state the ELS happened to be in, propagate the aborted event
+	 * up to node state machine in lieu of EFC_HW_SRRS_ELS_* event
+	 */
+	struct efc_node_cb_s cbdata;
+
+	cbdata.status = 0;
+	cbdata.ext_status = 0;
+	cbdata.els_rsp = els->els_rsp;
+	els_io_printf(els, "Request aborted\n");
+	efct_els_io_cleanup(els, EFC_HW_ELS_REQ_ABORTED, &cbdata);
+}
+
+/**
+ * @brief return TRUE if given ELS list is empty (while taking proper locks)
+ *
+ * Test if given ELS list is empty while holding the node->active_ios_lock.
+ *
+ * @param node pointer to node object
+ * @param list pointer to list
+ *
+ * @return TRUE if els_io_list is empty
+ */
+
+int
+efct_els_io_list_empty(struct efc_node_s *node, struct list_head *list)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		empty = list_empty(list);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+/**
+ * @brief Handle CT send response completion
+ *
+ * Called when CT response completes, free IO
+ *
+ * @param hio Pointer to the HW IO context that completed.
+ * @param rnode Pointer to the remote node.
+ * @param length Length of the returned payload data.
+ * @param status Status of the completion.
+ * @param ext_status Extended status of the completion.
+ * @param arg Application-specific argument (generally a
+ * pointer to the ELS IO context).
+ *
+ * @return returns 0
+ */
+static int
+efct_ct_acc_cb(struct efct_hw_io_s *hio, struct efc_remote_node_s *rnode,
+	       u32 length, int status, u32 ext_status,
+	      void *arg)
+{
+	struct efct_io_s *io = arg;
+
+	efct_els_io_free(io);
+
+	return 0;
+}
+
+/**
+ * @brief Send CT response
+ *
+ * Sends a CT response frame with payload
+ *
+ * @param efc Pointer to the libefc port object.
+ * @param node Node to which the CT response is sent.
+ * @param ox_id Originator exchange ID
+ * @param ct_hdr Pointer to the CT IU
+ * @param cmd_rsp_code CT response code
+ * @param reason_code Reason code
+ * @param reason_code_explanation Reason code explanation
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+int
+efct_send_ct_rsp(struct efc_lport *efc, struct efc_node_s *node, __be16 ox_id,
+		 struct fc_ct_hdr  *ct_hdr, u32 cmd_rsp_code,
+		u32 reason_code, u32 reason_code_explanation)
+{
+	struct efct_io_s *io = NULL;
+	struct fc_ct_hdr  *rsp = NULL;
+
+	io = efct_els_io_alloc(node, 256, EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return -1;
+	}
+
+	rsp = io->els_rsp.virt;
+
+	*rsp = *ct_hdr;
+
+	fcct_build_req_header(rsp, cmd_rsp_code, 0);
+	rsp->ct_reason = reason_code;
+	rsp->ct_explan = reason_code_explanation;
+
+	io->display_name = "ct response";
+	io->init_task_tag = ox_id;
+	io->wire_len += sizeof(*rsp);
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+
+	io->io_type = EFCT_IO_TYPE_CT_RESP;
+	io->hio_type = EFCT_HW_FC_CT_RSP;
+	io->iparam.fc_ct_rsp.ox_id = cpu_to_be16(ox_id);
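+	/* R_CTL 0x03: device data, solicited control (CT response) */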
+	io->iparam.fc_ct_rsp.r_ctl = 3;
+	io->iparam.fc_ct_rsp.type = FC_TYPE_CT;
+	io->iparam.fc_ct_rsp.df_ctl = 0;
+	io->iparam.fc_ct_rsp.timeout = 5;
+
+	if (efct_scsi_io_dispatch(io, efct_ct_acc_cb) < 0) {
+		efct_els_io_free(io);
+		return -1;
+	}
+	return 0;
+}
+
+static void
+efct_els_retry(struct efct_io_s *els)
+{
+	struct efct_s *efct;
+	struct efc_node_cb_s cbdata;
+
+	efct = els->node->efc->base;
+	cbdata.status = INT_MAX;
+	cbdata.ext_status = INT_MAX;
+	cbdata.els_rsp = els->els_rsp;
+
+	if (!els->els_retries_remaining) {
+		efc_log_err(efct, "ELS retries exhausted\n");
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+				    &cbdata);
+		return;
+	}
+
+	els->els_retries_remaining--;
+	/* Free the HW IO so that a new oxid is used. */
+	if (els->hio) {
+		efct_hw_io_free(&efct->hw, els->hio);
+		els->hio = NULL;
+	}
+
+	efct_els_send_req(els->node, els);
+}
+
+/**
+ * @brief Handle delay retry timeout
+ *
+ * Callback is invoked when the delay retry timer expires.
+ *
+ * @param t Pointer to the delay_timer embedded in the ELS IO object.
+ *
+ * @return none
+ */
+static void
+efct_els_delay_timer_cb(struct timer_list *t)
+{
+	struct efct_io_s *els = from_timer(els, t, delay_timer);
+	struct efc_node_s *node = els->node;
+
+	/* Retry delay timer expired, retry the ELS request,
+	 * Free the HW IO so that a new oxid is used.
+	 */
+	if (els->state == EFCT_ELS_REQUEST_DELAY_ABORT) {
+		node->els_req_cnt++;
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    NULL);
+	} else {
+		efct_els_retry(els);
+	}
+}
diff --git a/drivers/scsi/elx/efct/efct_els.h b/drivers/scsi/elx/efct/efct_els.h
new file mode 100644
index 000000000000..19fbfcb77f78
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_els.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_ELS_H__)
+#define __EFCT_ELS_H__
+
+enum efct_els_role_e {
+	EFCT_ELS_ROLE_ORIGINATOR,
+	EFCT_ELS_ROLE_RESPONDER,
+};
+
+void _efct_els_io_free(struct kref *arg);
+extern struct efct_io_s *
+efct_els_io_alloc(struct efc_node_s *node, u32 reqlen,
+		  enum efct_els_role_e role);
+extern struct efct_io_s *
+efct_els_io_alloc_size(struct efc_node_s *node, u32 reqlen,
+		       u32 rsplen,
+				       enum efct_els_role_e role);
+void efct_els_io_free(struct efct_io_s *els);
+
+extern void *
+efct_els_req_send(struct efc_lport *efc, struct efc_node_s *node,
+		  u32 cmd, u32 timeout_sec, u32 retries);
+extern void *
+efct_els_send_ct(struct efc_lport *efc, struct efc_node_s *node,
+		 u32 cmd, u32 timeout_sec, u32 retries);
+extern void *
+efct_els_resp_send(struct efc_lport *efc, struct efc_node_s *node,
+		   u32 cmd, u16 ox_id);
+void
+efct_els_abort(struct efct_io_s *els, struct efc_node_cb_s *arg);
+/* ELS command send */
+typedef void (*els_cb_t)(struct efc_node_s *node,
+			 struct efc_node_cb_s *cbdata, void *arg);
+extern struct efct_io_s *
+efct_send_plogi(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_flogi(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_fdisc(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_prli(struct efc_node_s *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_prlo(struct efc_node_s *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_logo(struct efc_node_s *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_adisc(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_pdisc(struct efc_node_s *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_scr(struct efc_node_s *node, u32 timeout_sec,
+	      u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_rrq(struct efc_node_s *node, u32 timeout_sec,
+	      u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_ns_send_rftid(struct efc_node_s *node,
+		   u32 timeout_sec,
+		  u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_ns_send_rffid(struct efc_node_s *node,
+		   u32 timeout_sec,
+		  u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_ns_send_gidpt(struct efc_node_s *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_rscn(struct efc_node_s *node, u32 timeout_sec,
+	       u32 retries, void *port_ids,
+	      u32 port_ids_count, els_cb_t cb, void *cbarg);
+extern void
+efct_els_io_cleanup(struct efct_io_s *els, enum efc_hw_node_els_event_e,
+		    void *arg);
+
+/* ELS acc send */
+extern struct efct_io_s *
+efct_send_ls_acc(struct efc_node_s *node, u32 ox_id,
+		 els_cb_t cb, void *cbarg);
+
+extern void *
+efct_send_ls_rjt(struct efc_lport *efc, struct efc_node_s *node, u32 ox_id,
+		 u32 reason_code, u32 reason_code_expl,
+		u32 vendor_unique);
+extern void *
+efct_send_flogi_p2p_acc(struct efc_lport *efc, struct efc_node_s *node,
+			u32 ox_id, u32 s_id);
+extern struct efct_io_s *
+efct_send_flogi_acc(struct efc_node_s *node, u32 ox_id,
+		    u32 is_fport, els_cb_t cb,
+		   void *cbarg);
+extern struct efct_io_s *
+efct_send_plogi_acc(struct efc_node_s *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_prli_acc(struct efc_node_s *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_logo_acc(struct efc_node_s *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_prlo_acc(struct efc_node_s *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io_s *
+efct_send_adisc_acc(struct efc_node_s *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg);
+
+/* BLS acc send */
+extern void *
+efct_bls_send_acc_hdr(struct efc_lport *efc, struct efc_node_s *node,
+		      struct fc_frame_header *hdr);
+/* BLS rjt send */
+extern struct efct_io_s *
+efct_bls_send_rjt_hdr(struct efct_io_s *io, struct fc_frame_header *hdr);
+
+/* Misc */
+extern int
+efct_els_io_list_empty(struct efc_node_s *node, struct list_head *list);
+
+/* CT */
+extern int
+efct_send_ct_rsp(struct efc_lport *efc, struct efc_node_s *node, __be16 ox_id,
+		 struct fc_ct_hdr *ct_hdr,
+		u32 cmd_rsp_code, u32 reason_code,
+		u32 reason_code_explanation);
+
+#endif /* __EFCT_ELS_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 23/32] elx: efct: SCSI IO handling routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (21 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 22/32] elx: efct: Extended link Service IO handling James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 24/32] elx: efct: LIO backend interface routines James Smart
                   ` (9 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to allocate, build, and send SCSI transport IOs.
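
For reference, a minimal sketch of the alloc/dispatch/free flow these
routines provide (hypothetical caller; my_done_cb is illustrative only):

	struct efct_io_s *io;

	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
	if (!io)
		return -ENOMEM;

	/* caller fills io->hio_type, io->wire_len, io->sgl, io->iparam */

	if (efct_scsi_io_dispatch(io, my_done_cb)) {
		efct_scsi_io_free(io);
		return -EIO;
	}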

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_scsi.c | 1970 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_scsi.h |  401 ++++++++
 2 files changed, 2371 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h

diff --git a/drivers/scsi/elx/efct/efct_scsi.c b/drivers/scsi/elx/efct/efct_scsi.c
new file mode 100644
index 000000000000..349a6610ad0b
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.c
@@ -0,0 +1,1970 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+#include "efct_utils.h"
+#include "efct_hw.h"
+
+#define enable_tsend_auto_resp(efct)	1
+#define enable_treceive_auto_resp(efct)	0
+
+#define SCSI_IOFMT "[%04x][i:%04x t:%04x h:%04x]"
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, "[%s]" SCSI_IOFMT fmt, \
+		io->node->display_name, io->instance_index,\
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
+
+#define scsi_io_trace(io, fmt, ...) \
+	do { \
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(io->efct)) \
+			scsi_io_printf(io, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+static int
+efct_target_send_bls_resp(struct efct_io_s *, efct_scsi_io_cb_t, void *);
+static void
+efct_scsi_io_free_ovfl(struct efct_io_s *);
+static u32
+efct_scsi_count_sgls(struct efct_hw_dif_info_s *, struct efct_scsi_sgl_s *,
+		     u32);
+static int
+efct_scsi_io_dispatch_hw_io(struct efct_io_s *, struct efct_hw_io_s *);
+static int
+efct_scsi_io_dispatch_no_hw_io(struct efct_io_s *);
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Returns a big-endian 32-bit value given a pointer.
+ *
+ * @param p Pointer to the 32-bit big-endian location.
+ *
+ * @return Returns the byte-swapped 32-bit value.
+ */
+
+static inline u32
+efct_fc_getbe32(void *p)
+{
+	return be32_to_cpu(*((u32 *)p));
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Enable IO allocation.
+ *
+ * @par Description
+ * The SCSI and Transport IO allocation functions are enabled.
+ * If the allocation functions are not enabled, then calls to
+ * efct_scsi_io_alloc() (and efct_els_io_alloc() for FC) will fail.
+ *
+ * @param node Pointer to node object.
+ *
+ * @return None.
+ */
+void
+efct_scsi_io_alloc_enable(struct efc_lport *efc, struct efc_node_s *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		node->io_alloc_enabled = true;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Disable IO allocation
+ *
+ * @par Description
+ * The SCSI and Transport IO allocation functions are disabled.
+ * If the allocation functions are not enabled, then calls to
+ * efct_scsi_io_alloc() (and efct_els_io_alloc() for FC) will fail.
+ *
+ * @param node Pointer to node object
+ *
+ * @return None.
+ */
+void
+efct_scsi_io_alloc_disable(struct efc_lport *efc, struct efc_node_s *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		node->io_alloc_enabled = false;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Allocate a SCSI IO context.
+ *
+ * @par Description
+ * A SCSI IO context is allocated and associated with a @c node.
+ * This function is called by an initiator-client when issuing SCSI
+ * commands to remote target devices. On completion, efct_scsi_io_free()
+ * is called.
+ * The returned struct efct_io_s structure has an element of type
+ * struct efct_scsi_ini_io_s named "ini_io" that is declared and used
+ * by an initiator-client for private information.
+ *
+ * @param node Pointer to the associated node structure.
+ * @param role Role for IO (originator/responder).
+ *
+ * @return Returns the pointer to the IO context, or NULL.
+ *
+ */
+
+struct efct_io_s *
+efct_scsi_io_alloc(struct efc_node_s *node, enum efct_scsi_io_role_e role)
+{
+	struct efct_s *efct;
+	struct efc_lport *efcp;
+	struct efct_xport_s *xport;
+	struct efct_io_s *io;
+	unsigned long flags = 0;
+
+	efcp = node->efc;
+	efct = efcp->base;
+
+	xport = efct->xport;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+
+		if (!node->io_alloc_enabled) {
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		io = efct_io_pool_io_alloc(efct->xport->io_pool);
+		if (!io) {
+			atomic_add_return(1, &xport->io_alloc_failed_count);
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		/* initialize refcount */
+		kref_init(&io->ref);
+		io->release = _efct_scsi_io_free;
+
+		if (io->hio) {
+			efc_log_err(efct,
+				     "assertion failed: io->hio is not NULL\n");
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		/* set generic fields */
+		io->efct = efct;
+		io->node = node;
+
+		/* set type and name */
+		io->io_type = EFCT_IO_TYPE_IO;
+		io->display_name = "scsi_io";
+
+		switch (role) {
+		case EFCT_SCSI_IO_ROLE_ORIGINATOR:
+			io->cmd_ini = true;
+			io->cmd_tgt = false;
+			break;
+		case EFCT_SCSI_IO_ROLE_RESPONDER:
+			io->cmd_ini = false;
+			io->cmd_tgt = true;
+			break;
+		}
+
+		/* Add to node's active_ios list */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &node->active_ios);
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	return io;
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Free a SCSI IO context (internal).
+ *
+ * @par Description
+ * The IO context previously allocated using efct_scsi_io_alloc()
+ * is freed. This is called from within the transport layer,
+ * when the reference count goes to zero.
+ *
+ * @param arg Pointer to the IO context.
+ *
+ * @return None.
+ */
+void
+_efct_scsi_io_free(struct kref *arg)
+{
+	struct efct_io_s *io = container_of(arg, struct efct_io_s, ref);
+	struct efct_s *efct = io->efct;
+	struct efc_node_s *node = io->node;
+	int send_empty_event;
+	unsigned long flags = 0;
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+
+	if (io->io_free) {
+		efc_log_err(efct, "IO already freed.\n");
+		return;
+	}
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		list_del(&io->list_entry);
+		send_empty_event = (!node->io_alloc_enabled) &&
+					list_empty(&node->active_ios);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	if (send_empty_event)
+		efc_scsi_io_list_empty(node->efc, node);
+
+	io->node = NULL;
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Free a SCSI IO context.
+ *
+ * @par Description
+ * The IO context previously allocated using efct_scsi_io_alloc() is freed.
+ *
+ * @param io Pointer to the IO context.
+ *
+ * @return None.
+ */
+void
+efct_scsi_io_free(struct efct_io_s *io)
+{
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	WARN_ON(!refcount_read(&io->ref.refcount));
+	kref_put(&io->ref, io->release);
+}
+
+/**
+ * @brief Target response completion callback.
+ *
+ * @par Description
+ * Function is called upon the completion of a target IO request.
+ *
+ * @param hio Pointer to the HW IO structure.
+ * @param rnode Remote node associated with the IO that is completing.
+ * @param length Length of the response payload.
+ * @param status Completion status.
+ * @param ext_status Extended completion status.
+ * @param app Application-specific data (generally a pointer to
+ * the IO context).
+ *
+ * @return None.
+ */
+
+static void
+efct_target_io_cb(struct efct_hw_io_s *hio, struct efc_remote_node_s *rnode,
+		  u32 length, int status, u32 ext_status, void *app)
+{
+	struct efct_io_s *io = app;
+	struct efct_s *efct;
+	enum efct_scsi_io_status_e scsi_stat = EFCT_SCSI_STATUS_GOOD;
+
+	if (!io || !io->efct) {
+		pr_err("%s: IO can not be NULL\n", __func__);
+		return;
+	}
+
+	scsi_io_trace(io, "status x%x ext_status x%x\n", status, ext_status);
+
+	efct = io->efct;
+
+	efct_scsi_io_free_ovfl(io);
+
+	io->transferred += length;
+
+	/* Call target server completion */
+	if (io->scsi_tgt_cb) {
+		efct_scsi_io_cb_t cb = io->scsi_tgt_cb;
+		u32 flags = 0;
+
+		/* Clear the callback before invoking the callback */
+		io->scsi_tgt_cb = NULL;
+
+		/* if status was good, and auto-good-response was set,
+		 * then callback target-server with IO_CMPL_RSP_SENT,
+		 * otherwise send IO_CMPL
+		 */
+		if (status == 0 && io->auto_resp)
+			flags |= EFCT_SCSI_IO_CMPL_RSP_SENT;
+		else
+			flags |= EFCT_SCSI_IO_CMPL;
+
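+		/* Map the SLI-4 completion status to a SCSI status */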
+		switch (status) {
+		case SLI4_FC_WCQE_STATUS_SUCCESS:
+			scsi_stat = EFCT_SCSI_STATUS_GOOD;
+			break;
+		case SLI4_FC_WCQE_STATUS_DI_ERROR:
+			if (ext_status & SLI4_FC_DI_ERROR_GE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_GUARD_ERR;
+			else if (ext_status & SLI4_FC_DI_ERROR_AE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR;
+			else if (ext_status & SLI4_FC_DI_ERROR_RE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR;
+			else
+				scsi_stat = EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR;
+			break;
+		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+			switch (ext_status) {
+			case SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET:
+			case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
+				scsi_stat = EFCT_SCSI_STATUS_ABORTED;
+				break;
+			case SLI4_FC_LOCAL_REJECT_INVALID_RPI:
+				scsi_stat = EFCT_SCSI_STATUS_NEXUS_LOST;
+				break;
+			case SLI4_FC_LOCAL_REJECT_NO_XRI:
+				scsi_stat = EFCT_SCSI_STATUS_NO_IO;
+				break;
+			default:
+				/*we have seen 0x0d(TX_DMA_FAILED err)*/
+				scsi_stat = EFCT_SCSI_STATUS_ERROR;
+				break;
+			}
+			break;
+
+		case SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT:
+			/* target IO timed out */
+			scsi_stat = EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED;
+			break;
+
+		case SLI4_FC_WCQE_STATUS_SHUTDOWN:
+			/* Target IO cancelled by HW */
+			scsi_stat = EFCT_SCSI_STATUS_SHUTDOWN;
+			break;
+
+		default:
+			scsi_stat = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+
+		cb(io, scsi_stat, flags, io->scsi_tgt_cb_arg);
+	}
+	efct_scsi_check_pending(efct);
+}
+
+/**
+ * @brief Return count of SGE's required for request
+ *
+ * @par Description
+ * An accurate count of SGEs is computed and returned.
+ *
+ * @param hw_dif Pointer to HW dif information.
+ * @param sgl Pointer to SGL from back end.
+ * @param sgl_count Count of SGEs in SGL.
+ *
+ * @return Count of SGEs.
+ */
+static u32
+efct_scsi_count_sgls(struct efct_hw_dif_info_s *hw_dif,
+		     struct efct_scsi_sgl_s *sgl, u32 sgl_count)
+{
+	u32 count = 0;
+	u32 i;
+
+	/* Convert DIF Information */
+	if (hw_dif->dif_oper != EFCT_HW_DIF_OPER_DISABLED) {
+		/* If we're not DIF separate, then emit a seed SGE */
+		if (!hw_dif->dif_separate)
+			count++;
+
+		for (i = 0; i < sgl_count; i++) {
+			/* If DIF is enabled, and DIF is separate,
+			 * then append a SEED then DIF SGE
+			 */
+			if (hw_dif->dif_separate)
+				count += 2;
+
+			count++;
+		}
+	} else {
+		count = sgl_count;
+	}
+	return count;
+}
+
+static int
+efct_scsi_build_sgls(struct efct_hw_s *hw, struct efct_hw_io_s *hio,
+		     struct efct_hw_dif_info_s *hw_dif,
+		struct efct_scsi_sgl_s *sgl, u32 sgl_count,
+		enum efct_hw_io_type_e type)
+{
+	int rc;
+	u32 i;
+	struct efct_s *efct = hw->os;
+	u32 blocksize = 0;
+	u32 blockcount;
+
+	/* Initialize HW SGL */
+	rc = efct_hw_io_init_sges(hw, hio, type);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_io_init_sges failed: %d\n", rc);
+		return -1;
+	}
+
+	/* Convert DIF Information */
+	if (hw_dif->dif_oper != EFCT_HW_DIF_OPER_DISABLED) {
+		/* If we're not DIF separate, then emit a seed SGE */
+		if (!hw_dif->dif_separate) {
+			rc = efct_hw_io_add_seed_sge(hw, hio, hw_dif);
+			if (rc)
+				return rc;
+		}
+
+		/* if we are doing DIF separate, then figure out the
+		 * block size so that we can update the ref tag in the
+		 * DIF seed SGE. Also verify that the sgl lengths are
+		 * all multiples of the blocksize
+		 */
+		if (hw_dif->dif_separate) {
+			switch (hw_dif->blk_size) {
+			case EFCT_HW_DIF_BK_SIZE_512:
+				blocksize = 512;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_1024:
+				blocksize = 1024;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_2048:
+				blocksize = 2048;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_4096:
+				blocksize = 4096;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_520:
+				blocksize = 520;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_4104:
+				blocksize = 4104;
+				break;
+			default:
+				efc_log_test(efct,
+					      "Invalid hw_dif blocksize %d\n",
+					hw_dif->blk_size);
+				return -1;
+			}
+			for (i = 0; i < sgl_count; i++) {
+				if ((sgl[i].len % blocksize) != 0) {
+					efc_log_test(efct,
+						      "sgl[%d] len of %ld is not multiple of blocksize\n",
+					i, sgl[i].len);
+					return -1;
+				}
+			}
+		}
+
+		for (i = 0; i < sgl_count; i++) {
+			/* If DIF is enabled, and DIF is separate,
+			 * then append a SEED then DIF SGE
+			 */
+			if (hw_dif->dif_separate) {
+				rc = efct_hw_io_add_seed_sge(hw, hio,
+							     hw_dif);
+				if (rc)
+					return rc;
+				rc = efct_hw_io_add_dif_sge(hw, hio,
+							    sgl[i].dif_addr);
+				if (rc)
+					return rc;
+				/* Update the ref_tag for next DIF seed SGE*/
+				blockcount = sgl[i].len / blocksize;
+				if (hw_dif->dif_oper ==
+					EFCT_HW_DIF_OPER_INSERT)
+					hw_dif->ref_tag_repl += blockcount;
+				else
+					hw_dif->ref_tag_cmp += blockcount;
+			}
+
+			/* Add data SGE */
+			rc = efct_hw_io_add_sge(hw, hio,
+						sgl[i].addr, sgl[i].len);
+			if (rc) {
+				efc_log_err(efct,
+					     "add sge failed cnt=%d rc=%d\n",
+					     sgl_count, rc);
+				return rc;
+			}
+		}
+	} else {
+		for (i = 0; i < sgl_count; i++) {
+			/* Add data SGE */
+			rc = efct_hw_io_add_sge(hw, hio,
+						sgl[i].addr, sgl[i].len);
+			if (rc) {
+				efc_log_err(efct,
+					     "add sge failed cnt=%d rc=%d\n",
+					     sgl_count, rc);
+				return rc;
+			}
+		}
+	}
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Convert SCSI API T10 DIF information into the FC HW format.
+ *
+ * @param efct Pointer to the efct structure for logging.
+ * @param scsi_dif_info Pointer to the SCSI API T10 DIF fields.
+ * @param hw_dif_info Pointer to the FC HW API T10 DIF fields.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+static int
+efct_scsi_convert_dif_info(struct efct_s *efct,
+			   struct efct_scsi_dif_info_s *scsi_dif_info,
+			  struct efct_hw_dif_info_s *hw_dif_info)
+{
+	u32 dif_seed;
+
+	memset(hw_dif_info, 0,
+	       sizeof(struct efct_hw_dif_info_s));
+
+	if (!scsi_dif_info) {
+		hw_dif_info->dif_oper = EFCT_HW_DIF_OPER_DISABLED;
+		hw_dif_info->blk_size =  EFCT_HW_DIF_BK_SIZE_NA;
+		return 0;
+	}
+
+	/* Convert the DIF operation */
+	switch (scsi_dif_info->dif_oper) {
+	case EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CRC:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INNODIFOUTCRC;
+		hw_dif_info->dif = SLI4_DIF_INSERT;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CRC_OUT_NODIF:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCRCOUTNODIF;
+		hw_dif_info->dif = SLI4_DIF_STRIP;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CHKSUM:
+		hw_dif_info->dif_oper =
+				EFCT_HW_SGE_DIFOP_INNODIFOUTCHKSUM;
+		hw_dif_info->dif = SLI4_DIF_INSERT;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_NODIF:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCHKSUMOUTNODIF;
+		hw_dif_info->dif = SLI4_DIF_STRIP;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CRC:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCRCOUTCRC;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CHKSUM:
+		hw_dif_info->dif_oper =
+			EFCT_HW_SGE_DIFOP_INCHKSUMOUTCHKSUM;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CHKSUM:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCRCOUTCHKSUM;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CRC:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCHKSUMOUTCRC;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_RAW_OUT_RAW:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INRAWOUTRAW;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	default:
+		efc_log_test(efct, "unhandled SCSI DIF operation %d\n",
+			      scsi_dif_info->dif_oper);
+		return -1;
+	}
+
+	switch (scsi_dif_info->blk_size) {
+	case EFCT_SCSI_DIF_BK_SIZE_512:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_512;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_1024:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_1024;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_2048:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_2048;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_4096:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_4096;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_520:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_520;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_4104:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_4104;
+		break;
+	default:
+		efc_log_test(efct, "unhandled SCSI DIF block size %d\n",
+			      scsi_dif_info->blk_size);
+		return -1;
+	}
+
+	/* If the operation is an INSERT the tags provided are the
+	 * ones that should be inserted, otherwise they're the ones
+	 * to be checked against.
+	 */
+	if (hw_dif_info->dif == SLI4_DIF_INSERT) {
+		hw_dif_info->ref_tag_repl = scsi_dif_info->ref_tag;
+		hw_dif_info->app_tag_repl = scsi_dif_info->app_tag;
+	} else {
+		hw_dif_info->ref_tag_cmp = scsi_dif_info->ref_tag;
+		hw_dif_info->app_tag_cmp = scsi_dif_info->app_tag;
+	}
+
+	hw_dif_info->check_ref_tag = scsi_dif_info->check_ref_tag;
+	hw_dif_info->check_app_tag = scsi_dif_info->check_app_tag;
+	hw_dif_info->check_guard = scsi_dif_info->check_guard;
+	hw_dif_info->auto_incr_ref_tag = true;
+	hw_dif_info->dif_separate = scsi_dif_info->dif_separate;
+	hw_dif_info->disable_app_ffff = scsi_dif_info->disable_app_ffff;
+	hw_dif_info->disable_app_ref_ffff =
+			scsi_dif_info->disable_app_ref_ffff;
+
+	efct_hw_get(&efct->hw, EFCT_HW_DIF_SEED, &dif_seed);
+	hw_dif_info->dif_seed = dif_seed;
+
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief This function logs the SGLs for an IO.
+ *
+ * @param io Pointer to the IO context.
+ */
+static void efc_log_sgl(struct efct_io_s *io)
+{
+	struct efct_hw_io_s *hio = io->hio;
+	struct sli4_sge_s *data = NULL;
+	u32 *dword = NULL;
+	u32 i;
+	u32 n_sge;
+
+	scsi_io_trace(io, "def_sgl at 0x%x 0x%08x\n",
+		      upper_32_bits(hio->def_sgl.phys),
+		      lower_32_bits(hio->def_sgl.phys));
+	n_sge = (hio->sgl == &hio->def_sgl ?
+			hio->n_sge : hio->def_sgl_count);
+	for (i = 0, data = hio->def_sgl.virt; i < n_sge; i++, data++) {
+		dword = (u32 *)data;
+
+		scsi_io_trace(io, "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n",
+			      i, dword[0], dword[1], dword[2], dword[3]);
+
+		if (dword[2] & (1U << 31))
+			break;
+	}
+
+	if (hio->ovfl_sgl &&
+	    hio->sgl == hio->ovfl_sgl) {
+		scsi_io_trace(io, "Overflow at 0x%x 0x%08x\n",
+			      upper_32_bits(hio->ovfl_sgl->phys),
+			      lower_32_bits(hio->ovfl_sgl->phys));
+		for (i = 0, data = hio->ovfl_sgl->virt; i < hio->n_sge;
+			i++, data++) {
+			dword = (u32 *)data;
+
+			scsi_io_trace(io,
+				      "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n",
+				i, dword[0], dword[1], dword[2], dword[3]);
+			if (dword[2] & (1U << 31))
+				break;
+		}
+	}
+}
+
+/**
+ * @brief Check pending error asynchronous callback function.
+ *
+ * @par Description
+ * Invoke the HW callback function for a given IO. This
+ * function is called from the NOP mailbox completion context.
+ *
+ * @param hw Pointer to HW object.
+ * @param status Completion status.
+ * @param mqe Mailbox completion queue entry.
+ * @param arg General purpose argument.
+ *
+ * @return Returns 0.
+ */
+static int
+efct_scsi_check_pending_async_cb(struct efct_hw_s *hw, int status,
+				 u8 *mqe, void *arg)
+{
+	struct efct_io_s *io = arg;
+
+	if (io) {
+		if (io->hw_cb) {
+			efct_hw_done_t cb = io->hw_cb;
+
+			io->hw_cb = NULL;
+			(cb)(io->hio, NULL, 0,
+			 SLI4_FC_WCQE_STATUS_DISPATCH_ERROR, 0, io);
+		}
+	}
+	return 0;
+}
+
+/**
+ * @brief Check for pending IOs to dispatch.
+ *
+ * @par Description
+ * If there are IOs on the pending list, and a HW IO is available, then
+ * dispatch the IOs.
+ *
+ * @param efct Pointer to the EFCT structure.
+ *
+ * @return None.
+ */
+
+void
+efct_scsi_check_pending(struct efct_s *efct)
+{
+	struct efct_xport_s *xport = efct->xport;
+	struct efct_io_s *io = NULL;
+	struct efct_hw_io_s *hio;
+	int status;
+	int count = 0;
+	int dispatch;
+	unsigned long flags = 0;
+
+	/* Guard against recursion */
+	if (atomic_add_return(1, &xport->io_pending_recursing) > 1) {
+		/* This function is already running.  Decrement and return. */
+		atomic_sub_return(1, &xport->io_pending_recursing);
+		return;
+	}
+
+	do {
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		status = 0;
+		hio = NULL;
+		if (!list_empty(&xport->io_pending_list)) {
+			io = list_first_entry(&xport->io_pending_list,
+					      struct efct_io_s,
+					      io_pending_link);
+		}
+		if (io) {
+			list_del(&io->io_pending_link);
+			if (io->io_type == EFCT_IO_TYPE_ABORT) {
+				hio = NULL;
+			} else {
+				hio = efct_hw_io_alloc(&efct->hw);
+				if (!hio) {
+					/*
+					 * No HW IO available. Put the IO
+					 * back on the front of the pending
+					 * list
+					 */
+					list_add(&io->io_pending_link,
+						 &xport->io_pending_list);
+					io = NULL;
+				} else {
+					hio->eq = io->hw_priv;
+				}
+			}
+		}
+		/* Must drop the lock before dispatching the IO */
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		if (io) {
+			count++;
+
+			/*
+			 * We pulled an IO off the pending list,
+			 * and either got an HW IO or don't need one
+			 */
+			atomic_sub_return(1, &xport->io_pending_count);
+			if (!hio)
+				status = efct_scsi_io_dispatch_no_hw_io(io);
+			else
+				status = efct_scsi_io_dispatch_hw_io(io, hio);
+			if (status) {
+				/*
+				 * Invoke the HW callback, but do so in the
+				 * separate execution context,provided by the
+				 * NOP mailbox completion processing context
+				 * by using efct_hw_async_call()
+				 */
+				if (efct_hw_async_call(&efct->hw,
+					       efct_scsi_check_pending_async_cb,
+					io)) {
+					efc_log_test(efct,
+						      "call hw async failed\n");
+				}
+			}
+		}
+	} while (io);
+
+	/*
+	 * If nothing was removed from the list,
+	 * we might be in a case where we need to abort an
+	 * active IO and the abort is on the pending list.
+	 * Look for an abort we can dispatch.
+	 */
+	if (count == 0) {
+		dispatch = 0;
+
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		list_for_each_entry(io, &xport->io_pending_list,
+				    io_pending_link) {
+			if (io->io_type == EFCT_IO_TYPE_ABORT) {
+				if (io->io_to_abort->hio) {
+					/* This IO has a HW IO, so it is
+					 * active.  Dispatch the abort.
+					 */
+					dispatch = 1;
+				} else {
+					/* Leave this abort on the pending
+					 * list and keep looking
+					 */
+					dispatch = 0;
+				}
+			}
+			if (dispatch) {
+				list_del(&io->io_pending_link);
+				atomic_sub_return(1, &xport->io_pending_count);
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		if (dispatch) {
+			status = efct_scsi_io_dispatch_no_hw_io(io);
+			if (status) {
+				if (efct_hw_async_call(&efct->hw,
+					       efct_scsi_check_pending_async_cb,
+					io)) {
+					efc_log_test(efct,
+						      "call to hw async failed\n");
+				}
+			}
+		}
+	}
+
+	atomic_sub_return(1, &xport->io_pending_recursing);
+}
+
+/**
+ * @brief Attempt to dispatch a non-abort IO
+ *
+ * @par Description
+ * An IO is dispatched:
+ * - if the pending list is not empty, add IO to pending list
+ *   and call a function to process the pending list.
+ * - if pending list is empty, try to allocate a HW IO. If none
+ *   is available, place this IO at the tail of the pending IO
+ *   list.
+ * - if HW IO is available, attach this IO to the HW IO and
+ *   submit it.
+ *
+ * @param io Pointer to IO structure.
+ * @param cb Callback function.
+ *
+ * @return Returns 0 on success, a negative error code value on failure.
+ */
+
+int
+efct_scsi_io_dispatch(struct efct_io_s *io, void *cb)
+{
+	struct efct_hw_io_s *hio;
+	struct efct_s *efct = io->efct;
+	struct efct_xport_s *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * If this IO already has a HW IO, then this is not the first
+	 * phase of the IO. Send it to the HW.
+	 */
+	if (io->hio)
+		return efct_scsi_io_dispatch_hw_io(io, io->hio);
+
+	/*
+	 * We don't already have a HW IO associated with the IO. First check
+	 * the pending list. If not empty, add IO to the tail and process the
+	 * pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+		if (!list_empty(&xport->io_pending_list)) {
+			/*
+			 * If this is a low latency request, put it at
+			 * the front of the IO pending queue; otherwise
+			 * put it at the end of the queue.
+			 */
+			if (io->low_latency) {
+				INIT_LIST_HEAD(&io->io_pending_link);
+				list_add(&io->io_pending_link,
+					 &xport->io_pending_list);
+			} else {
+				INIT_LIST_HEAD(&io->io_pending_link);
+				list_add_tail(&io->io_pending_link,
+					      &xport->io_pending_list);
+			}
+			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+			atomic_add_return(1, &xport->io_pending_count);
+			atomic_add_return(1, &xport->io_total_pending);
+
+			/* process pending list */
+			efct_scsi_check_pending(efct);
+			return 0;
+		}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/*
+	 * We don't have a HW IO associated with the IO and there's nothing
+	 * on the pending list. Attempt to allocate a HW IO and dispatch it.
+	 */
+	hio = efct_hw_io_alloc(&io->efct->hw);
+	if (!hio) {
+		/* Couldn't get a HW IO. Save this IO on the pending list */
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		INIT_LIST_HEAD(&io->io_pending_link);
+		list_add_tail(&io->io_pending_link, &xport->io_pending_list);
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		atomic_add_return(1, &xport->io_total_pending);
+		atomic_add_return(1, &xport->io_pending_count);
+		return 0;
+	}
+
+	/* We successfully allocated a HW IO; dispatch to HW */
+	return efct_scsi_io_dispatch_hw_io(io, hio);
+}
+
+/**
+ * @brief Attempt to dispatch an Abort IO.
+ *
+ * @par Description
+ * An Abort IO is dispatched:
+ * - if the pending list is not empty, add IO to pending list
+ *   and call a function to process the pending list.
+ * - if pending list is empty, send abort to the HW.
+ *
+ * @param io Pointer to IO structure.
+ * @param cb Callback function.
+ *
+ * @return Returns 0 on success, a negative error code value on failure.
+ */
+
+int
+efct_scsi_io_dispatch_abort(struct efct_io_s *io, void *cb)
+{
+	struct efct_s *efct = io->efct;
+	struct efct_xport_s *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * For aborts, we don't need a HW IO, but we still want
+	 * to pass through the pending list to preserve ordering.
+	 * Thus, if the pending list is not empty, add this abort
+	 * to the pending list and process the pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+		if (!list_empty(&xport->io_pending_list)) {
+			INIT_LIST_HEAD(&io->io_pending_link);
+			list_add_tail(&io->io_pending_link,
+				      &xport->io_pending_list);
+			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+			atomic_add_return(1, &xport->io_pending_count);
+			atomic_add_return(1, &xport->io_total_pending);
+
+			/* process pending list */
+			efct_scsi_check_pending(efct);
+			return 0;
+		}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/* nothing on pending list, dispatch abort */
+	return efct_scsi_io_dispatch_no_hw_io(io);
+}
+
+/**
+ * @brief Dispatch IO
+ *
+ * @par Description
+ * An IO and its associated HW IO are dispatched to the HW.
+ *
+ * @param io Pointer to IO structure.
+ * @param hio Pointer to HW IO structure from which IO will be
+ * dispatched.
+ *
+ * @return Returns 0 on success, a negative error code value on failure.
+ */
+
+static int
+efct_scsi_io_dispatch_hw_io(struct efct_io_s *io, struct efct_hw_io_s *hio)
+{
+	int rc = 0;
+	struct efct_s *efct = io->efct;
+
+	/* Got a HW IO;
+	 * update ini/tgt_task_tag with HW IO info and dispatch
+	 */
+	io->hio = hio;
+	if (io->cmd_tgt)
+		io->tgt_task_tag = hio->indicator;
+	else if (io->cmd_ini)
+		io->init_task_tag = hio->indicator;
+	io->hw_tag = hio->reqtag;
+
+	hio->eq = io->hw_priv;
+
+	/* Copy WQ steering */
+	switch (io->wq_steering) {
+	case EFCT_SCSI_WQ_STEERING_CLASS >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CLASS;
+		break;
+	case EFCT_SCSI_WQ_STEERING_REQUEST >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_REQUEST;
+		break;
+	case EFCT_SCSI_WQ_STEERING_CPU >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CPU;
+		break;
+	}
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_IO: {
+		u32 max_sgl;
+		u32 total_count;
+		u32 host_allocated;
+
+		efct_hw_get(&efct->hw, EFCT_HW_N_SGL, &max_sgl);
+		efct_hw_get(&efct->hw, EFCT_HW_SGL_CHAINING_HOST_ALLOCATED,
+			    &host_allocated);
+
+		/*
+		 * If the requested SGL is larger than the default size,
+		 * then we can allocate an overflow SGL.
+		 */
+		total_count = efct_scsi_count_sgls(&io->hw_dif,
+						   io->sgl, io->sgl_count);
+
+		/*
+		 * Lancer requires us to allocate the chained memory area
+		 */
+		if (host_allocated && total_count > max_sgl) {
+			/* Compute count needed, the number
+			 * extra plus 1 for the link sge
+			 */
+			u32 count = total_count - max_sgl + 1;
+
+			io->ovfl_sgl.size = count * sizeof(struct sli4_sge_s);
+			io->ovfl_sgl.virt =
+				dma_alloc_coherent(&efct->pcidev->dev,
+						   io->ovfl_sgl.size,
+						&io->ovfl_sgl.phys, GFP_DMA);
+			if (!io->ovfl_sgl.virt) {
+				efc_log_err(efct,
+					     "dma alloc overflow sgl failed\n");
+				break;
+			}
+			rc = efct_hw_io_register_sgl(&efct->hw,
+						     io->hio, &io->ovfl_sgl,
+						     count);
+			if (rc) {
+				efct_scsi_io_free_ovfl(io);
+				efc_log_err(efct,
+					     "efct_hw_io_register_sgl() failed\n");
+				break;
+			}
+			/* EVT: update chained_io_count */
+			io->node->chained_io_count++;
+		}
+
+		rc = efct_scsi_build_sgls(&efct->hw, io->hio, &io->hw_dif,
+					  io->sgl, io->sgl_count, io->hio_type);
+		if (rc) {
+			efct_scsi_io_free_ovfl(io);
+			break;
+		}
+
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(efct))
+			efc_log_sgl(io);
+
+		if (io->app_id)
+			io->iparam.fcp_tgt.app_id = io->app_id;
+
+		rc = efct_hw_io_send(&io->efct->hw, io->hio_type, io->hio,
+				     io->wire_len, &io->iparam,
+				     &io->node->rnode, io->hw_cb, io);
+		break;
+	}
+	case EFCT_IO_TYPE_ELS:
+	case EFCT_IO_TYPE_CT: {
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       &io->els_req, io->wire_len,
+			&io->els_rsp, &io->node->rnode, &io->iparam,
+			io->hw_cb, io);
+		break;
+	}
+	case EFCT_IO_TYPE_CT_RESP: {
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       &io->els_rsp, io->wire_len,
+			NULL, &io->node->rnode, &io->iparam,
+			io->hw_cb, io);
+		break;
+	}
+	case EFCT_IO_TYPE_BLS_RESP: {
+		/* no need to update tgt_task_tag for BLS response since
+		 * the RX_ID will be specified by the payload, not the XRI
+		 */
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       NULL, 0, NULL, &io->node->rnode,
+			&io->iparam, io->hw_cb, io);
+		break;
+	}
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = -1;
+		break;
+	}
+	return rc;
+}
+
+/**
+ * @brief Dispatch an IO that does not require a HW IO
+ *
+ * @par Description
+ * An IO that does not require a HW IO (currently only an abort) is
+ * dispatched directly.
+ *
+ * @param io Pointer to IO structure.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+static int
+efct_scsi_io_dispatch_no_hw_io(struct efct_io_s *io)
+{
+	int rc;
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_ABORT: {
+		struct efct_hw_io_s *hio_to_abort = NULL;
+
+		hio_to_abort = io->io_to_abort->hio;
+
+		if (!hio_to_abort) {
+			/*
+			 * If "IO to abort" does not have an
+			 * associated HW IO, immediately make callback with
+			 * success. The command must have been sent to
+			 * the backend, but the data phase has not yet
+			 * started, so we don't have a HW IO.
+			 *
+			 * Note: since the backend shims should be
+			 * taking a reference on io_to_abort, it should not
+			 * be possible for it to have been completed and
+			 * freed by the backend before the abort got here.
+			 */
+			scsi_io_printf(io, "IO: not active\n");
+			((efct_hw_done_t)io->hw_cb)(io->hio, NULL, 0,
+					SLI4_FC_WCQE_STATUS_SUCCESS, 0, io);
+			rc = 0;
+		} else {
+			/* HW IO is valid, abort it */
+			scsi_io_printf(io, "aborting\n");
+			rc = efct_hw_io_abort(&io->efct->hw, hio_to_abort,
+					      io->send_abts, io->hw_cb, io);
+			if (rc) {
+				int status = SLI4_FC_WCQE_STATUS_SUCCESS;
+
+				if (rc != EFCT_HW_RTN_IO_NOT_ACTIVE &&
+				    rc != EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+					status = -1;
+					scsi_io_printf(io,
+						       "Failed to abort IO: status=%d\n",
+						rc);
+				}
+				((efct_hw_done_t)io->hw_cb)(io->hio,
+						NULL, 0, status, 0, io);
+				rc = 0;
+			}
+		}
+
+		break;
+	}
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = -1;
+		break;
+	}
+	return rc;
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Send read/write data.
+ *
+ * @par Description
+ * This call is made by a target-server to initiate a SCSI read
+ * or write data phase, transferring data between the target and
+ * the remote initiator. The payload is specified by the scatter-gather
+ * list @c sgl of length @c sgl_count.
+ * The @c xwire_len argument specifies the payload length on the wire
+ * (independent of the cumulative scatter-gather list length).
+ * @n @n
+ * The @c flags argument has one bit, EFCT_SCSI_LAST_DATAPHASE, which is
+ * a hint to the base driver that it may use auto SCSI response features
+ * if the hardware supports it.
+ * @n @n
+ * Upon completion, the callback function @b cb is called with flags
+ * indicating that the IO has completed and another data phase or
+ * response may be sent (EFCT_SCSI_IO_CMPL); that the IO has completed
+ * and no further response needs to be sent (EFCT_SCSI_IO_CMPL_RSP_SENT);
+ * or that the IO was aborted (EFCT_SCSI_IO_ABORTED).
+ *
+ * @param io Pointer to the IO context.
+ * @param flags Flags controlling the sending of data.
+ * @param dif_info Pointer to T10 DIF fields, or NULL if no DIF.
+ * @param sgl Pointer to the payload scatter-gather list.
+ * @param sgl_count Count of the scatter-gather list elements.
+ * @param xwire_len Length of the payload on wire, in bytes.
+ * @param type HW IO type.
+ * @param enable_ar Enable auto-response if true.
+ * @param cb Completion callback.
+ * @param arg Application-supplied callback data.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+static inline int
+efct_scsi_xfer_data(struct efct_io_s *io, u32 flags,
+		    struct efct_scsi_dif_info_s *dif_info,
+	struct efct_scsi_sgl_s *sgl, u32 sgl_count, u64 xwire_len,
+	enum efct_hw_io_type_e type, int enable_ar,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc;
+	struct efct_s *efct;
+	size_t residual = 0;
+
+	if (dif_info &&
+	    dif_info->dif_oper == EFCT_SCSI_DIF_OPER_DISABLED)
+		dif_info = NULL;
+
+	io->sgl_count = sgl_count;
+
+	efct = io->efct;
+
+	scsi_io_trace(io, "%s wire_len %llu\n",
+		      (type == EFCT_HW_IO_TARGET_READ) ? "send" : "recv",
+		      xwire_len);
+
+	io->hio_type = type;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	rc = efct_scsi_convert_dif_info(efct, dif_info, &io->hw_dif);
+	if (rc)
+		return rc;
+
+	/* If DIF is used, then save lba for error recovery */
+	if (dif_info)
+		io->scsi_dif_info = *dif_info;
+
+	residual = io->exp_xfer_len - io->transferred;
+	io->wire_len = (xwire_len < residual) ? xwire_len : residual;
+	residual = (xwire_len - io->wire_len);
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = io->transferred;
+	io->iparam.fcp_tgt.dif_oper = io->hw_dif.dif;
+	io->iparam.fcp_tgt.blk_size = io->hw_dif.blk_size;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* if this is the last data phase and there is no residual, enable
+	 * auto-good-response
+	 */
+	if (enable_ar && (flags & EFCT_SCSI_LAST_DATAPHASE) &&
+	    residual == 0 &&
+		((io->transferred + io->wire_len) == io->exp_xfer_len) &&
+		(!(flags & EFCT_SCSI_NO_AUTO_RESPONSE))) {
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+		io->auto_resp = true;
+	} else {
+		io->auto_resp = false;
+	}
+
+	/* save this transfer length */
+	io->xfer_req = io->wire_len;
+
+	/* Adjust the transferred count to account for overrun
+	 * when the residual is calculated in efct_scsi_send_resp
+	 */
+	io->transferred += residual;
+
+	/* Adjust the SGL size if there is overrun */
+
+	if (residual) {
+		struct efct_scsi_sgl_s  *sgl_ptr = &io->sgl[sgl_count - 1];
+
+		while (residual) {
+			size_t len = sgl_ptr->len;
+
+			if (len > residual) {
+				sgl_ptr->len = len - residual;
+				residual = 0;
+			} else {
+				sgl_ptr->len = 0;
+				residual -= len;
+				io->sgl_count--;
+			}
+			sgl_ptr--;
+		}
+	}
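+	/*
+	 * Worked example (illustrative numbers): if the initiator expects
+	 * 4096 more bytes (exp_xfer_len - transferred) but xwire_len is
+	 * 4608, wire_len is clamped to 4096 and residual is 512. With a
+	 * last SGL entry of 256 bytes and a preceding entry of 1024 bytes,
+	 * the last entry is dropped (sgl_count--) and the preceding entry
+	 * is shortened to 768 bytes.
+	 */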
+
+	/* Set latency and WQ steering */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (efct->xport) {
+		struct efct_xport_s *xport = efct->xport;
+
+		if (type == EFCT_HW_IO_TARGET_READ) {
+			xport->fcp_stats.input_requests++;
+			xport->fcp_stats.input_bytes += xwire_len;
+		} else if (type == EFCT_HW_IO_TARGET_WRITE) {
+			xport->fcp_stats.output_requests++;
+			xport->fcp_stats.output_bytes += xwire_len;
+		}
+	}
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
+
+int
+efct_scsi_send_rd_data(struct efct_io_s *io, u32 flags,
+		       struct efct_scsi_dif_info_s *dif_info,
+	struct efct_scsi_sgl_s *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, dif_info, sgl, sgl_count,
+				 len, EFCT_HW_IO_TARGET_READ,
+				 enable_tsend_auto_resp(io->efct), cb, arg);
+}
+
+int
+efct_scsi_recv_wr_data(struct efct_io_s *io, u32 flags,
+		       struct efct_scsi_dif_info_s *dif_info,
+	struct efct_scsi_sgl_s *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, dif_info, sgl, sgl_count, len,
+				 EFCT_HW_IO_TARGET_WRITE,
+				 enable_treceive_auto_resp(io->efct), cb, arg);
+}
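+
+/*
+ * Minimal usage sketch (illustrative only; xfer_done, xfer_bytes and arg
+ * are hypothetical): a target-server that has filled io->sgl[] can start
+ * the data-in phase of a READ with
+ *
+ *	rc = efct_scsi_send_rd_data(io, EFCT_SCSI_LAST_DATAPHASE, NULL,
+ *				    io->sgl, io->sgl_count, xfer_bytes,
+ *				    xfer_done, arg);
+ *
+ * where xfer_done() is an efct_scsi_io_cb_t invoked when the data phase
+ * completes or is aborted.
+ */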
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Free overflow SGL.
+ *
+ * @par Description
+ * Free the overflow SGL if it is present.
+ *
+ * @param io Pointer to IO object.
+ *
+ * @return None.
+ */
+static void
+efct_scsi_io_free_ovfl(struct efct_io_s *io)
+{
+	if (io->ovfl_sgl.size) {
+		dma_free_coherent(&io->efct->pcidev->dev,
+				  io->ovfl_sgl.size, io->ovfl_sgl.virt,
+				  io->ovfl_sgl.phys);
+		memset(&io->ovfl_sgl, 0, sizeof(struct efc_dma_s));
+	}
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Send response data.
+ *
+ * @par Description
+ * This function is used by a target-server to send the SCSI response
+ * data to a remote initiator node. The target-server populates the
+ * @c struct efct_scsi_cmd_resp_s argument with SCSI status, status qualifier,
+ * sense data, and response data, as needed.
+ * @n @n
+ * Upon completion, the callback function @c cb is invoked. The
+ * target-server will generally clean up its IO context resources and
+ * call efct_scsi_io_complete().
+ *
+ * @param io Pointer to the IO context.
+ * @param flags Flags to control sending of the SCSI response.
+ * @param rsp Pointer to the response data populated by the caller.
+ * @param cb Completion callback.
+ * @param arg Application-specified completion callback argument.
+
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_scsi_send_resp(struct efct_io_s *io, u32 flags,
+		    struct efct_scsi_cmd_resp_s *rsp,
+		   efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct_s *efct;
+	int residual;
+	bool auto_resp = true;		/* Always try auto resp */
+	u8 scsi_status = 0;
+	u16 scsi_status_qualifier = 0;
+	u8 *sense_data = NULL;
+	u32 sense_data_length = 0;
+
+	efct = io->efct;
+
+	efct_scsi_convert_dif_info(efct, NULL, &io->hw_dif);
+
+	if (rsp) {
+		scsi_status = rsp->scsi_status;
+		scsi_status_qualifier = rsp->scsi_status_qualifier;
+		sense_data = rsp->sense_data;
+		sense_data_length = rsp->sense_data_length;
+		residual = rsp->residual;
+	} else {
+		residual = io->exp_xfer_len - io->transferred;
+	}
+
+	io->wire_len = 0;
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* Set low latency queueing request */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (scsi_status != 0 || residual || sense_data_length) {
+		struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
+		u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
+
+		if (!fcprsp) {
+			efc_log_err(efct, "NULL response buffer\n");
+			return -1;
+		}
+
+		auto_resp = false;
+
+		memset(fcprsp, 0, sizeof(*fcprsp));
+
+		io->wire_len += sizeof(*fcprsp);
+
+		fcprsp->resp.fr_status = scsi_status;
+		fcprsp->resp.fr_retry_delay =
+			cpu_to_be16(scsi_status_qualifier);
+
+		/* set residual status if necessary */
+		if (residual != 0) {
+			/* FCP: if data transferred is less than the
+			 * amount expected, then this is an underflow.
+			 * If data transferred would have been greater
+			 * than the amount expected this is an overflow
+			 */
+			if (residual > 0) {
+				fcprsp->resp.fr_flags |= FCP_RESID_UNDER;
+				fcprsp->ext.fr_resid =	cpu_to_be32(residual);
+			} else {
+				fcprsp->resp.fr_flags |= FCP_RESID_OVER;
+				fcprsp->ext.fr_resid = cpu_to_be32(-residual);
+			}
+		}
+
+		if (EFCT_SCSI_SNS_BUF_VALID(sense_data) && sense_data_length) {
+			if (sense_data_length > SCSI_SENSE_BUFFERSIZE) {
+				efc_log_err(efct, "Sense exceeds max size.\n");
+				return -1;
+			}
+
+			fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
+			memcpy(sns_data, sense_data, sense_data_length);
+			fcprsp->ext.fr_sns_len = cpu_to_be32(sense_data_length);
+			io->wire_len += sense_data_length;
+		}
+
+		io->sgl[0].addr = io->rspbuf.phys;
+		io->sgl[0].dif_addr = 0;
+		io->sgl[0].len = io->wire_len;
+		io->sgl_count = 1;
+	}
+
+	if (auto_resp)
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
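+
+/*
+ * Minimal usage sketch (illustrative only; sense_buf, sense_len, done_cb
+ * and arg are hypothetical): a target-server reporting CHECK CONDITION
+ * could build the response as
+ *
+ *	struct efct_scsi_cmd_resp_s rsp = { 0 };
+ *
+ *	rsp.scsi_status = SAM_STAT_CHECK_CONDITION;
+ *	rsp.sense_data = sense_buf;
+ *	rsp.sense_data_length = sense_len;
+ *	rsp.residual = io->exp_xfer_len - io->transferred;
+ *	rc = efct_scsi_send_resp(io, 0, &rsp, done_cb, arg);
+ *
+ * A GOOD status with no residual or sense data lets the routine use the
+ * auto-good-response path instead of building an explicit FCP response.
+ */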
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Send TMF response data.
+ *
+ * @par Description
+ * This function is used by a target-server to send SCSI TMF response
+ * data to a remote initiator node.
+ * Upon completion, the callback function @c cb is invoked.
+ * The target-server will generally
+ * clean up its IO context resources and call efct_scsi_io_complete().
+ *
+ * @param io Pointer to the IO context.
+ * @param rspcode TMF response code.
+ * @param addl_rsp_info Additional TMF response information
+ *		(may be NULL for zero data).
+ * @param cb Completion callback.
+ * @param arg Application-specified completion callback argument.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_scsi_send_tmf_resp(struct efct_io_s *io,
+			enum efct_scsi_tmf_resp_e rspcode,
+			u8 addl_rsp_info[3],
+			efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc = -1;
+	struct efct_s *efct = NULL;
+	struct fcp_resp_with_ext *fcprsp = NULL;
+	struct fcp_resp_rsp_info *rspinfo = NULL;
+	u8 fcp_rspcode;
+
+	efct = io->efct;
+
+	io->wire_len = 0;
+	efct_scsi_convert_dif_info(efct, NULL, &io->hw_dif);
+
+	switch (rspcode) {
+	case EFCT_SCSI_TMF_FUNCTION_COMPLETE:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_SUCCEEDED:
+	case EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_REJECTED:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	case EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER:
+		fcp_rspcode = FCP_TMF_INVALID_LUN;
+		break;
+	case EFCT_SCSI_TMF_SERVICE_DELIVERY:
+		fcp_rspcode = FCP_TMF_FAILED;
+		break;
+	default:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	}
+
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	if (io->tmf_cmd == EFCT_SCSI_TMF_ABORT_TASK) {
+		rc = efct_target_send_bls_resp(io, cb, arg);
+		return rc;
+	}
+
+	/* populate the FCP TMF response */
+	fcprsp = io->rspbuf.virt;
+	memset(fcprsp, 0, sizeof(*fcprsp));
+
+	fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
+
+	rspinfo = io->rspbuf.virt + sizeof(*fcprsp);
+	if (addl_rsp_info) {
+		memcpy(rspinfo->_fr_resvd, addl_rsp_info,
+		       sizeof(rspinfo->_fr_resvd));
+	}
+	rspinfo->rsp_code = fcp_rspcode;
+
+	io->wire_len = sizeof(*fcprsp) + sizeof(*rspinfo);
+
+	fcprsp->ext.fr_rsp_len = cpu_to_be32(sizeof(*rspinfo));
+
+	io->sgl[0].addr = io->rspbuf.phys;
+	io->sgl[0].dif_addr = 0;
+	io->sgl[0].len = io->wire_len;
+	io->sgl_count = 1;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	rc = efct_scsi_io_dispatch(io, efct_target_io_cb);
+
+	return rc;
+}
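+
+/*
+ * Minimal usage sketch (illustrative only; tmf_done and arg are
+ * hypothetical): after processing a LUN reset, a target-server might
+ * respond with
+ *
+ *	rc = efct_scsi_send_tmf_resp(io, EFCT_SCSI_TMF_FUNCTION_COMPLETE,
+ *				     NULL, tmf_done, arg);
+ *
+ * For EFCT_SCSI_TMF_ABORT_TASK, the routine above sends a BLS BA_ACC
+ * (efct_target_send_bls_resp) rather than an FCP response payload.
+ */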
+
+/**
+ * @brief Process target abort callback.
+ *
+ * @par Description
+ * Handles completion of a HW IO abort request and invokes the
+ * target-server abort callback.
+ *
+ * @param hio HW IO context.
+ * @param rnode Remote node.
+ * @param length Length of response data.
+ * @param status Completion status.
+ * @param ext_status Extended completion status.
+ * @param app Application-specified callback data.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+static int
+efct_target_abort_cb(struct efct_hw_io_s *hio,
+		     struct efc_remote_node_s *rnode,
+		     u32 length, int status,
+		     u32 ext_status, void *app)
+{
+	struct efct_io_s *io = app;
+	struct efct_s *efct;
+	enum efct_scsi_io_status_e scsi_status;
+
+	efct = io->efct;
+
+	if (io->abort_cb) {
+		efct_scsi_io_cb_t abort_cb = io->abort_cb;
+		void *abort_cb_arg = io->abort_cb_arg;
+
+		io->abort_cb = NULL;
+		io->abort_cb_arg = NULL;
+
+		switch (status) {
+		case SLI4_FC_WCQE_STATUS_SUCCESS:
+			scsi_status = EFCT_SCSI_STATUS_GOOD;
+			break;
+		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+			switch (ext_status) {
+			case SLI4_FC_LOCAL_REJECT_NO_XRI:
+				scsi_status = EFCT_SCSI_STATUS_NO_IO;
+				break;
+			case SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS:
+				scsi_status =
+					EFCT_SCSI_STATUS_ABORT_IN_PROGRESS;
+				break;
+			default:
+				/* we have seen 0x15 (abort in progress) */
+				scsi_status = EFCT_SCSI_STATUS_ERROR;
+				break;
+			}
+			break;
+		case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+			scsi_status = EFCT_SCSI_STATUS_CHECK_RESPONSE;
+			break;
+		default:
+			scsi_status = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+		/* invoke callback */
+		abort_cb(io->io_to_abort, scsi_status, 0, abort_cb_arg);
+	}
+
+	/* done with IO to abort; drop the reference taken in
+	 * efct_scsi_tgt_abort_io()
+	 */
+	kref_put(&io->io_to_abort->ref, io->io_to_abort->release);
+
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+
+	efct_scsi_check_pending(efct);
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Abort a target IO.
+ *
+ * @par Description
+ * This routine is called from a SCSI target-server. It initiates an
+ * abort of a previously-issued target data phase or response request.
+ *
+ * @param io IO context.
+ * @param cb SCSI target server callback.
+ * @param arg SCSI target server supplied callback argument.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+int
+efct_scsi_tgt_abort_io(struct efct_io_s *io, efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct_s *efct;
+	struct efct_xport_s *xport;
+	int rc;
+	struct efct_io_s *abort_io = NULL;
+
+	efct = io->efct;
+	xport = efct->xport;
+
+	/* take a reference on IO being aborted */
+	if ((kref_get_unless_zero(&io->ref) == 0)) {
+		/* command no longer active */
+		scsi_io_printf(io, "command no longer active\n");
+		return -1;
+	}
+
+	/*
+	 * allocate a new IO to send the abort request. Use efct_io_alloc()
+	 * directly, as we need an IO object that will not fail allocation
+	 * due to allocations being disabled (in efct_scsi_io_alloc())
+	 */
+	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!abort_io) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		kref_put(&io->ref, io->release);
+		return -1;
+	}
+
+	/* Save the target server callback and argument */
+	/* set generic fields */
+	abort_io->cmd_tgt = true;
+	abort_io->node = io->node;
+
+	/* set type and abort-specific fields */
+	abort_io->io_type = EFCT_IO_TYPE_ABORT;
+	abort_io->display_name = "tgt_abort";
+	abort_io->io_to_abort = io;
+	abort_io->send_abts = false;
+	abort_io->abort_cb = cb;
+	abort_io->abort_cb_arg = arg;
+
+	/* now dispatch IO */
+	rc = efct_scsi_io_dispatch_abort(abort_io, efct_target_abort_cb);
+	if (rc)
+		kref_put(&io->ref, io->release);
+	return rc;
+}
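+
+/*
+ * Minimal usage sketch (illustrative only; abort_done and arg are
+ * hypothetical): to abort a previously dispatched data phase or response,
+ * a target-server can call
+ *
+ *	rc = efct_scsi_tgt_abort_io(io, abort_done, arg);
+ *
+ * abort_done() is later invoked with the original IO (io_to_abort) and a
+ * status such as EFCT_SCSI_STATUS_GOOD or
+ * EFCT_SCSI_STATUS_ABORT_IN_PROGRESS, as translated in
+ * efct_target_abort_cb() above.
+ */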
+
+/**
+ * @brief Process target BLS response callback.
+ *
+ * @par Description
+ * Handles completion of a BLS response and invokes the target-server
+ * BLS callback.
+ *
+ * @param hio HW IO context.
+ * @param rnode Remote node.
+ * @param length Length of response data.
+ * @param status Completion status.
+ * @param ext_status Extended completion status.
+ * @param app Application-specified callback data.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+static int
+efct_target_bls_resp_cb(struct efct_hw_io_s *hio,
+			struct efc_remote_node_s *rnode,
+	u32 length, int status, u32 ext_status, void *app)
+{
+	struct efct_io_s *io = app;
+	struct efct_s *efct;
+	enum efct_scsi_io_status_e bls_status;
+
+	efct = io->efct;
+
+	/* BLS isn't really a "SCSI" concept, but use SCSI status */
+	if (status) {
+		io_error_log(io, "s=%#x x=%#x\n", status, ext_status);
+		bls_status = EFCT_SCSI_STATUS_ERROR;
+	} else {
+		bls_status = EFCT_SCSI_STATUS_GOOD;
+	}
+
+	if (io->bls_cb) {
+		efct_scsi_io_cb_t bls_cb = io->bls_cb;
+		void *bls_cb_arg = io->bls_cb_arg;
+
+		io->bls_cb = NULL;
+		io->bls_cb_arg = NULL;
+
+		/* invoke callback */
+		bls_cb(io, bls_status, 0, bls_cb_arg);
+	}
+
+	efct_scsi_check_pending(efct);
+	return 0;
+}
+
+/**
+ * @brief Complete abort request.
+ *
+ * @par Description
+ * An abort request is completed by posting a BA_ACC for the IO that
+ * requested the abort.
+ *
+ * @param io Pointer to the IO context.
+ * @param cb Callback function to invoke upon completion.
+ * @param arg Application-specified completion callback argument.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+static int
+efct_target_send_bls_resp(struct efct_io_s *io,
+			  efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc;
+	struct fc_ba_acc *acc;
+
+	/* fill out IO structure with everything needed to send BA_ACC */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls.ox_id = io->init_task_tag;
+	io->iparam.bls.rx_id = io->abort_rx_id;
+
+	acc = (void *)io->iparam.bls.payload;
+
+	memset(io->iparam.bls.payload, 0,
+	       sizeof(io->iparam.bls.payload));
+	acc->ba_ox_id = io->iparam.bls.ox_id;
+	acc->ba_rx_id = io->iparam.bls.rx_id;
+	acc->ba_high_seq_cnt = U16_MAX;
+
+	/* generic io fields have already been populated */
+
+	/* set type and BLS-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "bls_rsp";
+	io->hio_type = EFCT_HW_BLS_ACC;
+	io->bls_cb = cb;
+	io->bls_cb_arg = arg;
+
+	/* dispatch IO */
+	rc = efct_scsi_io_dispatch(io, efct_target_bls_resp_cb);
+	return rc;
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Notify the base driver that the IO is complete.
+ *
+ * @par Description
+ * This function is called by a target-server to notify the base
+ * driver that an IO has completed, allowing for the base driver
+ * to free resources.
+ * @n
+ * @n @b Note: This function is not called by initiator-clients.
+ *
+ * @param io Pointer to IO context.
+ *
+ * @return None.
+ */
+void
+efct_scsi_io_complete(struct efct_io_s *io)
+{
+	if (io->io_free) {
+		efc_log_test(io->efct,
+			      "Got completion for non-busy io with tag 0x%x\n",
+		    io->tag);
+		return;
+	}
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	kref_put(&io->ref, io->release);
+}
+
+/**
+ * @ingroup scsi_api_base
+ * @brief Return SCSI API integer valued property.
+ *
+ * @par Description
+ * This function is called by a target-server or initiator-client to
+ * retrieve an integer valued property.
+ *
+ * @param efct Pointer to the efct.
+ * @param prop Property value to return.
+ *
+ * @return Returns the property value, or 0 if an invalid property was
+ * requested.
+ */
+u32
+efct_scsi_get_property(struct efct_s *efct, enum efct_scsi_property_e prop)
+{
+	struct efct_xport_s *xport = efct->xport;
+	u32	val;
+
+	switch (prop) {
+	case EFCT_SCSI_MAX_SGE:
+		if (efct_hw_get(&efct->hw, EFCT_HW_MAX_SGE, &val) == 0)
+			return val;
+		break;
+	case EFCT_SCSI_MAX_SGL:
+		if (efct_hw_get(&efct->hw, EFCT_HW_N_SGL, &val) == 0)
+			return val;
+		break;
+	case EFCT_SCSI_MAX_IOS:
+		return efct_io_pool_allocated(xport->io_pool);
+	case EFCT_SCSI_DIF_CAPABLE:
+		if (efct_hw_get(&efct->hw,
+				EFCT_HW_DIF_CAPABLE, &val) == 0) {
+			return val;
+		}
+		break;
+	case EFCT_SCSI_MAX_FIRST_BURST:
+		return 0;
+	case EFCT_SCSI_DIF_MULTI_SEPARATE:
+		if (efct_hw_get(&efct->hw,
+				EFCT_HW_DIF_MULTI_SEPARATE, &val) == 0) {
+			return val;
+		}
+		break;
+	case EFCT_SCSI_ENABLE_TASK_SET_FULL:
+		/* Return FALSE if we are send frame capable */
+		if (efct_hw_get(&efct->hw,
+				EFCT_HW_SEND_FRAME_CAPABLE, &val) == 0) {
+			return !val;
+		}
+		break;
+	default:
+		break;
+	}
+
+	efc_log_debug(efct, "invalid property request %d\n", prop);
+	return 0;
+}
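+
+/*
+ * Example (sketch): a target-server sizing its per-IO SGL usage could
+ * query
+ *
+ *	u32 max_sgl = efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGL);
+ *
+ * and treat a return value of 0 as "property not available".
+ */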
+
+/**
+ * @brief Update transferred count
+ *
+ * @par Description
+ * Updates io->transferred, as required when using first burst,
+ * when the amount of first burst data processed differs from the
+ * amount of first burst data received.
+ *
+ * @param io Pointer to the io object.
+ * @param transferred Number of bytes transferred out of first burst buffers.
+ *
+ * @return None.
+ */
+void
+efct_scsi_update_first_burst_transferred(struct efct_io_s *io,
+					 u32 transferred)
+{
+	io->transferred = transferred;
+}
diff --git a/drivers/scsi/elx/efct/efct_scsi.h b/drivers/scsi/elx/efct/efct_scsi.h
new file mode 100644
index 000000000000..f4d0d453c792
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.h
@@ -0,0 +1,401 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_SCSI_H__)
+#define __EFCT_SCSI_H__
+
+/* efct_scsi_rcv_cmd() efct_scsi_rcv_tmf() flags */
+#define EFCT_SCSI_CMD_DIR_IN		BIT(0)
+#define EFCT_SCSI_CMD_DIR_OUT		BIT(1)
+#define EFCT_SCSI_CMD_SIMPLE		BIT(2)
+#define EFCT_SCSI_CMD_HEAD_OF_QUEUE	BIT(3)
+#define EFCT_SCSI_CMD_ORDERED		BIT(4)
+#define EFCT_SCSI_CMD_UNTAGGED		BIT(5)
+#define EFCT_SCSI_CMD_ACA		BIT(6)
+#define EFCT_SCSI_FIRST_BURST_ERR	BIT(7)
+#define EFCT_SCSI_FIRST_BURST_ABORTED	BIT(8)
+
+/* efct_scsi_send_rd_data/recv_wr_data/send_resp flags */
+#define EFCT_SCSI_LAST_DATAPHASE		BIT(0)
+#define EFCT_SCSI_NO_AUTO_RESPONSE	BIT(1)
+#define EFCT_SCSI_LOW_LATENCY		BIT(2)
+
+#define EFCT_SCSI_SNS_BUF_VALID(sense)	((sense) && \
+			(0x70 == (((const u8 *)(sense))[0] & 0x70)))
+
+#define EFCT_SCSI_WQ_STEERING_SHIFT	(16)
+#define EFCT_SCSI_WQ_STEERING_MASK	(0xf << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CLASS	(0 << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_REQUEST	BIT(EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CPU	(2 << EFCT_SCSI_WQ_STEERING_SHIFT)
+
+#define EFCT_SCSI_WQ_CLASS_SHIFT		(20)
+#define EFCT_SCSI_WQ_CLASS_MASK		(0xf << EFCT_SCSI_WQ_CLASS_SHIFT)
+#define EFCT_SCSI_WQ_CLASS(x)		(((x) << EFCT_SCSI_WQ_CLASS_SHIFT) & \
+						EFCT_SCSI_WQ_CLASS_MASK)
+
+#define EFCT_SCSI_WQ_CLASS_LOW_LATENCY	(1)
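+
+/*
+ * Example (sketch): the flags passed to efct_scsi_send_rd_data() and
+ * friends combine data-phase bits with WQ steering/class, e.g.
+ *
+ *	u32 flags = EFCT_SCSI_LAST_DATAPHASE | EFCT_SCSI_WQ_STEERING_CPU;
+ *
+ * The dispatch code recovers the steering value with
+ * (flags & EFCT_SCSI_WQ_STEERING_MASK) >> EFCT_SCSI_WQ_STEERING_SHIFT.
+ */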
+
+/*!
+ * @defgroup scsi_api_base SCSI Base Target/Initiator
+ * @defgroup scsi_api_target SCSI Target
+ * @defgroup scsi_api_initiator SCSI Initiator
+ */
+
+/**
+ * @brief SCSI command response.
+ *
+ * This structure is used by target-servers to specify SCSI status and
+ * sense data.  In this case all but the @b residual element are used. For
+ * initiator-clients, this structure is used by the SCSI API to convey the
+ * response data for issued commands, including the residual element.
+ */
+struct efct_scsi_cmd_resp_s {
+	u8 scsi_status;			/**< SCSI status */
+	u16 scsi_status_qualifier;		/**< SCSI status qualifier */
+	/**< pointer to response data buffer */
+	u8 *response_data;
+	/**< length of response data buffer (bytes) */
+	u32 response_data_length;
+	u8 *sense_data;		/**< pointer to sense data buffer */
+	/**< length of sense data buffer (bytes) */
+	u32 sense_data_length;
+	/* command residual (not used for target), positive value
+	 * indicates an underflow, negative value indicates overflow
+	 */
+	int residual;
+	/**< Command response length received in wcqe */
+	u32 response_wire_length;
+};
+
+struct efct_vport_s {
+	struct efct_s *efct;
+	bool is_vport;
+	struct fc_host_statistics fc_host_stats;
+	struct Scsi_Host *shost;
+	struct fc_vport *fc_vport;
+	u64 npiv_wwpn;
+	u64 npiv_wwnn;
+
+};
+
+/* Status values returned by IO callbacks */
+enum efct_scsi_io_status_e {
+	EFCT_SCSI_STATUS_GOOD = 0,
+	EFCT_SCSI_STATUS_ABORTED,
+	EFCT_SCSI_STATUS_ERROR,
+	EFCT_SCSI_STATUS_DIF_GUARD_ERR,
+	EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR,
+	EFCT_SCSI_STATUS_PROTOCOL_CRC_ERROR,
+	EFCT_SCSI_STATUS_NO_IO,
+	EFCT_SCSI_STATUS_ABORT_IN_PROGRESS,
+	EFCT_SCSI_STATUS_CHECK_RESPONSE,
+	EFCT_SCSI_STATUS_COMMAND_TIMEOUT,
+	EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED,
+	EFCT_SCSI_STATUS_SHUTDOWN,
+	EFCT_SCSI_STATUS_NEXUS_LOST,
+
+};
+
+struct efct_io_s;
+struct efc_node_s;
+struct efc_domain_s;
+struct efc_sli_port_s;
+
+/* Callback used by send_rd_data(), recv_wr_data(), send_resp() */
+typedef int (*efct_scsi_io_cb_t)(struct efct_io_s *io,
+				    enum efct_scsi_io_status_e status,
+				    u32 flags, void *arg);
+
+/* Callback used by send_rd_io(), send_wr_io() */
+typedef int (*efct_scsi_rsp_io_cb_t)(struct efct_io_s *io,
+			enum efct_scsi_io_status_e status,
+			struct efct_scsi_cmd_resp_s *rsp,
+			u32 flags, void *arg);
+
+/* efct_scsi_io_cb_t flags */
+#define EFCT_SCSI_IO_CMPL		BIT(0)	/* IO completed */
+/* IO completed, response sent */
+#define EFCT_SCSI_IO_CMPL_RSP_SENT	BIT(1)
+#define EFCT_SCSI_IO_ABORTED		BIT(2)	/* IO was aborted */
+
+/* efct_scsi_recv_tmf() request values */
+enum efct_scsi_tmf_cmd_e {
+	EFCT_SCSI_TMF_ABORT_TASK = 1,
+	EFCT_SCSI_TMF_QUERY_TASK_SET,
+	EFCT_SCSI_TMF_ABORT_TASK_SET,
+	EFCT_SCSI_TMF_CLEAR_TASK_SET,
+	EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT,
+	EFCT_SCSI_TMF_LOGICAL_UNIT_RESET,
+	EFCT_SCSI_TMF_CLEAR_ACA,
+	EFCT_SCSI_TMF_TARGET_RESET,
+};
+
+/* efct_scsi_send_tmf_resp() response values */
+enum efct_scsi_tmf_resp_e {
+	EFCT_SCSI_TMF_FUNCTION_COMPLETE = 1,
+	EFCT_SCSI_TMF_FUNCTION_SUCCEEDED,
+	EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND,
+	EFCT_SCSI_TMF_FUNCTION_REJECTED,
+	EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER,
+	EFCT_SCSI_TMF_SERVICE_DELIVERY,
+};
+
+/**
+ * @brief property names for efct_scsi_get_property() functions
+ */
+
+enum efct_scsi_property_e {
+	EFCT_SCSI_MAX_SGE,
+	EFCT_SCSI_MAX_SGL,
+	EFCT_SCSI_WWNN,
+	EFCT_SCSI_WWPN,
+	EFCT_SCSI_SERIALNUMBER,
+	EFCT_SCSI_PARTNUMBER,
+	EFCT_SCSI_PORTNUM,
+	EFCT_SCSI_BIOS_VERSION_STRING,
+	EFCT_SCSI_MAX_IOS,
+	EFCT_SCSI_DIF_CAPABLE,
+	EFCT_SCSI_DIF_MULTI_SEPARATE,
+	EFCT_SCSI_MAX_FIRST_BURST,
+	EFCT_SCSI_ENABLE_TASK_SET_FULL,
+};
+
+#define DIF_SIZE		8
+
+/**
+ * @brief T10 DIF operations
+ *
+ *	WARNING: do not reorder or insert to this list without making
+ *	appropriate changes in efct_dif.c
+ */
+enum efct_scsi_dif_oper_e {
+	EFCT_SCSI_DIF_OPER_DISABLED,
+	EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CRC,
+	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_NODIF,
+	EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CHKSUM,
+	EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_NODIF,
+	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CRC,
+	EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CHKSUM,
+	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CHKSUM,
+	EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CRC,
+	EFCT_SCSI_DIF_OPER_IN_RAW_OUT_RAW,
+};
+
+#define EFCT_SCSI_DIF_OPER_PASS_THRU	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CRC
+#define EFCT_SCSI_DIF_OPER_STRIP	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_NODIF
+#define EFCT_SCSI_DIF_OPER_INSERT	EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CRC
+
+/**
+ * @brief T10 DIF block sizes
+ */
+enum efct_scsi_dif_blk_size_e {
+	EFCT_SCSI_DIF_BK_SIZE_512,
+	EFCT_SCSI_DIF_BK_SIZE_1024,
+	EFCT_SCSI_DIF_BK_SIZE_2048,
+	EFCT_SCSI_DIF_BK_SIZE_4096,
+	EFCT_SCSI_DIF_BK_SIZE_520,
+	EFCT_SCSI_DIF_BK_SIZE_4104
+};
+
+/**
+ * @brief generic scatter-gather list structure
+ */
+
+struct efct_scsi_sgl_s {
+	uintptr_t	addr;		/**< physical address */
+	/**< address of DIF segment, zero if DIF is interleaved */
+	uintptr_t	dif_addr;
+	size_t		len;		/**< length */
+};
+
+/**
+ * @brief T10 DIF information passed to the transport
+ */
+
+struct efct_scsi_dif_info_s {
+	enum efct_scsi_dif_oper_e dif_oper;
+	enum efct_scsi_dif_blk_size_e blk_size;
+	u32 ref_tag;
+	bool check_ref_tag;
+	bool check_app_tag;
+	bool check_guard;
+	bool dif_separate;
+
+	/* If the APP TAG is 0xFFFF, disable checking
+	 * the REF TAG and CRC fields
+	 */
+	bool disable_app_ffff;
+
+	/* if the APP TAG is 0xFFFF and REF TAG is 0xFFFF_FFFF,
+	 * disable checking the received CRC field.
+	 */
+	bool disable_app_ref_ffff;
+	u64 lba;
+	u16 app_tag;
+};
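+
+/*
+ * Example (sketch, values are illustrative): a target-server inserting
+ * T10 DIF on 512-byte blocks might describe the operation as
+ *
+ *	struct efct_scsi_dif_info_s dif = {
+ *		.dif_oper = EFCT_SCSI_DIF_OPER_INSERT,
+ *		.blk_size = EFCT_SCSI_DIF_BK_SIZE_512,
+ *		.lba = lba,
+ *		.ref_tag = (u32)lba,
+ *		.check_guard = true,
+ *	};
+ *
+ * and pass &dif as the dif_info argument of efct_scsi_send_rd_data().
+ */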
+
+/* Return values for calls from base driver to
+ * target-server/initiator-client
+ */
+#define EFCT_SCSI_CALL_COMPLETE	0 /* All work is done */
+#define EFCT_SCSI_CALL_ASYNC	1 /* Work will be completed asynchronously */
+
+/* Calls from target/initiator to base driver */
+
+enum efct_scsi_io_role_e {
+	EFCT_SCSI_IO_ROLE_ORIGINATOR,
+	EFCT_SCSI_IO_ROLE_RESPONDER,
+};
+
+void efct_scsi_io_alloc_enable(struct efc_lport *efc, struct efc_node_s *node);
+void efct_scsi_io_alloc_disable(struct efc_lport *efc, struct efc_node_s *node);
+extern struct efct_io_s *
+efct_scsi_io_alloc(struct efc_node_s *node, enum efct_scsi_io_role_e);
+void efct_scsi_io_free(struct efct_io_s *io);
+struct efct_io_s *efct_io_get_instance(struct efct_s *efct, u32 index);
+
+/* Calls from base driver to target-server */
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+int efct_scsi_tgt_io_init(struct efct_io_s *io);
+int efct_scsi_tgt_io_exit(struct efct_io_s *io);
+int efct_scsi_tgt_new_device(struct efct_s *efct);
+int efct_scsi_tgt_del_device(struct efct_s *efct);
+int
+efct_scsi_tgt_new_domain(struct efc_lport *efc, struct efc_domain_s *domain);
+void
+efct_scsi_tgt_del_domain(struct efc_lport *efc, struct efc_domain_s *domain);
+int
+efct_scsi_tgt_new_sport(struct efc_lport *efc, struct efc_sli_port_s *sport);
+void
+efct_scsi_tgt_del_sport(struct efc_lport *efc, struct efc_sli_port_s *sport);
+int
+efct_scsi_validate_initiator(struct efc_lport *efc, struct efc_node_s *node);
+int
+efct_scsi_new_initiator(struct efc_lport *efc, struct efc_node_s *node);
+
+enum efct_scsi_del_initiator_reason_e {
+	EFCT_SCSI_INITIATOR_DELETED,
+	EFCT_SCSI_INITIATOR_MISSING,
+};
+
+extern int
+efct_scsi_del_initiator(struct efc_lport *efc, struct efc_node_s *node,
+			int reason);
+extern int
+efct_scsi_recv_cmd(struct efct_io_s *io, uint64_t lun, u8 *cdb,
+		   u32 cdb_len, u32 flags);
+extern int
+efct_scsi_recv_cmd_first_burst(struct efct_io_s *io, uint64_t lun,
+			       u8 *cdb, u32 cdb_len, u32 flags,
+	struct efc_dma_s first_burst_buffers[], u32 first_burst_bytes);
+extern int
+efct_scsi_recv_tmf(struct efct_io_s *tmfio, u32 lun,
+		   enum efct_scsi_tmf_cmd_e cmd, struct efct_io_s *abortio,
+		  u32 flags);
+extern struct efc_sli_port_s *
+efct_sport_get_instance(struct efc_domain_s *domain, u32 index);
+extern struct efc_domain_s *
+efct_domain_get_instance(struct efct_s *efct, u32 index);
+
+/* Calls from target-server to base driver */
+
+extern int
+efct_scsi_send_rd_data(struct efct_io_s *io, u32 flags,
+		       struct efct_scsi_dif_info_s *dif_info,
+		      struct efct_scsi_sgl_s *sgl, u32 sgl_count,
+		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_recv_wr_data(struct efct_io_s *io, u32 flags,
+		       struct efct_scsi_dif_info_s *dif_info,
+		      struct efct_scsi_sgl_s *sgl, u32 sgl_count,
+		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_send_resp(struct efct_io_s *io, u32 flags,
+		    struct efct_scsi_cmd_resp_s *rsp, efct_scsi_io_cb_t cb,
+		   void *arg);
+extern int
+efct_scsi_send_tmf_resp(struct efct_io_s *io,
+			enum efct_scsi_tmf_resp_e rspcode,
+		       u8 addl_rsp_info[3],
+		       efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_tgt_abort_io(struct efct_io_s *io, efct_scsi_io_cb_t cb, void *arg);
+
+void efct_scsi_io_complete(struct efct_io_s *io);
+
+extern u32
+efct_scsi_get_property(struct efct_s *efct, enum efct_scsi_property_e prop);
+
+//extern void efct_scsi_del_initiator_complete(struct efc_node_s *node);
+
+extern void
+efct_scsi_update_first_burst_transferred(struct efct_io_s *io,
+					 u32 transferred);
+
+/* Calls from base driver to initiator-client */
+
+int efct_scsi_ini_driver_init(void);
+int efct_scsi_ini_driver_exit(void);
+int efct_scsi_reg_fc_transport(void);
+int efct_scsi_release_fc_transport(void);
+int efct_scsi_ini_io_init(struct efct_io_s *io);
+int efct_scsi_ini_io_exit(struct efct_io_s *io);
+int efct_scsi_new_device(struct efct_s *efct);
+int efct_scsi_del_device(struct efct_s *efct);
+int
+efct_scsi_ini_new_domain(struct efc_lport *efc, struct efc_domain_s *domain);
+void
+efct_scsi_ini_del_domain(struct efc_lport *efc, struct efc_domain_s *domain);
+int
+efct_scsi_ini_new_sport(struct efc_lport *efc, struct efc_sli_port_s *sport);
+void
+efct_scsi_ini_del_sport(struct efc_lport *efc, struct efc_sli_port_s *sport);
+int
+efct_scsi_new_target(struct efc_lport *efc, struct efc_node_s *node);
+void _efct_scsi_io_free(struct kref *arg);
+
+enum efct_scsi_del_target_reason_e {
+	EFCT_SCSI_TARGET_DELETED,
+	EFCT_SCSI_TARGET_MISSING,
+};
+
+int efct_scsi_del_target(struct efc_lport *efc,
+			 struct efc_node_s *node, int reason);
+
+int efct_scsi_send_tmf(struct efc_node_s *node,
+		       struct efct_io_s *io,
+		       struct efct_io_s *io_to_abort, u32 lun,
+		       enum efct_scsi_tmf_cmd_e tmf,
+		       struct efct_scsi_sgl_s *sgl,
+		       u32 sgl_count, u32 len,
+		       efct_scsi_rsp_io_cb_t cb, void *arg);
+
+struct efct_scsi_vaddr_len_s {
+	void *vaddr;
+	u32 length;
+};
+
+extern int
+efct_scsi_get_block_vaddr(struct efct_io_s *io, uint64_t blocknumber,
+			  struct efct_scsi_vaddr_len_s addrlen[],
+			  u32 max_addrlen, void **dif_vaddr);
+extern int
+efct_scsi_del_vport(struct efct_s *efct, struct Scsi_Host *shost);
+
+extern struct efct_vport_s *
+efct_scsi_new_vport(struct efct_s *efct, struct device *dev);
+
+/* Calls within base driver */
+int efct_scsi_io_dispatch(struct efct_io_s *io, void *cb);
+int efct_scsi_io_dispatch_abort(struct efct_io_s *io, void *cb);
+void efct_scsi_check_pending(struct efct_s *efct);
+
+#endif /* __EFCT_SCSI_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 24/32] elx: efct: LIO backend interface routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (22 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 23/32] elx: efct: SCSI IO handling routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-24 22:27   ` Bart Van Assche
  2019-10-23 21:55 ` [PATCH 25/32] elx: efct: Hardware IO submission routines James Smart
                   ` (8 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
LIO backend template registration and template functions.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_lio.c | 2643 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_lio.h |  371 ++++++
 2 files changed, 3014 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h

diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
new file mode 100644
index 000000000000..c2661ab3e9c3
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.c
@@ -0,0 +1,2643 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_tcq.h>
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+
+#include "efct_lio.h"
+
+#define	FABRIC_NAME		"efct"
+#define FABRIC_NAME_NPIV	"efct_npiv"
+
+#define	FABRIC_SNPRINTF_LEN	32
+#define	FABRIC_SNPRINTF(str, len, pre, wwn)	snprintf(str, len, \
+		"%s%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x", pre,  \
+	    (u8)((wwn >> 56) & 0xff), (u8)((wwn >> 48) & 0xff),    \
+	    (u8)((wwn >> 40) & 0xff), (u8)((wwn >> 32) & 0xff),    \
+	    (u8)((wwn >> 24) & 0xff), (u8)((wwn >> 16) & 0xff),    \
+	    (u8)((wwn >>  8) & 0xff), (u8)((wwn & 0xff)))
+
+#define	ARRAY2WWN(w, a)	(w = ((((u64)(a)[0]) << 56) | (((u64)(a)[1]) << 48) | \
+			    (((u64)(a)[2]) << 40) | (((u64)(a)[3]) << 32) | \
+			    (((u64)(a)[4]) << 24) | (((u64)(a)[5]) << 16) | \
+			    (((u64)(a)[6]) <<  8) | (((u64)(a)[7]))))
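+
+/*
+ * Example: for wwn 0x20000090fa000001 and pre "", FABRIC_SNPRINTF() yields
+ * "20:00:00:90:fa:00:00:01"; ARRAY2WWN() performs the reverse conversion
+ * from an 8-byte big-endian array to a u64.
+ */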
+
+struct efct_lio_sport {
+	u64 wwpn;
+	unsigned char wwpn_str[FABRIC_SNPRINTF_LEN];
+	struct se_wwn sport_wwn;
+	struct efct_lio_tpg *tpg;
+	struct efct_s *efct;
+	struct dentry *sessions;
+	atomic_t enable;
+};
+
+struct efct_lio_tpg_attrib {
+	int generate_node_acls;
+	int cache_dynamic_acls;
+	int demo_mode_write_protect;
+	int prod_mode_write_protect;
+	int demo_mode_login_only;
+	bool session_deletion_wait;
+};
+
+struct efct_lio_tpg {
+	struct se_portal_group tpg;
+	struct efct_lio_sport *sport;
+	struct efct_lio_vport *vport;
+	struct efct_lio_tpg_attrib tpg_attrib;
+	unsigned short tpgt;
+	atomic_t enabled;
+};
+
+struct efct_lio_nacl {
+	u64			nport_wwnn;
+	char			nport_name[FABRIC_SNPRINTF_LEN];
+	struct se_session	*session;
+	struct se_node_acl	se_node_acl;
+};
+
+/* Per-target data for virtual targets */
+struct efct_lio_vport_data_t {
+	struct list_head list_entry;
+	bool initiator_mode;
+	bool target_mode;
+	u64 phy_wwpn;
+	u64 phy_wwnn;
+	u64 vport_wwpn;
+	u64 vport_wwnn;
+	struct efct_lio_vport *lio_vport;
+};
+
+/* Per-target data for virtual targets */
+struct efct_lio_vport_list_t {
+	struct list_head list_entry;
+	struct efct_lio_vport *lio_vport;
+};
+
+/* local prototypes */
+static char *efct_lio_get_npiv_fabric_wwn(struct se_portal_group *);
+static char *efct_lio_get_fabric_wwn(struct se_portal_group *);
+static u16 efct_lio_get_tag(struct se_portal_group *);
+static u16 efct_lio_get_npiv_tag(struct se_portal_group *);
+static int efct_lio_check_demo_mode(struct se_portal_group *);
+static int efct_lio_check_demo_mode_cache(struct se_portal_group *);
+static int efct_lio_check_demo_write_protect(struct se_portal_group *);
+static int efct_lio_check_prod_write_protect(struct se_portal_group *);
+static int efct_lio_npiv_check_demo_write_protect(struct se_portal_group *);
+static int efct_lio_npiv_check_prod_write_protect(struct se_portal_group *);
+static u32 efct_lio_tpg_get_inst_index(struct se_portal_group *);
+static int efct_lio_check_stop_free(struct se_cmd *se_cmd);
+static void efct_lio_aborted_task(struct se_cmd *se_cmd);
+static void efct_lio_release_cmd(struct se_cmd *);
+static void efct_lio_close_session(struct se_session *);
+static u32 efct_lio_sess_get_index(struct se_session *);
+static int efct_lio_write_pending(struct se_cmd *);
+static void efct_lio_set_default_node_attrs(struct se_node_acl *);
+static int efct_lio_get_cmd_state(struct se_cmd *);
+static int efct_lio_queue_data_in(struct se_cmd *);
+static int efct_lio_queue_status(struct se_cmd *);
+static void efct_lio_queue_tm_rsp(struct se_cmd *);
+static struct se_wwn *efct_lio_make_sport(struct target_fabric_configfs *,
+					  struct config_group *, const char *);
+static void efct_lio_drop_sport(struct se_wwn *);
+static void efct_lio_npiv_drop_sport(struct se_wwn *);
+static int efct_lio_parse_wwn(const char *, u64 *, u8 npiv);
+static int efct_lio_parse_npiv_wwn(const char *name, size_t size,
+				   u64 *wwpn, u64 *wwnn);
+static struct se_portal_group *efct_lio_make_tpg(struct se_wwn *,
+						 const char *);
+static struct se_portal_group *efct_lio_npiv_make_tpg(struct se_wwn *,
+						      const char *);
+static void efct_lio_drop_tpg(struct se_portal_group *);
+static struct se_wwn *efct_lio_npiv_make_sport(struct target_fabric_configfs *,
+					       struct config_group *,
+					       const char *);
+static int
+efct_lio_parse_npiv_wwn(const char *name, size_t size, u64 *wwpn, u64 *wwnn);
+static void efct_lio_npiv_drop_tpg(struct se_portal_group *);
+static int efct_lio_async_worker(struct efct_s *efct);
+static void efct_lio_sg_unmap(struct efct_io_s *io);
+static int efct_lio_abort_tgt_cb(struct efct_io_s *io,
+				 enum efct_scsi_io_status_e scsi_status,
+				    u32 flags, void *arg);
+
+static int efct_lio_init_nodeacl(struct se_node_acl *, const char *);
+
+static int efct_lio_check_demo_mode_login_only(struct se_portal_group *);
+static int efct_lio_npiv_check_demo_mode_login_only(struct se_portal_group *);
+
+/* Start items for efct_lio_tpg_attrib_cit */
+
+#define DEF_EFCT_TPG_ATTRIB(name)					  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_show(			  \
+		struct config_item *item, char *page)			  \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+			struct efct_lio_tpg, tpg);			  \
+									  \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		  \
+}									  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_store(			  \
+		struct config_item *item, const char *page, size_t count) \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+					struct efct_lio_tpg, tpg);	  \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		  \
+	unsigned long val;						  \
+	int ret;							  \
+									  \
+	ret = kstrtoul(page, 0, &val);					  \
+	if (ret < 0) {							  \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	  \
+		return -EINVAL;						  \
+	}								  \
+									  \
+	if (val != 0 && val != 1) {					  \
+		pr_err("Illegal boolean value %lu\n", val);		  \
+		return -EINVAL;						  \
+	}								  \
+									  \
+	a->name = val;							  \
+									  \
+	return count;							  \
+}									  \
+CONFIGFS_ATTR(efct_lio_tpg_attrib_, name)
+
+DEF_EFCT_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_tpg_attrib_attrs[] = {
+	&efct_lio_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
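+
+/*
+ * These attributes typically surface through target_core configfs as
+ * files under the TPG attrib group, e.g. (path is illustrative):
+ *
+ *	/sys/kernel/config/target/efct/<wwpn>/tpgt_<n>/attrib/generate_node_acls
+ */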
+
+#define DEF_EFCT_NPIV_TPG_ATTRIB(name)					   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_show(			   \
+		struct config_item *item, char *page)			   \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+									   \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		   \
+}									   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_store(			   \
+		struct config_item *item, const char *page, size_t count)  \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		   \
+	unsigned long val;						   \
+	int ret;							   \
+									   \
+	ret = kstrtoul(page, 0, &val);					   \
+	if (ret < 0) {							   \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	   \
+		return -EINVAL;						   \
+	}								   \
+									   \
+	if (val != 0 && val != 1) {					   \
+		pr_err("Illegal boolean value %lu\n", val);		   \
+		return -EINVAL;						   \
+	}								   \
+									   \
+	a->name = val;							   \
+									   \
+	return count;							   \
+}									   \
+CONFIGFS_ATTR(efct_lio_npiv_tpg_attrib_, name)
+
+DEF_EFCT_NPIV_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_NPIV_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_npiv_tpg_attrib_attrs[] = {
+	&efct_lio_npiv_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_npiv_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_npiv_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
+
+static ssize_t
+efct_lio_wwn_version_show(struct config_item *item, char *page)
+{
+	return sprintf(page, "Emulex EFCT fabric module version %s\n",
+		       __stringify(EFCT_LIO_VERSION));
+}
+
+CONFIGFS_ATTR_RO(efct_lio_wwn_, version);
+static struct configfs_attribute *efct_lio_wwn_attrs[] = {
+			&efct_lio_wwn_attr_version, NULL };
+
+static ssize_t
+efct_lio_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
+static ssize_t
+efct_lio_tpg_enable_store(struct config_item *item, const char *page,
+			  size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+	unsigned long op;
+	int ret;
+	struct efct_s *efct;
+	struct efc_lport *efc;
+
+	if (!tpg->sport || !tpg->sport->efct) {
+		pr_err("%s: Unable to find EFCT device\n", __func__);
+		return -EINVAL;
+	}
+
+	efct = tpg->sport->efct;
+	efc = efct->efcport;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+	if (op == 1) {
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		ret = efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE);
+		if (ret) {
+			efct->tgt_efct.lio_sport = NULL;
+			efc_log_test(efct, "cannot bring port online\n");
+			return ret;
+		}
+
+	} else if (op == 0) {
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain && efc->domain->sport)
+			efct_scsi_tgt_del_sport(efc, efc->domain->sport);
+
+		atomic_set(&tpg->enabled, 0);
+	} else {
+		return -EINVAL;
+	}
+	return count;
+}
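+
+/*
+ * A TPG is typically brought online from user space by writing to its
+ * enable attribute, e.g. (path is illustrative):
+ *
+ *	echo 1 > /sys/kernel/config/target/efct/<wwpn>/tpgt_<n>/enable
+ */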
+
+static ssize_t
+efct_lio_npiv_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
+static ssize_t
+efct_lio_npiv_tpg_enable_store(struct config_item *item, const char *page,
+			       size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+	unsigned long op;
+	struct efct_lio_vport *lio_vport = tpg->vport;
+	struct efct_lio_vport_data_t *vport_data;
+	int ret = -1;
+	struct efct_s *efct;
+	struct efc_lport *efc;
+	unsigned long flags = 0;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+
+	if (!lio_vport) {
+		pr_err("Unable to find vport\n");
+		return -EINVAL;
+	}
+	efct = lio_vport->efct;
+	efc = efct->efcport;
+
+	if (op == 1) {
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain) {
+			ret = efc_sport_vport_new(efc->domain,
+						  lio_vport->npiv_wwpn,
+						  lio_vport->npiv_wwnn,
+						  U32_MAX, false, true,
+						  NULL, NULL, true);
+			if (ret != 0) {
+				efc_log_err(efct, "Failed to create Vport\n");
+				return ret;
+			}
+			return count;
+		}
+
+		vport_data = kmalloc(sizeof(*vport_data), GFP_KERNEL);
+		if (!vport_data)
+			return ret;
+
+		memset(vport_data, 0, sizeof(struct efct_lio_vport_data_t));
+		vport_data->phy_wwpn            = lio_vport->wwpn;
+		vport_data->vport_wwpn          = lio_vport->npiv_wwpn;
+		vport_data->vport_wwnn          = lio_vport->npiv_wwnn;
+		vport_data->target_mode         = 1;
+		vport_data->initiator_mode      = 0;
+		vport_data->lio_vport           = lio_vport;
+
+		/* There is no domain.  Add to pending list. When the
+		 * domain is created, the driver will create the vport.
+		 */
+		efc_log_test(efct, "link down, move to pending\n");
+		spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+		INIT_LIST_HEAD(&vport_data->list_entry);
+		list_add_tail(&vport_data->list_entry,
+			      &efct->tgt_efct.vport_pending_enable_list);
+		spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock,
+				       flags);
+
+	} else if (op == 0) {
+		struct efct_lio_vport_data_t *virt_target_data, *next;
+
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		atomic_set(&tpg->enabled, 0);
+		/* only physical sport should exist, free lio_sport
+		 * allocated in efct_lio_make_sport
+		 */
+		if (efc->domain) {
+			efc_sport_vport_del(efct->efcport, efc->domain,
+					    lio_vport->npiv_wwpn,
+					    lio_vport->npiv_wwnn);
+		} else {
+			spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+			list_for_each_entry_safe(virt_target_data, next,
+				&efct->tgt_efct.vport_pending_enable_list,
+				list_entry) {
+				if (virt_target_data->lio_vport == lio_vport) {
+					list_del(&virt_target_data->list_entry);
+					kfree(virt_target_data);
+					break;
+				}
+			}
+			spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock,
+					       flags);
+		}
+	} else {
+		return -EINVAL;
+	}
+	return count;
+}
+
+CONFIGFS_ATTR(efct_lio_tpg_, enable);
+static struct configfs_attribute *efct_lio_tpg_attrs[] = {
+				&efct_lio_tpg_attr_enable, NULL };
+CONFIGFS_ATTR(efct_lio_npiv_tpg_, enable);
+static struct configfs_attribute *efct_lio_npiv_tpg_attrs[] = {
+				&efct_lio_npiv_tpg_attr_enable, NULL };
+
+static struct efct_lio_tpg *
+efct_get_vport_tpg(struct efc_node_s *node)
+{
+	struct efct_s *efct;
+	u64 wwpn = node->sport->wwpn;
+	struct efct_lio_vport_list_t *vport, *next;
+	struct efct_lio_vport *lio_vport = NULL;
+	struct efct_lio_tpg *tpg = NULL;
+	unsigned long flags = 0;
+
+	efct = node->efc->base;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	list_for_each_entry_safe(vport, next, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		lio_vport = vport->lio_vport;
+		if (wwpn && lio_vport && lio_vport->npiv_wwpn == wwpn) {
+			efc_log_test(efct, "found tpg on vport\n");
+			tpg = lio_vport->tpg;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+	return tpg;
+}
+
+/* local static data */
+static const struct target_core_fabric_ops efct_lio_ops = {
+	.fabric_name			= FABRIC_NAME,
+	.module				= THIS_MODULE,
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect = efct_lio_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect = efct_lio_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_make_sport,
+	.fabric_drop_wwn		= efct_lio_drop_sport,
+	.fabric_make_tpg		= efct_lio_make_tpg,
+	.fabric_drop_tpg		= efct_lio_drop_tpg,
+	.tpg_check_demo_mode_login_only = efct_lio_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_wwn_attrs			= efct_lio_wwn_attrs,
+	.tfc_tpg_base_attrs		= efct_lio_tpg_attrs,
+	.tfc_tpg_attrib_attrs           = efct_lio_tpg_attrib_attrs,
+};
+
+/* local static data */
+static const struct target_core_fabric_ops efct_lio_npiv_ops = {
+	.fabric_name			= FABRIC_NAME_NPIV,
+	.module				= THIS_MODULE,
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_npiv_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_npiv_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect =
+					efct_lio_npiv_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect =
+					efct_lio_npiv_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_npiv_make_sport,
+	.fabric_drop_wwn		= efct_lio_npiv_drop_sport,
+	.fabric_make_tpg		= efct_lio_npiv_make_tpg,
+	.fabric_drop_tpg		= efct_lio_npiv_drop_tpg,
+	.tpg_check_demo_mode_login_only =
+				efct_lio_npiv_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_wwn_attrs			= efct_lio_wwn_attrs,
+	.tfc_tpg_base_attrs		= efct_lio_npiv_tpg_attrs,
+	.tfc_tpg_attrib_attrs		= efct_lio_npiv_tpg_attrib_attrs,
+};
+
+static struct target_fabric_configfs *fabric;
+static struct target_fabric_configfs *npiv_fabric;
+
+#define LIO_IOFMT "[%04x][i:%0*x t:%0*x h:%04x][c:%02x]"
+#define LIO_TMFIOFMT "[%04x][i:%0*x t:%0*x h:%04x][f:%02x]"
+#define LIO_IOFMT_ITT_SIZE(efct)	4
+
+#define efct_lio_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, "[%s]" LIO_IOFMT " " fmt,	\
+	io->node->display_name, io->instance_index,		\
+	LIO_IOFMT_ITT_SIZE(io->efct), io->init_task_tag,		\
+	LIO_IOFMT_ITT_SIZE(io->efct), io->tgt_task_tag, io->hw_tag,\
+	(io->tgt_io.cdb ? io->tgt_io.cdb[0] : 0xFF), ##__VA_ARGS__)
+#define efct_lio_tmfio_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, "[%s]" LIO_TMFIOFMT " " fmt,\
+	io->node->display_name, io->instance_index,		\
+	LIO_IOFMT_ITT_SIZE(io->efct), io->init_task_tag,		\
+	LIO_IOFMT_ITT_SIZE(io->efct), io->tgt_task_tag, io->hw_tag,\
+	io->tgt_io.tmf,  ##__VA_ARGS__)
+
+#define efct_lio_io_trace(io, fmt, ...)					\
+	do {								\
+		if (EFCT_LOG_ENABLE_LIO_IO_TRACE(io->efct))		\
+			efct_lio_io_printf(io, fmt, ##__VA_ARGS__);	\
+	} while (0)
+
+#define api_trace(efct)							\
+	do {								\
+		if (EFCT_LOG_ENABLE_LIO_TRACE(efct))			\
+			efc_log_debug(efct, "*****\n");		\
+	} while (0)
+
+#define efct_lio_io_state_trace(io, value) (io->tgt_io.state |= value)
+
+/* Check whether the node has a valid initiator NPORT ID */
+static bool efct_lio_node_is_initiator(struct efc_node_s *node)
+{
+	if (!node)
+		return false;
+
+	if (node->rnode.fc_id && node->rnode.fc_id != FC_FID_FLOGI &&
+	    node->rnode.fc_id != FC_FID_DIR_SERV &&
+	    node->rnode.fc_id != FC_FID_FCTRL)
+		return true;
+
+	return false;
+}
+
+static int  efct_lio_tgt_session_data(struct efct_s *efct, u64 wwpn,
+				      char *buf, int size)
+{
+	struct efc_sli_port_s *sport = NULL;
+	struct efc_node_s *node = NULL;
+	struct efc_lport *efc = efct->efcport;
+	u16 loop_id = 0;
+	int off = 0, rc = 0;
+
+	if (!efc->domain) {
+		efc_log_err(efct, "failed to find efct/domain\n");
+		return -1;
+	}
+
+	list_for_each_entry(sport, &efc->domain->sport_list, list_entry) {
+		if (sport->wwpn == wwpn) {
+			list_for_each_entry(node, &sport->node_list,
+					    list_entry) {
+				/* Dump only remote NPORT sessions */
+				if (efct_lio_node_is_initiator(node)) {
+					rc = snprintf(buf + off,
+						      size - off,
+						"0x%016llx,0x%08x,0x%04x\n",
+						be64_to_cpup((__force __be64 *)
+								node->wwpn),
+						node->rnode.fc_id,
+						loop_id);
+					if (rc < 0)
+						break;
+					off += rc;
+				}
+			}
+		}
+	}
+
+	buf[size - 1] = '\0';
+	return 0;
+}
+
+static int efct_debugfs_session_open(struct inode *inode, struct file *filp)
+{
+	struct efct_lio_sport *sport = inode->i_private;
+	int size = 17 * PAGE_SIZE; /* > 34 bytes per session * 2048 sessions */
+
+	if (!(filp->f_mode & FMODE_READ)) {
+		filp->private_data = sport;
+		return 0;
+	}
+
+	filp->private_data = kzalloc(size, GFP_KERNEL);
+	if (!filp->private_data)
+		return -ENOMEM;
+
+	efct_lio_tgt_session_data(sport->efct, sport->wwpn, filp->private_data,
+				  size);
+	return 0;
+}
+
+static int efct_debugfs_session_close(struct inode *inode, struct file *filp)
+{
+	if (filp->f_mode & FMODE_READ)
+		kfree(filp->private_data);
+
+	return 0;
+}
+
+static ssize_t efct_debugfs_session_read(struct file *filp, char __user *buf,
+					 size_t count, loff_t *ppos)
+{
+	if (!(filp->f_mode & FMODE_READ))
+		return -EPERM;
+	return simple_read_from_buffer(buf, count, ppos, filp->private_data,
+				       strlen(filp->private_data));
+}
+
+static int efct_npiv_debugfs_session_open(struct inode *inode,
+					  struct file *filp)
+{
+	struct efct_lio_vport *sport = inode->i_private;
+	int size = 17 * PAGE_SIZE; /* > 34 bytes per session * 2048 sessions */
+
+	if (!(filp->f_mode & FMODE_READ)) {
+		filp->private_data = sport;
+		return 0;
+	}
+
+	filp->private_data = kzalloc(size, GFP_KERNEL);
+	if (!filp->private_data)
+		return -ENOMEM;
+
+	efct_lio_tgt_session_data(sport->efct, sport->npiv_wwpn,
+				  filp->private_data, size);
+	return 0;
+}
+
+static ssize_t efct_debugfs_session_write(struct file *filp,
+					  const char __user *buf,
+		size_t count, loff_t *ppos)
+{
+	return 0;
+}
+
+static const struct file_operations efct_debugfs_session_fops = {
+	.owner		= THIS_MODULE,
+	.open		= efct_debugfs_session_open,
+	.release	= efct_debugfs_session_close,
+	.read		= efct_debugfs_session_read,
+	.write		= efct_debugfs_session_write,
+	.llseek		= default_llseek,
+};
+
+static const struct file_operations efct_npiv_debugfs_session_fops = {
+	.owner		= THIS_MODULE,
+	.open		= efct_npiv_debugfs_session_open,
+	.release	= efct_debugfs_session_close,
+	.read		= efct_debugfs_session_read,
+	.write		= efct_debugfs_session_write,
+	.llseek		= default_llseek,
+};
+
+static char *efct_lio_get_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->sport->wwpn_str;
+}
+
+static char *efct_lio_get_npiv_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->vport->wwpn_str;
+}
+
+static u16 efct_lio_get_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static u16 efct_lio_get_npiv_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static int efct_lio_check_demo_mode(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_mode_cache(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int efct_lio_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static u32 efct_lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
+{
+	return 0;
+}
+
+/* This function is called by LIO so the fabric driver can "ACK"
+ * when LIO performs transport_cmd_check_stop(). This is done to
+ * avoid a race between 1. the call to transport_generic_free_cmd()
+ * (after posting of a response -- .queue_status()) and 2. the
+ * accounting/cleanup of the se_cmd in transport_cmd_check_stop()
+ * itself.
+ * See TARGET_SCF_ACK_KREF for more details.
+ */
+static int efct_lio_check_stop_free(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io_s *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io_s,
+						     cmd);
+	struct efct_io_s *io = container_of(ocp, struct efct_io_s, tgt_io);
+
+	efct_lio_io_trace(io, "%s\n", __func__);
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TFO_CHK_STOP_FREE);
+	return target_put_sess_cmd(se_cmd);
+}
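+
+/*
+ * Note on command reference counting (descriptive only, based on the LIO
+ * calls used in this file): commands are submitted with TARGET_SCF_ACK_KREF,
+ * which takes an extra kref on the se_cmd. efct_lio_check_stop_free() above
+ * drops that kref via target_put_sess_cmd(), and when the se_cmd reference
+ * count reaches zero LIO calls .release_cmd (efct_lio_release_cmd) to
+ * complete and free the backing efct IO.
+ */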
+
+/* command has been aborted, cleanup here */
+static void efct_lio_aborted_task(struct se_cmd *se_cmd)
+{
+	int rc;
+	struct efct_scsi_tgt_io_s *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io_s,
+						     cmd);
+	struct efct_io_s *io = container_of(ocp, struct efct_io_s, tgt_io);
+
+	efct_lio_io_trace(io, "%s\n", __func__);
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TFO_ABORTED_TASK);
+
+	if (!(se_cmd->transport_state & CMD_T_ABORTED) || ocp->rsp_sent)
+		return;
+
+	/*
+	 * Take a reference on the IO so it isn't freed until the abort
+	 * operation is complete.
+	 */
+	if (kref_get_unless_zero(&io->ref) == 0) {
+		/* command no longer active */
+		struct efct_s *efct = io->efct;
+
+		efc_log_test(efct, "command no longer active\n");
+		return;
+	}
+
+	efct_lio_io_printf(io, "CMD_T_ABORTED set, aborting=%d\n",
+			   ocp->aborting);
+	ocp->aborting = true;
+	/* set to non-success so data moves won't continue */
+	ocp->err = EFCT_SCSI_STATUS_ABORTED;
+
+	/* wait until abort is complete; once we return, LIO will call
+	 * queue_tm_rsp() which will send response to TMF
+	 */
+	init_completion(&io->tgt_io.done);
+
+	rc = efct_scsi_tgt_abort_io(io, efct_lio_abort_tgt_cb, NULL);
+	if (rc == 0) {
+		/* wait for abort to complete before returning */
+		rc = wait_for_completion_timeout(&io->tgt_io.done,
+						 usecs_to_jiffies(10000000));
+
+		/* done with reference on aborted IO */
+		kref_put(&io->ref, io->release);
+
+		if (rc) {
+			efct_lio_io_printf(io,
+					   "abort completed successfully\n");
+			/* check if TASK_ABORTED status should be sent
+			 * for this IO
+			 */
+		} else {
+			efct_lio_io_printf(io,
+					   "timed out waiting for abort completion\n");
+		}
+	} else {
+		efct_lio_io_printf(io, "Failed to abort\n");
+	}
+}
+
+/* called when se_cmd's ref count goes to 0 */
+static void efct_lio_release_cmd(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io_s *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io_s,
+						     cmd);
+	struct efct_io_s *io = container_of(ocp, struct efct_io_s, tgt_io);
+	struct efct_s *efct = io->efct;
+
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TFO_RELEASE_CMD);
+	efct_lio_io_trace(io, "%s\n", __func__);
+	efct_scsi_io_complete(io);
+	atomic_sub_return(1, &efct->tgt_efct.ios_in_use);
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_SCSI_CMPL_CMD);
+}
+
+static void efct_lio_close_session(struct se_session *se_sess)
+{
+	struct efc_node_s *node = se_sess->fabric_sess_ptr;
+	struct efct_s *efct = NULL;
+	int rc;
+
+	pr_debug("se_sess=%p node=%p\n", se_sess, node);
+
+	if (node) {
+		efct = node->efc->base;
+		rc = efct_xport_control(efct->xport,
+					EFCT_XPORT_POST_NODE_EVENT, node,
+			EFCT_XPORT_SHUTDOWN, NULL);
+		if (rc != 0) {
+			efc_log_test(efct,
+				      "Failed to shutdown session %p node %p\n",
+				     se_sess, node);
+			return;
+		}
+
+	} else {
+		pr_debug("node is NULL\n");
+	}
+}
+
+static u32 efct_lio_sess_get_index(struct se_session *se_sess)
+{
+	return 0;
+}
+
+static void efct_lio_set_default_node_attrs(struct se_node_acl *nacl)
+{
+}
+
+static int efct_lio_get_cmd_state(struct se_cmd *se_cmd)
+{
+	return 0;
+}
+
+/**
+ * @brief Housekeeping for LIO SG mapping.
+ *
+ * @param io Pointer to IO context.
+ *
+ * @return Returns 0 on success, or -EFAULT if the mapping failed.
+ */
+static int
+efct_lio_sg_map(struct efct_io_s *io)
+{
+	struct efct_scsi_tgt_io_s *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	ocp->seg_map_cnt = pci_map_sg(io->efct->pcidev, cmd->t_data_sg,
+				      cmd->t_data_nents, cmd->data_direction);
+	if (ocp->seg_map_cnt == 0)
+		return -EFAULT;
+	return 0;
+}
+
+/**
+ * @brief Housekeeping for LIO SG unmapping.
+ *
+ * @param io Pointer to IO context.
+ *
+ * @return None.
+ */
+static void
+efct_lio_sg_unmap(struct efct_io_s *io)
+{
+	struct efct_scsi_tgt_io_s *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	efct_lio_io_trace(io, "%s\n", __func__);
+	if (WARN_ON(!ocp->seg_map_cnt || !cmd->t_data_sg))
+		return;
+
+	pci_unmap_sg(io->efct->pcidev, cmd->t_data_sg,
+		     ocp->seg_map_cnt, cmd->data_direction);
+	ocp->seg_map_cnt = 0;
+}
+
+static int
+efct_lio_status_done(struct efct_io_s *io,
+		     enum efct_scsi_io_status_e scsi_status,
+		     u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io_s *ocp = &io->tgt_io;
+
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_SCSI_RSP_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	if (ocp->seg_map_cnt)
+		efct_lio_sg_unmap(io);
+
+	efct_lio_io_trace(io, "status=%d, err=%d flags=0x%x, dir=%d\n",
+			  scsi_status, ocp->err, flags, ocp->ddir);
+
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	return 0;
+}
+
+static int
+efct_lio_datamove_done(struct efct_io_s *io,
+		       enum efct_scsi_io_status_e scsi_status,
+		      u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io_s *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &io->tgt_io.cmd;
+	int rc;
+
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_SCSI_DATA_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	efct_lio_io_trace(io, "seg_map_cnt=%d\n", ocp->seg_map_cnt);
+	if (ocp->seg_map_cnt) {
+		if (ocp->err == EFCT_SCSI_STATUS_GOOD &&
+		    ocp->cur_seg < ocp->seg_cnt) {
+			efct_lio_io_trace(io, "continuing cmd at segm=%d\n",
+					  ocp->cur_seg);
+			if (ocp->ddir == DDIR_FROM_INITIATOR)
+				rc = efct_lio_write_pending(&ocp->cmd);
+			else
+				rc = efct_lio_queue_data_in(&ocp->cmd);
+			if (rc == 0)
+				return 0;
+			ocp->err = EFCT_SCSI_STATUS_ERROR;
+			efct_lio_io_printf(io, "could not continue command\n");
+		}
+		efct_lio_sg_unmap(io);
+	}
+
+	if (io->tgt_io.aborting) {
+		/* If this command is in the process of being aborted,
+		 * free here; I/O will be completed when abort is complete
+		 * (kref taken for abort)
+		 */
+		efct_lio_io_printf(io, "IO done aborted\n");
+		return 0;
+	}
+
+	if (ocp->ddir == DDIR_FROM_INITIATOR) {
+		efct_lio_io_trace(io, "Write done, trans_state=0x%x\n",
+				  io->tgt_io.cmd.transport_state);
+		if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+			transport_generic_request_failure(&io->tgt_io.cmd,
+					TCM_CHECK_CONDITION_ABORT_CMD);
+			efct_lio_io_state_trace(io,
+				EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE);
+		} else {
+			efct_lio_io_state_trace(io,
+						EFCT_LIO_STATE_TGT_EXECUTE_CMD);
+			target_execute_cmd(&io->tgt_io.cmd);
+		}
+	} else {
+		if ((flags & EFCT_SCSI_IO_CMPL_RSP_SENT) == 0) {
+			struct efct_scsi_cmd_resp_s rsp;
+			/* send check condition if an error occurred */
+			memset(&rsp, 0, sizeof(rsp));
+			rsp.scsi_status = cmd->scsi_status;
+			rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+			rsp.sense_data_length = cmd->scsi_sense_length;
+
+			/* Check for residual underrun or overrun */
+			if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+				rsp.residual = -cmd->residual_count;
+			else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+				rsp.residual = cmd->residual_count;
+
+			rc = efct_scsi_send_resp(io, 0, &rsp,
+						 efct_lio_status_done, NULL);
+			efct_lio_io_state_trace(io,
+						EFCT_LIO_STATE_SCSI_SEND_RSP);
+			if (rc != 0) {
+				efct_lio_io_printf(io,
+						   "Read done, failed to send rsp, rc=%d\n",
+				      rc);
+				transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+				efct_lio_io_state_trace(io,
+					EFCT_LIO_STATE_TGT_GENERIC_FREE);
+			} else {
+				ocp->rsp_sent = true;
+			}
+		} else {
+			ocp->rsp_sent = true;
+			transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+			efct_lio_io_state_trace(io,
+					EFCT_LIO_STATE_TGT_GENERIC_FREE);
+		}
+	}
+	return 0;
+}
+
+static int
+efct_lio_tmf_done(struct efct_io_s *io, enum efct_scsi_io_status_e scsi_status,
+		  u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(io, "cmd=%p status=%d, flags=0x%x\n",
+			      &io->tgt_io.cmd, scsi_status, flags);
+
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	return 0;
+}
+
+static int
+efct_lio_null_tmf_done(struct efct_io_s *tmfio,
+		       enum efct_scsi_io_status_e scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(tmfio, "cmd=%p status=%d, flags=0x%x\n",
+			      &tmfio->tgt_io.cmd, scsi_status, flags);
+
+	/* free struct efct_io_s only, no active se_cmd */
+	efct_scsi_io_complete(tmfio);
+	return 0;
+}
+
+static int
+efct_lio_write_pending(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io_s *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io_s,
+						     cmd);
+	struct efct_io_s *io = container_of(ocp, struct efct_io_s, tgt_io);
+	struct efct_scsi_sgl_s *sgl = io->sgl;
+	struct scatterlist *sg;
+	u32 flags = 0, cnt, curcnt;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TFO_WRITE_PENDING);
+	efct_lio_io_trace(io, "trans_state=0x%x se_cmd_flags=0x%x\n",
+			  cmd->transport_state, cmd->se_cmd_flags);
+
+	if (ocp->seg_cnt == 0) {
+		ocp->seg_cnt = cmd->t_data_nents;
+		ocp->cur_seg = 0;
+		if (efct_lio_sg_map(io)) {
+			efct_lio_io_printf(io, "efct_lio_sg_map failed\n");
+			return -EFAULT;
+		}
+	}
+	curcnt = (ocp->seg_map_cnt - ocp->cur_seg);
+	curcnt = (curcnt < io->sgl_allocated) ? curcnt : io->sgl_allocated;
+	/* find current sg */
+	for (cnt = 0, sg = cmd->t_data_sg; cnt < ocp->cur_seg; cnt++,
+	     sg = sg_next(sg))
+		;
+
+	for (cnt = 0; cnt < curcnt; cnt++, sg = sg_next(sg)) {
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		sgl[cnt].len = sg_dma_len(sg);
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+	}
+	if (ocp->cur_seg == ocp->seg_cnt)
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+	rc = efct_scsi_recv_wr_data(io, flags, NULL, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	return rc;
+}
+
+static int
+efct_lio_queue_data_in(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io_s *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io_s,
+						     cmd);
+	struct efct_io_s *io = container_of(ocp, struct efct_io_s, tgt_io);
+	struct efct_scsi_sgl_s *sgl = io->sgl;
+	struct scatterlist *sg = NULL;
+	uint flags = 0, cnt = 0, curcnt = 0;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TFO_QUEUE_DATA_IN);
+	efct_lio_io_trace(io, "trans_state=0x%x se_cmd_flags=0x%x\n",
+			  cmd->transport_state, cmd->se_cmd_flags);
+
+	if (ocp->seg_cnt == 0) {
+		if (cmd->data_length) {
+			ocp->seg_cnt = cmd->t_data_nents;
+			ocp->cur_seg = 0;
+			if (efct_lio_sg_map(io)) {
+				efct_lio_io_printf(io,
+						   "efct_lio_sg_map failed\n");
+				return -EAGAIN;
+			}
+		} else {
+			/* If command length is 0, send the response status */
+			struct efct_scsi_cmd_resp_s rsp;
+
+			memset(&rsp, 0, sizeof(rsp));
+			efct_lio_io_printf(io,
+					   "cmd : %p length 0, send status\n",
+					   cmd);
+			return efct_scsi_send_resp(io, 0, &rsp,
+						  efct_lio_status_done, NULL);
+		}
+	}
+	curcnt = min(ocp->seg_map_cnt - ocp->cur_seg, io->sgl_allocated);
+
+	while (cnt < curcnt) {
+		sg = &cmd->t_data_sg[ocp->cur_seg];
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		if (ocp->transferred_len + sg_dma_len(sg) >= cmd->data_length)
+			sgl[cnt].len = cmd->data_length - ocp->transferred_len;
+		else
+			sgl[cnt].len = sg_dma_len(sg);
+
+		ocp->transferred_len += sgl[cnt].len;
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+		cnt++;
+		if (ocp->transferred_len == cmd->data_length)
+			break;
+	}
+
+	if (ocp->transferred_len == cmd->data_length) {
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+		ocp->seg_cnt = ocp->cur_seg; /* reset to only the segs we use */
+	}
+
+	/* If there is residual, disable Auto Good Response */
+	if (cmd->residual_count)
+		flags |= EFCT_SCSI_NO_AUTO_RESPONSE;
+
+	rc = efct_scsi_send_rd_data(io, flags, NULL, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_SCSI_SEND_RD_DATA);
+	return rc;
+}
+
+static int
+efct_lio_abort_tgt_cb(struct efct_io_s *io,
+		      enum efct_scsi_io_status_e scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_io_printf(io, "%s\n", __func__);
+	complete(&io->tgt_io.done);
+	return 0;
+}
+
+static int
+efct_lio_queue_status(struct se_cmd *cmd)
+{
+	struct efct_scsi_cmd_resp_s rsp;
+	struct efct_scsi_tgt_io_s *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io_s,
+						     cmd);
+	struct efct_io_s *io = container_of(ocp, struct efct_io_s, tgt_io);
+	int rc = 0;
+
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TFO_QUEUE_STATUS);
+	efct_lio_io_trace(io,
+			  "status=0x%x trans_state=0x%x se_cmd_flags=0x%x sns_len=%d\n",
+		cmd->scsi_status, cmd->transport_state, cmd->se_cmd_flags,
+		cmd->scsi_sense_length);
+
+	memset(&rsp, 0, sizeof(rsp));
+	rsp.scsi_status = cmd->scsi_status;
+	rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+	rsp.sense_data_length = cmd->scsi_sense_length;
+
+	/* Check for residual underrun or overrun; report overrun as a
+	 * negative residual so it can be recognized in HW
+	 */
+	if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+		rsp.residual = -cmd->residual_count;
+	else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+		rsp.residual = cmd->residual_count;
+
+	rc = efct_scsi_send_resp(io, 0, &rsp, efct_lio_status_done, NULL);
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_SCSI_SEND_RSP);
+	if (rc == 0)
+		ocp->rsp_sent = true;
+	return rc;
+}
+
+static void efct_lio_queue_tm_rsp(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io_s *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io_s,
+						     cmd);
+	struct efct_io_s *tmfio = container_of(ocp, struct efct_io_s, tgt_io);
+	struct se_tmr_req *se_tmr = cmd->se_tmr_req;
+	u8 rspcode;
+
+	efct_lio_tmfio_printf(tmfio, "cmd=%p function=0x%x tmr->response=%d\n",
+			      cmd, se_tmr->function, se_tmr->response);
+	switch (se_tmr->response) {
+	case TMR_FUNCTION_COMPLETE:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_COMPLETE;
+		break;
+	case TMR_TASK_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND;
+		break;
+	case TMR_LUN_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER;
+		break;
+	case TMR_FUNCTION_REJECTED:
+	default:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_REJECTED;
+		break;
+	}
+	efct_scsi_send_tmf_resp(tmfio, rspcode, NULL, efct_lio_tmf_done, NULL);
+}
+
+static struct se_wwn *
+efct_lio_make_sport(struct target_fabric_configfs *tf,
+		    struct config_group *group, const char *name)
+{
+	struct efct_lio_sport *lio_sport;
+	struct efct_s *efct;
+	int efctidx, ret;
+	u64 wwpn;
+	char *sessions_name;
+
+	ret = efct_lio_parse_wwn(name, &wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	/* Now search for the HBA that has this WWPN */
+	for (efctidx = 0; efctidx < MAX_EFCT_DEVICES; efctidx++) {
+		u64 pwwn;
+		u8 pn[8];
+
+		efct = efct_devices[efctidx];
+		if (!efct)
+			continue;
+		memcpy(pn, efct_hw_get_ptr(&efct->hw, EFCT_HW_WWN_PORT),
+		       sizeof(pn));
+		ARRAY2WWN(pwwn, pn);
+		if (pwwn == wwpn)
+			break;
+	}
+	if (efctidx == MAX_EFCT_DEVICES) {
+		pr_err("cannot find EFCT for wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+	efct = efct_devices[efctidx];
+	lio_sport = kzalloc(sizeof(*lio_sport), GFP_KERNEL);
+	if (!lio_sport)
+		return ERR_PTR(-ENOMEM);
+	lio_sport->efct = efct;
+	lio_sport->wwpn = wwpn;
+	FABRIC_SNPRINTF(lio_sport->wwpn_str, sizeof(lio_sport->wwpn_str),
+			"naa.", wwpn);
+	efct->tgt_efct.lio_sport = lio_sport;
+
+	sessions_name = kasprintf(GFP_KERNEL, "efct-sessions-%d",
+				  efct->instance_index);
+	if (sessions_name && efct->sess_debugfs_dir)
+		lio_sport->sessions = debugfs_create_file(sessions_name,
+							  0644,
+						efct->sess_debugfs_dir,
+						lio_sport,
+						&efct_debugfs_session_fops);
+	kfree(sessions_name);
+
+	return &lio_sport->sport_wwn;
+}
+
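+/*
+ * NPIV WWN configfs entries use a name of the form
+ * "<physical wwpn>@<npiv wwpn>:<npiv wwnn>": the physical WWPN is given
+ * colon separated and the NPIV WWPN/WWNN as plain 16-digit hex values.
+ * The name is split at '@' and the pieces are parsed below.
+ */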
+static struct se_wwn *
+efct_lio_npiv_make_sport(struct target_fabric_configfs *tf,
+			 struct config_group *group, const char *name)
+{
+	struct efct_lio_vport *lio_vport;
+	struct efct_s *efct;
+	int efctidx, ret = -1;
+	u64 p_wwpn, npiv_wwpn, npiv_wwnn;
+	char *p, tmp[128];
+	struct efct_lio_vport_list_t *vport_list;
+	char *sessions_name;
+	struct fc_vport *new_fc_vport;
+	struct fc_vport_identifiers vport_id;
+	unsigned long flags = 0;
+
+	snprintf(tmp, 128, "%s", name);
+
+	p = strchr(tmp, '@');
+
+	if (!p) {
+		pr_err("Unable to find separator character '@'\n");
+		return ERR_PTR(-EINVAL);
+	}
+	*p++ = '\0';
+
+	ret = efct_lio_parse_wwn(tmp, &p_wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	ret = efct_lio_parse_npiv_wwn(p, strlen(p) + 1, &npiv_wwpn, &npiv_wwnn);
+	if (ret)
+		return ERR_PTR(ret);
+
+	 /* Now search for the HBA that has this WWPN */
+	for (efctidx = 0; efctidx < MAX_EFCT_DEVICES; efctidx++) {
+		u64 pwwn;
+		u8 pn[8];
+
+		efct = efct_devices[efctidx];
+		if (!efct)
+			continue;
+		if (!efct->xport->req_wwpn) {
+			memcpy(pn, efct_hw_get_ptr(&efct->hw,
+				   EFCT_HW_WWN_PORT), sizeof(pn));
+			ARRAY2WWN(pwwn, pn);
+		} else {
+			pwwn = efct->xport->req_wwpn;
+		}
+		if (pwwn == p_wwpn)
+			break;
+	}
+	if (efctidx == MAX_EFCT_DEVICES) {
+		pr_err("cannot find EFCT for base wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+	efct = efct_devices[efctidx];
+	lio_vport = kzalloc(sizeof(*lio_vport), GFP_KERNEL);
+	if (!lio_vport)
+		return ERR_PTR(-ENOMEM);
+
+	lio_vport->efct = efct;
+	lio_vport->wwpn = p_wwpn;
+	lio_vport->npiv_wwpn = npiv_wwpn;
+	lio_vport->npiv_wwnn = npiv_wwnn;
+
+	FABRIC_SNPRINTF(lio_vport->wwpn_str, sizeof(lio_vport->wwpn_str),
+			"naa.", npiv_wwpn);
+
+	vport_list = kzalloc(sizeof(*vport_list), GFP_KERNEL);
+	if (!vport_list) {
+		kfree(lio_vport);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	vport_list->lio_vport = lio_vport;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	INIT_LIST_HEAD(&vport_list->list_entry);
+	list_add_tail(&vport_list->list_entry, &efct->tgt_efct.vport_list);
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+
+	sessions_name = kasprintf(GFP_KERNEL, "sessions-npiv-%d",
+				  efct->instance_index);
+	if (sessions_name && efct->sess_debugfs_dir)
+		lio_vport->sessions = debugfs_create_file(sessions_name,
+							  0644,
+					   efct->sess_debugfs_dir,
+					   lio_vport,
+					   &efct_npiv_debugfs_session_fops);
+	kfree(sessions_name);
+	memset(&vport_id, 0, sizeof(vport_id));
+	vport_id.port_name = npiv_wwpn;
+	vport_id.node_name = npiv_wwnn;
+	vport_id.roles = FC_PORT_ROLE_FCP_INITIATOR;
+	vport_id.vport_type = FC_PORTTYPE_NPIV;
+	vport_id.disable = false;
+
+	new_fc_vport = fc_vport_create(efct->shost, 0, &vport_id);
+	if (!new_fc_vport) {
+		efc_log_err(efct, "fc_vport_create failed\n");
+		kfree(lio_vport);
+		kfree(vport_list);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	lio_vport->fc_vport = new_fc_vport;
+
+	return &lio_vport->vport_wwn;
+}
+
+static void
+efct_lio_drop_sport(struct se_wwn *wwn)
+{
+	struct efct_lio_sport *lio_sport = container_of(wwn,
+					    struct efct_lio_sport, sport_wwn);
+	struct efct_s *efct = lio_sport->efct;
+
+	api_trace(efct);
+
+	/* only physical sport should exist, free lio_sport allocated
+	 * in efct_lio_make_sport
+	 */
+
+	debugfs_remove(lio_sport->sessions);
+	lio_sport->sessions = NULL;
+
+	kfree(efct->tgt_efct.lio_sport);
+	efct->tgt_efct.lio_sport = NULL;
+}
+
+static void
+efct_lio_npiv_drop_sport(struct se_wwn *wwn)
+{
+	struct efct_lio_vport *lio_vport = container_of(wwn,
+			struct efct_lio_vport, vport_wwn);
+	struct efct_lio_vport_list_t *vport, *next_vport;
+	struct efct_s *efct = lio_vport->efct;
+	unsigned long flags = 0;
+
+	api_trace(efct);
+
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+
+	debugfs_remove(lio_vport->sessions);
+
+	if (lio_vport->fc_vport)
+		fc_vport_terminate(lio_vport->fc_vport);
+
+	lio_vport->sessions = NULL;
+
+	list_for_each_entry_safe(vport, next_vport, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		if (vport->lio_vport == lio_vport) {
+			list_del(&vport->list_entry);
+			kfree(vport->lio_vport);
+			kfree(vport);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+}
+
+static struct se_portal_group *
+efct_lio_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_sport *lio_sport = container_of(wwn,
+					    struct efct_lio_sport, sport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct_s *efct;
+	unsigned long n;
+	int ret;
+
+	api_trace(lio_sport->efct);
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->sport = lio_sport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	efct = lio_sport->efct;
+	efct->tgt_efct.tpg = tpg;
+	efc_log_debug(efct, "create portal group %d\n", tpg->tpgt);
+
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	efc_log_debug(tpg->sport->efct, "drop portal group %d\n", tpg->tpgt);
+	tpg->sport->efct->tgt_efct.tpg = NULL;
+	core_tpg_deregister(se_tpg);
+	kfree(tpg);
+}
+
+static struct se_portal_group *
+efct_lio_npiv_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_vport *lio_vport = container_of(wwn,
+			struct efct_lio_vport, vport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct_s *efct;
+	unsigned long n;
+	int ret;
+
+	efct = lio_vport->efct;
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	if (n != 1) {
+		efc_log_err(efct, "Invalid tpgt index: %ld provided\n", n);
+		return ERR_PTR(-EINVAL);
+	}
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->vport = lio_vport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	lio_vport->tpg = tpg;
+	efc_log_debug(efct, "create vport portal group %d\n", tpg->tpgt);
+
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_npiv_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	efc_log_debug(tpg->vport->efct, "drop npiv portal group %d\n",
+		       tpg->tpgt);
+	core_tpg_deregister(se_tpg);
+	kfree(tpg);
+}
+
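+/* Parse a WWN from a configfs name: with npiv set, sixteen plain hex
+ * digits are expected; otherwise the eight bytes must be colon separated
+ * ("xx:xx:xx:xx:xx:xx:xx:xx").
+ */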
+static int
+efct_lio_parse_wwn(const char *name, u64 *wwp, u8 npiv)
+{
+	int arr[8];
+	int amt;
+
+	if (npiv) {
+		amt = sscanf(name, "%02x%02x%02x%02x%02x%02x%02x%02x",
+			     &arr[0], &arr[1], &arr[2], &arr[3], &arr[4],
+				 &arr[5], &arr[6], &arr[7]);
+	} else {
+		amt = sscanf(name,
+			     "%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
+			     &arr[0], &arr[1], &arr[2], &arr[3], &arr[4],
+			     &arr[5], &arr[6], &arr[7]);
+	}
+	if (amt != 8)
+		return -EINVAL;
+	ARRAY2WWN(*wwp, arr);
+	return 0;
+}
+
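+/* Parse an NPIV "<wwpn>:<wwnn>" pair: two 16-hex-digit WWNs separated by
+ * a colon, optionally terminated by a newline.
+ */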
+static int
+efct_lio_parse_npiv_wwn(const char *name, size_t size, u64 *wwpn, u64 *wwnn)
+{
+	unsigned int cnt = size;
+	int rc;
+
+	*wwpn = *wwnn = 0;
+	if (name[cnt - 1] == '\n' || name[cnt - 1] == 0)
+		cnt--;
+
+	/* validate we have enough characters for WWPN */
+	if ((cnt != (16 + 1 + 16)) || (name[16] != ':'))
+		return -EINVAL;
+
+	rc = efct_lio_parse_wwn(&name[0], wwpn, 1);
+	if (rc != 0)
+		return rc;
+
+	rc = efct_lio_parse_wwn(&name[17], wwnn, 1);
+	if (rc != 0)
+		return rc;
+
+	return 0;
+}
+
+static int
+efct_lio_init_nodeacl(struct se_node_acl *se_nacl, const char *name)
+{
+	struct efct_lio_nacl *nacl;
+	u64 wwnn;
+
+	if (efct_lio_parse_wwn(name, &wwnn, 0) < 0)
+		return -EINVAL;
+
+	nacl = container_of(se_nacl, struct efct_lio_nacl, se_node_acl);
+	nacl->nport_wwnn = wwnn;
+
+	FABRIC_SNPRINTF(nacl->nport_name, sizeof(nacl->nport_name), "", wwnn);
+	return 0;
+}
+
+static int efct_lio_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+static int
+efct_lio_npiv_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+/*
+ * Attribute data and functions
+ */
+/***************************************************************************
+ * Functions required by SCSI base driver API
+ */
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Initializes any target fields on the efct structure.
+ *
+ * @par Description
+ * Called by OS initialization code when a new device is discovered.
+ *
+ * @param efct Pointer to efct.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int efct_scsi_tgt_new_device(struct efct_s *efct)
+{
+	int rc = 0;
+	u32 total_ios;
+	struct efct_lio_worker_s *worker = NULL;
+
+	/* Get the max settings */
+	efct->tgt_efct.max_sge =
+			efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGE);
+	efct->tgt_efct.max_sgl =
+			efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGL);
+
+	/* initialize IO watermark fields */
+	atomic_set(&efct->tgt_efct.ios_in_use, 0);
+	total_ios = efct_scsi_get_property(efct, EFCT_SCSI_MAX_IOS);
+	efc_log_debug(efct, "total_ios=%d\n", total_ios);
+	efct->tgt_efct.watermark_min =
+			(total_ios * EFCT_WATERMARK_LOW_PCT) / 100;
+	efct->tgt_efct.watermark_max =
+			(total_ios * EFCT_WATERMARK_HIGH_PCT) / 100;
+	atomic_set(&efct->tgt_efct.io_high_watermark,
+		   efct->tgt_efct.watermark_max);
+	atomic_set(&efct->tgt_efct.watermark_hit, 0);
+	atomic_set(&efct->tgt_efct.initiator_count, 0);
+
+	/* Create kernel worker thread to service async requests
+	 * (new/delete initiator, new cmd/tmf). Previously, a worker thread
+	 * was needed to make upcalls into LIO because the HW completion
+	 * context ran in an interrupt context (tasklet).
+	 * This is no longer necessary now that HW completions run in a
+	 * kernel thread context. However, performance is much better when
+	 * these types of requests have their own thread.
+	 *
+	 * Note: We've seen better performance when IO completion (non-async)
+	 * upcalls into LIO are not given an additional kernel thread.
+	 * Thus, make such upcalls directly from the HW completion kernel thread.
+	 */
+
+	worker = &efct->tgt_efct.async_worker;
+	efct_mqueue_init(efct, &worker->wq);
+
+	worker->thread = kthread_create((int(*)(void *)) efct_lio_async_worker,
+					efct, "efct_lio_async_worker");
+
+	if (IS_ERR(worker->thread)) {
+		efc_log_err(efct, "kthread_create failed: %ld\n",
+			     PTR_ERR(worker->thread));
+		worker->thread = NULL;
+		return -1;
+	}
+
+	wake_up_process(worker->thread);
+
+	spin_lock_init(&efct->tgt_efct.efct_lio_lock);
+	INIT_LIST_HEAD(&efct->tgt_efct.vport_pending_enable_list);
+	INIT_LIST_HEAD(&efct->tgt_efct.vport_list);
+
+	return rc;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Terminate the LIO async worker thread.
+ *
+ * @par Description
+ * Sends a stop message to the worker thread (or stops it directly if no
+ * message can be allocated) and waits for the thread to signal completion.
+ *
+ * @param efct Pointer to efct.
+ * @param worker Pointer to the worker thread structure.
+ *
+ * @return Returns 0 on success, a negative error code value on failure.
+ */
+static int
+efct_lio_terminate_worker_thread(struct efct_s *efct,
+				 struct efct_lio_worker_s *worker)
+{
+	struct efct_lio_wq_data_s *wq_data;
+	u32 rc = 0;
+
+	api_trace(efct);
+
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+
+	init_completion(&worker->done);
+
+	if (wq_data) {
+		/* send stop message */
+		wq_data->message = EFCT_LIO_WQ_STOP;
+		wq_data->ptr = NULL;
+		efct_mqueue_put(&worker->wq, wq_data);
+	} else {
+		/* could not allocate a stop message; stop the thread
+		 * directly (blocking, not ideal)
+		 */
+		if (!worker->thread)
+			return -1;
+
+		/* Call stop */
+		kthread_stop(worker->thread);
+	}
+
+	/* Wait for worker thread to report that it has stopped */
+	rc = wait_for_completion_timeout(&worker->done,
+					 usecs_to_jiffies(10000000));
+	if (!rc)
+		efc_log_info(efct, "worker thread timed out\n");
+	efct_mqueue_free(&worker->wq);
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Tears down target members of efct structure.
+ *
+ * @par Description
+ * Called by OS code when device is removed.
+ *
+ * @param efct Pointer to efct.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int efct_scsi_tgt_del_device(struct efct_s *efct)
+{
+	int rc = 0;
+
+	api_trace(efct);
+
+	if (efct_lio_terminate_worker_thread(efct,
+					     &efct->tgt_efct.async_worker))
+		rc = -1;
+	return rc;
+}
+
+/**
+ * @brief Initialize SCSI IO.
+ *
+ * @par Description
+ * Initialize SCSI IO. This function is called once per IO during IO pool
+ * allocation so that the target server may initialize any of its own private
+ * data.
+ *
+ * @param io Pointer to SCSI IO object.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_scsi_tgt_io_init(struct efct_io_s *io)
+{
+	return 0;
+}
+
+/**
+ * @brief Uninitialize SCSI IO.
+ *
+ * @par Description
+ * Uninitialize target server private data in a SCSI io object.
+ *
+ * @param io Pointer to SCSI IO object.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_scsi_tgt_io_exit(struct efct_io_s *io)
+{
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Accept new domain notification.
+ *
+ * @par Description
+ * Called by the base driver when a new domain is discovered.  A target-server
+ * uses this call to prepare for new remote node notifications
+ * arising from efct_scsi_new_initiator().
+ * @n @n
+ * The domain context has an element <b>struct efct_scsi_tgt_domain_s
+ * tgt_domain</b> which is declared by the target-server code and is used
+ * for target-server private data.
+ * @n @n
+ * This function will only be called if the base-driver has been enabled
+ * for target capability.
+ * @n @n
+ * @b Note: This call is made to target-server backends.
+ * The efct_scsi_ini_new_domain() function is called for
+ * initiator-client backends.
+ *
+ * @param domain Pointer to domain.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_scsi_tgt_new_domain(struct efc_lport *efc, struct efc_domain_s *domain)
+{
+	int status = 0;
+	struct efct_s *efct = domain->efc->base;
+	struct efct_lio_vport_data_t *virt_target_data, *next;
+	unsigned long flags = 0;
+
+	api_trace(efct);
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	list_for_each_entry_safe(virt_target_data, next,
+		 &efct->tgt_efct.vport_pending_enable_list, list_entry) {
+		list_del(&virt_target_data->list_entry);
+
+		status = efc_sport_vport_new(domain,
+					     virt_target_data->vport_wwpn,
+					     virt_target_data->vport_wwnn,
+					     U32_MAX,
+					     virt_target_data->initiator_mode,
+					     virt_target_data->target_mode,
+					     virt_target_data, NULL, true);
+		if (status != 0) {
+			/* Put this back on list and try again next time */
+			efc_log_test(efct,
+				      "Could not create new vport for WWPN:0x%llx\n",
+				 virt_target_data->vport_wwpn);
+			list_add(&virt_target_data->list_entry,
+				 &efct->tgt_efct.vport_pending_enable_list);
+		} else {
+			efc_log_debug(efct,
+				       "Created new vport for WWPN: 0x%llx\n",
+				      virt_target_data->vport_wwpn);
+			kfree(virt_target_data);
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+	return status;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Accept domain lost notification.
+ *
+ * @par Description
+ * Called by the base driver when a domain goes away. A target-server
+ * uses this call to clean up all domain scoped resources.
+ * @n @n
+ * @b Note: This call is made to target-server backends.
+ * The efct_scsi_ini_del_domain() function is called for
+ * initiator-client backends.
+ *
+ * @param domain Pointer to domain.
+ *
+ * @return None.
+ */
+void
+efct_scsi_tgt_del_domain(struct efc_lport *efc, struct efc_domain_s *domain)
+{
+	struct efct_s *efct = domain->efc->base;
+
+	api_trace(efct);
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Accept new SLI port notification.
+ *
+ * @par Description
+ * Called by the base driver when a new SLI port (sport) is discovered.
+ * A target-server will use this call to prepare for new remote node
+ * notifications arising from efct_scsi_new_initiator().
+ * @n @n
+ * This function will only be called if the base-driver has been
+ * enabled for target capability.
+ * @n @n
+ * @b Note: This call is made to target-server backends. The
+ * efct_scsi_ini_new_sport() function is called for initiator-client
+ * backends.
+ *
+ * @param sport Pointer to sport.
+ *
+ * @return Returns 0 for success, or a negative error code value on failure.
+ */
+int
+efct_scsi_tgt_new_sport(struct efc_lport *efc, struct efc_sli_port_s *sport)
+{
+	struct efct_s *efct = sport->efc->base;
+
+	efc_log_debug(efct, "New SPORT: %s bound to %s\n", sport->display_name,
+		       efct->tgt_efct.lio_sport->wwpn_str);
+
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Accept sli port gone notification.
+ *
+ * @par Description
+ * Called by the base driver when a sport goes away.  A target-server
+ * uses this call to clean up all sport scoped resources.
+ * @n @n
+ * @b Note: This call is made to target-server backends.
+ * The efct_scsi_ini_del_sport() function is called for initiator-client
+ * backends.
+ *
+ * @param sport Pointer to SPORT structure.
+ *
+ * @return None.
+ */
+void
+efct_scsi_tgt_del_sport(struct efc_lport *efc, struct efc_sli_port_s *sport)
+{
+	efc_log_debug(efc, "Del SPORT: %s\n",
+		       sport->display_name);
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Validate new initiator.
+ *
+ * @par Description
+ * Sent by the base driver to validate a remote initiator.
+ * The target-server returns TRUE if this initiator should be accepted.
+ * @n @n
+ * This function is only called if the base driver is enabled for
+ * target capability.
+ *
+ * @param node Pointer to remote initiator node to validate.
+ *
+ * @return TRUE if initiator should be accepted, or FALSE if it
+ * should be rejected.
+ *
+ */
+int
+efct_scsi_validate_initiator(struct efc_lport *efc, struct efc_node_s *node)
+{
+	/*
+	 * Since LIO only supports initiator validation at thread level,
+	 * we are open minded and accept all callers.
+	 */
+	return 1;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Receive notification of a new SCSI initiator node.
+ *
+ * @par Description
+ * Sent by the base driver to notify a target-server of the presence of a new
+ * remote initiator. The target-server may use this call to prepare for
+ * inbound IO from this node.
+ * @n @n
+ * The struct efc_node_s structure has an element of type efct_scsi_tgt_node_s
+ * named tgt_node that is declared and used by a target-server for private
+ * information.
+ * @n @n
+ * @b Note: This function is only called if the base driver is enabled for
+ * target capability.
+ *
+ * @param node Pointer to new remote initiator node.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ *
+ */
+int efct_scsi_new_initiator(struct efc_lport *efc, struct efc_node_s *node)
+{
+	struct efct_s *efct = node->efc->base;
+	struct efct_lio_wq_data_s *wq_data;
+
+	/*
+	 * Queue the new initiator to the async worker thread, which
+	 * makes the LIO session setup upcall in process context.
+	 */
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data)
+		return -ENOMEM;
+
+	wq_data->message = EFCT_LIO_WQ_NEW_INITIATOR;
+	wq_data->ptr = node;
+	efct_mqueue_put(&efct->tgt_efct.async_worker.wq, wq_data);
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Delete a SCSI initiator node.
+ *
+ * @par Description
+ * Sent by the base driver to notify a target server that a remote initiator
+ * is now gone. The base driver will have terminated all outstanding IOs and
+ * the target-server will receive appropriate completions.
+ * @n @n
+ * @b Note: This function is only called if the base driver is enabled for
+ * target capability.
+ *
+ * @param node Pointer to the node being deleted.
+ * @param reason Indicates whether the initiator is missing or deleted.
+ *
+ * @return Returns EFCT_SCSI_CALL_ASYNC if the deletion is handled
+ * asynchronously, or EFCT_SCSI_CALL_COMPLETE otherwise.
+ *
+ */
+int
+efct_scsi_del_initiator(struct efc_lport *efc, struct efc_node_s *node,
+			int reason)
+{
+	struct efct_s *efct = node->efc->base;
+	struct efct_lio_wq_data_s *wq_data;
+	int watermark;
+	int initiator_count;
+
+	if (reason == EFCT_SCSI_INITIATOR_MISSING)
+		return EFCT_SCSI_CALL_COMPLETE;
+
+	api_trace(efct);
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data) {
+		efc_log_err(efct, "failed to allocate work queue entry\n");
+		return EFCT_SCSI_CALL_COMPLETE;
+	}
+	wq_data->message = EFCT_LIO_WQ_UNREG_SESSION;
+	wq_data->ptr = node;
+	efct_mqueue_put(&efct->tgt_efct.async_worker.wq, wq_data);
+
+	/*
+	 * update IO watermark: decrement initiator count
+	 */
+	initiator_count =
+		atomic_sub_return(1, &efct->tgt_efct.initiator_count);
+	watermark = (efct->tgt_efct.watermark_max -
+			initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
+	watermark = (efct->tgt_efct.watermark_min > watermark) ?
+			efct->tgt_efct.watermark_min : watermark;
+	atomic_set(&efct->tgt_efct.io_high_watermark, watermark);
+
+	return EFCT_SCSI_CALL_ASYNC;
+}
+
+const char *efct_lio_get_msg_name(enum efct_lio_wq_msg_s msg)
+{
+	switch (msg) {
+	case EFCT_LIO_WQ_SUBMIT_CMD:
+		return "EFCT_LIO_WQ_SUBMIT_CMD";
+	case EFCT_LIO_WQ_UNREG_SESSION:
+		return "EFCT_LIO_WQ_UNREG_SESSION";
+	case EFCT_LIO_WQ_NEW_INITIATOR:
+		return "EFCT_LIO_WQ_NEW_INITIATOR";
+	case EFCT_LIO_WQ_STOP:
+		return "EFCT_LIO_WQ_STOP";
+	default:
+		break;
+	}
+	return "unknown";
+}
+
+static int efct_session_cb(struct se_portal_group *se_tpg,
+			   struct se_session *se_sess, void *private)
+{
+	struct efc_node_s *node = private;
+	struct efct_scsi_tgt_node_s *tgt_node = NULL;
+
+	tgt_node = kzalloc(sizeof(*tgt_node), GFP_KERNEL);
+	if (!tgt_node)
+		return -1;
+
+	tgt_node->session = se_sess;
+	node->tgt_node = tgt_node;
+
+	return 0;
+}
+
+/**
+ * @brief Worker thread for LIO commands.
+ *
+ * @par Description
+ * This thread is used to make LIO upcalls associated with
+ * asynchronous requests (i.e. new commands received, register
+ * sessions, unregister sessions).
+ *
+ * @param efct Pointer to efct.
+ *
+ * @return Always returns 0.
+ */
+static int efct_lio_async_worker(struct efct_s *efct)
+{
+	struct efct_lio_wq_data_s *wq_data;
+	struct efc_node_s *node;
+	struct se_session *se_sess;
+	int done = 0;
+	bool free_data = true;
+	struct efct_scsi_tgt_io_s *ocp;
+	int dir, rc = 0;
+	struct efct_io_s *io;
+	struct efct_io_s *tmfio;
+	struct efct_scsi_tgt_node_s *tgt_node = NULL;
+
+	while (!done) {
+		/* Poll with a timeout to keep the kernel from complaining
+		 * that the thread is not running periodically
+		 */
+		wq_data = efct_mqueue_get(&efct->tgt_efct.async_worker.wq,
+					  10000000);
+		if (kthread_should_stop())
+			break;
+
+		if (!wq_data)
+			continue;
+
+		switch (wq_data->message) {
+		case EFCT_LIO_WQ_UNREG_SESSION:
+			node = wq_data->ptr;
+			tgt_node = node->tgt_node;
+			se_sess = tgt_node->session;
+
+			if (!se_sess) {
+				/* base driver has sent back-to-back requests
+				 * to unreg session with no intervening
+				 * register
+				 */
+				efc_log_test(efct,
+					      "unreg session for NULL session\n");
+				efc_scsi_del_initiator_complete(node->efc,
+								node);
+				break;
+			}
+
+			efc_log_debug(efct, "unreg session se_sess=%p node=%p\n",
+				       se_sess, node);
+
+			/* first flag all session commands to complete */
+			target_sess_cmd_list_set_waiting(se_sess);
+
+			/* now wait for session commands to complete */
+			target_wait_for_sess_cmds(se_sess);
+			target_remove_session(se_sess);
+
+			kfree(node->tgt_node);
+
+			node->tgt_node = NULL;
+			efc_scsi_del_initiator_complete(node->efc, node);
+			break;
+		case EFCT_LIO_WQ_NEW_INITIATOR: {
+			char wwpn[FABRIC_SNPRINTF_LEN];
+			struct efct_lio_tpg *tpg = NULL;
+			struct se_portal_group *se_tpg;
+			struct se_session *se_sess;
+			int watermark;
+			int initiator_count;
+
+			/*
+			 * Find the sport
+			 */
+			node = wq_data->ptr;
+			/* Check whether the node belongs to a vport;
+			 * if not, use the physical port
+			 */
+			tpg = efct_get_vport_tpg(node);
+			if (tpg) {
+				se_tpg = &tpg->tpg;
+			} else if (efct->tgt_efct.tpg) {
+				tpg = efct->tgt_efct.tpg;
+				se_tpg = &tpg->tpg;
+			} else {
+				efc_log_err(efct, "failed to init session\n");
+				break;
+			}
+
+			/*
+			 * Format the FCP initiator port_name into colon
+			 * separated values to match the format used by our
+			 * explicit ConfigFS NodeACLs.
+			 */
+			FABRIC_SNPRINTF(wwpn, sizeof(wwpn), "",
+					efc_node_get_wwpn(node));
+
+			se_sess = target_setup_session(se_tpg, 0, 0,
+						       TARGET_PROT_NORMAL,
+						       wwpn, node,
+						       efct_session_cb);
+			if (IS_ERR(se_sess)) {
+				efc_log_err(efct, "failed to setup session\n");
+				break;
+			}
+
+			efc_log_debug(efct, "new initiator se_sess=%p node=%p\n",
+				      se_sess, node);
+
+			/* update IO watermark: increment initiator count */
+			initiator_count =
+			atomic_add_return(1, &efct->tgt_efct.initiator_count);
+			watermark = (efct->tgt_efct.watermark_max -
+			     initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
+			watermark = (efct->tgt_efct.watermark_min > watermark) ?
+				efct->tgt_efct.watermark_min : watermark;
+			atomic_set(&efct->tgt_efct.io_high_watermark,
+				   watermark);
+
+			break;
+		}
+		case EFCT_LIO_WQ_STOP:
+			done = 1;
+			break;
+
+		case EFCT_LIO_WQ_SUBMIT_CMD:
+			free_data = false;
+			ocp = wq_data->ptr;
+			io = container_of(ocp, struct efct_io_s, tgt_io);
+			switch (ocp->ddir) {
+			case DDIR_TO_INITIATOR:
+				dir = DMA_FROM_DEVICE;
+				break;
+			case DDIR_FROM_INITIATOR:
+				dir = DMA_TO_DEVICE;
+				break;
+			case DDIR_BIDIR:
+				dir = DMA_BIDIRECTIONAL;
+				break;
+			case DDIR_NONE:
+			default:
+				dir = DMA_NONE;
+				break;
+			}
+			tgt_node = io->node->tgt_node;
+
+			se_sess = tgt_node->session;
+			if (se_sess) {
+				rc = target_submit_cmd(&io->tgt_io.cmd,
+						       se_sess,
+						ocp->cdb,
+						&io->tgt_io.sense_buffer[0],
+						ocp->lun, io->exp_xfer_len,
+						ocp->task_attr, dir,
+						TARGET_SCF_ACK_KREF);
+				efct_lio_io_state_trace(io,
+						EFCT_LIO_STATE_TGT_SUBMIT_CMD);
+
+				/* This can happen if IOs are still in flight
+				 * when efctLio.py --delete is performed
+				 */
+				WARN_ON(rc && (rc != -ESHUTDOWN));
+			}
+			break;
+		case EFCT_LIO_WQ_SUBMIT_TMF:
+			free_data = false;
+			ocp = wq_data->ptr;
+			tmfio = container_of(ocp, struct efct_io_s, tgt_io);
+
+			tgt_node = tmfio->node->tgt_node;
+
+			se_sess = tgt_node->session;
+			if (se_sess) {
+				rc = target_submit_tmr(&ocp->cmd,
+						se_sess, &ocp->sense_buffer[0],
+						ocp->lun,
+						&ocp->io_to_abort->tgt_io.cmd,
+						ocp->tmf, GFP_KERNEL,
+						tmfio->init_task_tag,
+						TARGET_SCF_ACK_KREF);
+				efct_lio_io_state_trace(tmfio,
+						EFCT_LIO_STATE_TGT_SUBMIT_TMR);
+				if (rc) {
+					efct_scsi_send_tmf_resp(tmfio,
+						EFCT_SCSI_TMF_FUNCTION_REJECTED,
+						NULL, efct_lio_null_tmf_done,
+						NULL);
+				}
+			}
+			break;
+		default:
+			efc_log_test(efct, "UNKNOWN message=%d\n",
+				      wq_data->message);
+			break;
+		}
+		if (free_data)
+			kfree(wq_data);
+	}
+
+	complete(&efct->tgt_efct.async_worker.done);
+
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Receive FCP SCSI command.
+ *
+ * @par Description
+ * Called by the base driver when a new SCSI command has been received.
+ * The target server will process the command, and issue data and/or
+ * response phase requests to the base driver.
+ * @n @n
+ * The IO context (struct efct_io_s) structure has an element of type
+ * struct efct_scsi_tgt_io_s named tgt_io that is declared and used by
+ * a target server for private information.
+ *
+ * @param io Pointer to IO context.
+ * @param lun LUN for this IO.
+ * @param cdb Pointer to SCSI CDB.
+ * @param cdb_len Length of CDB in bytes.
+ * @param flags Command flags.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int efct_scsi_recv_cmd(struct efct_io_s *io, uint64_t lun, u8 *cdb,
+		       u32 cdb_len, u32 flags)
+{
+	struct efct_scsi_tgt_io_s *ocp = &io->tgt_io;
+	struct efct_s *efct = io->efct;
+	struct efct_lio_wq_data_s *wq_data;
+	char *ddir;
+
+	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io_s));
+	efct_lio_io_state_trace(io, EFCT_LIO_STATE_SCSI_RECV_CMD);
+	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
+
+	/* check against watermark and send TASK_SET_FULL? */
+
+	/* set target timeout */
+	io->timeout = efct->target_io_timer_sec;
+
+	if (flags & EFCT_SCSI_CMD_SIMPLE)
+		ocp->task_attr = TCM_SIMPLE_TAG;
+	else if (flags & EFCT_SCSI_CMD_HEAD_OF_QUEUE)
+		ocp->task_attr = TCM_HEAD_TAG;
+	else if (flags & EFCT_SCSI_CMD_ORDERED)
+		ocp->task_attr = TCM_ORDERED_TAG;
+	else if (flags & EFCT_SCSI_CMD_ACA)
+		ocp->task_attr = TCM_ACA_TAG;
+
+	switch (flags & (EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT)) {
+	case EFCT_SCSI_CMD_DIR_IN:
+		ddir = "FROM_INITIATOR";
+		ocp->ddir = DDIR_FROM_INITIATOR;
+		break;
+	case EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "TO_INITIATOR";
+		ocp->ddir = DDIR_TO_INITIATOR;
+		break;
+	case EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "BIDIR";
+		ocp->ddir = DDIR_BIDIR;
+		break;
+	default:
+		ddir = "NONE";
+		ocp->ddir = DDIR_NONE;
+		break;
+	}
+	ocp->cdb = cdb;
+	ocp->lun = lun;
+	efct_lio_io_trace(io, "new cmd=0x%x ddir=%s dl=%u\n",
+			  cdb[0], ddir, io->exp_xfer_len);
+	wq_data = &ocp->wq_data;
+	wq_data->message = EFCT_LIO_WQ_SUBMIT_CMD;
+	wq_data->ptr = ocp;
+	efct_mqueue_put(&efct->tgt_efct.async_worker.wq, wq_data);
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Receive FCP SCSI Command with first burst data.
+ *
+ * @par Description
+ * Receive a new FCP SCSI command from the base driver with first burst data.
+ *
+ * @param io Pointer to IO context.
+ * @param lun LUN for this IO.
+ * @param cdb Pointer to SCSI CDB.
+ * @param cdb_len Length of CDB in bytes.
+ * @param flags Command flags.
+ * @param first_burst_buffers First burst buffers.
+ * @param first_burst_bytes Number of bytes received in the first burst.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+int efct_scsi_recv_cmd_first_burst(struct efct_io_s *io, uint64_t lun,
+				   u8 *cdb, u32 cdb_len, u32 flags,
+	struct efc_dma_s first_burst_buffers[], u32 first_burst_bytes)
+{
+	api_trace(io->efct);
+	return -1;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief receive a TMF command IO
+ *
+ * @par Description
+ * Called by the base driver when a SCSI TMF command has been received.
+ * The target server will process the command, aborting commands as
+ * needed, and post a response using efct_scsi_send_resp().
+ * @n @n
+ * The IO context (struct efct_io_s) structure has an element of type
+ * struct efct_scsi_tgt_io_s named tgt_io that is declared and used by
+ * a target-server for private information.
+ * @n @n
+ * If the target-server walks the nodes active_ios linked list, and
+ * starts IO abort processing, the code <b>must</b> be sure not to abort
+ * the IO passed into the efct_scsi_recv_tmf() command.
+ *
+ * @param tmfio Pointer to IO context.
+ * @param lun Logical unit value.
+ * @param cmd Command request.
+ * @param io_to_abort Pointer to IO object to abort for TASK_ABORT
+ *	(NULL for all other TMF).
+ * @param flags Flags.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_scsi_recv_tmf(struct efct_io_s *tmfio, u32 lun,
+		   enum efct_scsi_tmf_cmd_e cmd,
+		  struct efct_io_s *io_to_abort, u32 flags)
+{
+	unsigned char tmr_func;
+	struct efct_lio_wq_data_s *wq_data;
+	struct efct_s *efct = tmfio->efct;
+	struct efct_scsi_tgt_io_s *ocp = &tmfio->tgt_io;
+
+	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io_s));
+	efct_lio_io_state_trace(tmfio, EFCT_LIO_STATE_SCSI_RECV_TMF);
+	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
+	efct_lio_tmfio_printf(tmfio, "%s: new tmf %x lun=%u\n",
+			      tmfio->display_name, cmd, lun);
+
+	switch (cmd) {
+	case EFCT_SCSI_TMF_ABORT_TASK:
+		tmr_func = TMR_ABORT_TASK;
+		break;
+	case EFCT_SCSI_TMF_ABORT_TASK_SET:
+		tmr_func = TMR_ABORT_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_TASK_SET:
+		tmr_func = TMR_CLEAR_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_LOGICAL_UNIT_RESET:
+		tmr_func = TMR_LUN_RESET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_ACA:
+		tmr_func = TMR_CLEAR_ACA;
+		break;
+	case EFCT_SCSI_TMF_TARGET_RESET:
+		tmr_func = TMR_TARGET_WARM_RESET;
+		break;
+	case EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT:
+	case EFCT_SCSI_TMF_QUERY_TASK_SET:
+	default:
+		efct_scsi_send_tmf_resp(tmfio, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+					NULL, efct_lio_null_tmf_done, NULL);
+		return 0;
+	}
+
+	/* queue work to async worker */
+	tmfio->tgt_io.tmf = tmr_func;
+	tmfio->tgt_io.lun = lun;
+	tmfio->tgt_io.io_to_abort = io_to_abort;
+	wq_data = &tmfio->tgt_io.wq_data;
+	wq_data->message = EFCT_LIO_WQ_SUBMIT_TMF;
+	wq_data->ptr = &tmfio->tgt_io;
+	efct_mqueue_put(&efct->tgt_efct.async_worker.wq, wq_data);
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Driver-wide initialization for target-server.
+ *
+ * @par Description
+ * Called by OS initialization prior to PCI device discovery.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int efct_scsi_tgt_driver_init(void)
+{
+	int rc;
+
+	/* Register the top level struct config_item_type with TCM core */
+	rc = target_register_template(&efct_lio_ops);
+	if (rc < 0) {
+		pr_err("target_register_template failed with %d\n", rc);
+		return rc;
+	}
+	rc = target_register_template(&efct_lio_npiv_ops);
+	if (rc < 0) {
+		pr_err("target_register_template failed for npiv with %d\n", rc);
+		target_unregister_template(&efct_lio_ops);
+		return rc;
+	}
+	return 0;
+}
+
+/**
+ * @ingroup scsi_api_target
+ * @brief Driver-wide cleanup for target server.
+ *
+ * @par Description
+ * Called by OS driver-wide exit/cleanup.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int efct_scsi_tgt_driver_exit(void)
+{
+	target_unregister_template(&efct_lio_ops);
+	target_unregister_template(&efct_lio_npiv_ops);
+	fabric = NULL;
+	npiv_fabric = NULL;
+	return 0;
+}
+
+int
+efct_scsi_get_block_vaddr(struct efct_io_s *io, uint64_t blocknumber,
+			  struct efct_scsi_vaddr_len_s addrlen[],
+	u32 max_addrlen, void **dif_vaddr)
+{
+	return -1;
+}
diff --git a/drivers/scsi/elx/efct/efct_lio.h b/drivers/scsi/elx/efct/efct_lio.h
new file mode 100644
index 000000000000..90bbd6c3759e
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.h
@@ -0,0 +1,371 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_LIO_H__)
+#define __EFCT_LIO_H__
+
+#define EFCT_INCLUDE_LIO
+
+#include "efct_scsi.h"
+#include <target/target_core_base.h>
+
+enum efct_lio_wq_msg_s {
+	EFCT_LIO_WQ_SUBMIT_CMD,
+	EFCT_LIO_WQ_SUBMIT_TMF,
+	EFCT_LIO_WQ_UNREG_SESSION,
+	EFCT_LIO_WQ_NEW_INITIATOR,
+	EFCT_LIO_WQ_STOP,
+};
+
+struct efct_lio_wq_data_s {
+	enum efct_lio_wq_msg_s message;
+	void *ptr;
+	struct {
+		struct efct_io_s *tmfio;
+		u64 lun;
+		enum efct_scsi_tmf_cmd_e cmd;
+		struct efct_io_s *abortio;
+		u32 flags;
+	} tmf;
+};
+
+/**
+ * @brief EFCT message queue object
+ *
+ * The EFCT message queue may be used to pass messages between two
+ * threads (or between an ISR and a thread). A message is defined here
+ * as a pointer to an instance of application-specific data (message
+ * data). The message queue allocates a message header, saves the
+ * message data pointer, and places the header on the message queue's
+ * linked list. A completion is used to wake the message queue
+ * consumer when a message has been posted; see the usage sketch
+ * following efct_mqueue_get() below.
+ *
+ */
+
+struct efct_mqueue_s {
+	struct efct_s *efct;
+	spinlock_t lock;		/* message queue lock */
+	struct completion prod;		/* producer*/
+	struct list_head queue;
+};
+
+struct efct_lio_worker_s {
+	struct task_struct *thread;
+	struct efct_mqueue_s wq;
+	struct completion done;
+};
+
+/**
+ * @brief target private efct structure
+ */
+struct efct_scsi_tgt_s {
+	u32 max_sge;
+	u32 max_sgl;
+
+	/*
+	 * Variables used to send task set full. We are using a high watermark
+	 * method to send task set full. We will reserve a fixed number of IOs
+	 * per initiator plus a fudge factor. Once we reach this number,
+	 * then the target will start sending task set full/busy responses.
+	 */
+	atomic_t initiator_count;	/**< count of initiators */
+	atomic_t ios_in_use;	/**< num of IOs in use */
+	atomic_t io_high_watermark;	/**< used to send task set full */
+	/**< used to track how often the IO pool is almost empty */
+	atomic_t watermark_hit;
+	int watermark_min;		/**< lower limit for watermark */
+	int watermark_max;		/**< upper limit for watermark */
+
+	struct efct_lio_sport *lio_sport;
+	struct efct_lio_tpg *tpg;
+	/**< list of VPORTS waiting to be created */
+	struct list_head vport_pending_enable_list;
+	struct list_head vport_list;		/**< list of existing VPORTS */
+	/* Protects vport list*/
+	spinlock_t	efct_lio_lock;
+
+	u64 wwnn;
+
+	/* worker thread for making upcalls related to asynchronous
+	 * events e.g. node (session) found, node (session) deleted,
+	 * new command received
+	 */
+	struct efct_lio_worker_s async_worker;
+};
+
+/**
+ * @brief target private domain structure
+ */
+
+struct efct_scsi_tgt_domain_s {
+	/* efct_lio decls */
+	;
+};
+
+/**
+ * @brief target private sport structure
+ */
+
+struct efct_scsi_tgt_sport_s {
+	struct efct_lio_sport *lio_sport;
+};
+
+/**
+ * @brief target private node structure
+ */
+
+#define SCSI_TRANSPORT_ID_FCP   0
+
+struct efct_scsi_tgt_node_s {
+	struct se_session *session;
+};
+
+/**
+ * @brief target private IO structure
+ */
+
+struct efct_scsi_tgt_io_s {
+	struct se_cmd cmd;
+	unsigned char sense_buffer[TRANSPORT_SENSE_BUFFER];
+	enum {
+		DDIR_NONE, DDIR_FROM_INITIATOR, DDIR_TO_INITIATOR, DDIR_BIDIR
+	} ddir;
+	int task_attr;
+	struct completion done;			/* for synchronizing aborts */
+	u64 lun;
+
+#define EFCT_LIO_STATE_SCSI_RECV_CMD		(1 << 0)
+#define EFCT_LIO_STATE_TGT_SUBMIT_CMD		(1 << 1)
+#define EFCT_LIO_STATE_TFO_QUEUE_DATA_IN	(1 << 2)
+#define EFCT_LIO_STATE_TFO_WRITE_PENDING	(1 << 3)
+#define EFCT_LIO_STATE_TGT_EXECUTE_CMD		(1 << 4)
+#define EFCT_LIO_STATE_SCSI_SEND_RD_DATA	(1 << 5)
+#define EFCT_LIO_STATE_TFO_CHK_STOP_FREE	(1 << 6)
+#define EFCT_LIO_STATE_SCSI_DATA_DONE		(1 << 7)
+#define EFCT_LIO_STATE_TFO_QUEUE_STATUS		(1 << 8)
+#define EFCT_LIO_STATE_SCSI_SEND_RSP		(1 << 9)
+#define EFCT_LIO_STATE_SCSI_RSP_DONE		(1 << 10)
+#define EFCT_LIO_STATE_TGT_GENERIC_FREE		(1 << 11)
+#define EFCT_LIO_STATE_SCSI_RECV_TMF		(1 << 12)
+#define EFCT_LIO_STATE_TGT_SUBMIT_TMR		(1 << 13)
+#define EFCT_LIO_STATE_TFO_WRITE_PEND_STATUS	(1 << 14)
+#define EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE  (1 << 15)
+
+#define EFCT_LIO_STATE_TFO_ABORTED_TASK		(1 << 29)
+#define EFCT_LIO_STATE_TFO_RELEASE_CMD		(1 << 30)
+#define EFCT_LIO_STATE_SCSI_CMPL_CMD		(1 << 31)
+	u32 state;
+	u8 *cdb;
+	u8 tmf;
+	struct efct_io_s *io_to_abort;
+	u32 cdb_len;
+	u32 seg_map_cnt;	/* current number of segments mapped for dma */
+	u32 seg_cnt;	/* total segment count for i/o */
+	u32 cur_seg;	/* current segment counter */
+	enum efct_scsi_io_status_e err;	/* current error */
+	/* context associated with thread work queue request */
+	struct efct_lio_wq_data_s wq_data;
+	bool	aborting;  /* IO is in process of being aborted */
+	bool	rsp_sent; /* a response has been sent for this IO */
+	uint32_t transferred_len;
+};
+
+/* Handler return codes */
+enum {
+	SCSI_HANDLER_DATAPHASE_STARTED = 1,
+	SCSI_HANDLER_RESP_STARTED,
+	SCSI_HANDLER_VALIDATED_DATAPHASE_STARTED,
+	SCSI_CMD_NOT_SUPPORTED,
+};
+
+#define scsi_pack_result(key, code, qualifier) ((((key) & 0xff) << 16) | \
+				(((code) & 0xff) << 8) | ((qualifier) & 0xff))
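+
+/*
+ * Usage sketch for scsi_pack_result() (illustrative only): packing the
+ * standard sense key/ASC/ASCQ triple for ILLEGAL REQUEST / INVALID
+ * COMMAND OPERATION CODE would look like
+ *
+ *	u32 result = scsi_pack_result(0x05, 0x20, 0x00);
+ *
+ * which yields 0x00052000 (key in bits 23:16, code in bits 15:8, and
+ * qualifier in bits 7:0).
+ */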
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+int scsi_dataphase_cb(struct efct_io_s *io,
+		      enum efct_scsi_io_status_e scsi_status,
+		      u32 flags, void *arg);
+const char *efct_lio_get_msg_name(enum efct_lio_wq_msg_s msg);
+
+#define FABRIC_SNPRINTF_LEN     32
+struct efct_lio_vport {
+	u64 wwpn;
+	u64 npiv_wwpn;
+	u64 npiv_wwnn;
+	unsigned char wwpn_str[FABRIC_SNPRINTF_LEN];
+	struct se_wwn vport_wwn;
+	struct efct_lio_tpg *tpg;
+	struct efct_s *efct;
+	struct dentry *sessions;
+	struct Scsi_Host *shost;
+	struct fc_vport *fc_vport;
+	atomic_t enable;
+};
+
+/***************************************************************************
+ * Message Queues
+ *
+ */
+
+/**
+ * @brief EFCT message queue message
+ *
+ */
+
+struct efct_mqueue_hdr_s {
+	struct list_head list_entry;
+	void *msgdata;				/**< message data (payload) */
+};
+
+/*
+ * Define a structure used to pass to the interrupt handlers and the tasklets.
+ */
+struct efct_os_intr_context_s {
+	struct efct_s *efct;
+	u32 index;
+	struct completion done;
+	struct task_struct *thread;
+};
+
+/**
+ * @brief initialize an EFCT message queue
+ *
+ * The elements of the message queue are initialized.
+ *
+ * @param efct pointer to the efct object
+ * @param q pointer to message queue
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+static inline int
+efct_mqueue_init(struct efct_s *efct, struct efct_mqueue_s *q)
+{
+	memset(q, 0, sizeof(*q));
+	q->efct = efct;
+	spin_lock_init(&q->lock);
+	init_completion(&q->prod);
+	INIT_LIST_HEAD(&q->queue);
+	return 0;
+}
+
+/**
+ * @brief put a message in a message queue
+ *
+ * A message header is allocated, its payload is set to point to the
+ * requested message data, and the header is posted to the message
+ * queue.
+ *
+ * @param q pointer to message queue
+ * @param msgdata pointer to message data
+ *
+ * @return returns 0 for success, a negative error code value for failure.
+ */
+
+static inline int
+efct_mqueue_put(struct efct_mqueue_s *q, void *msgdata)
+{
+	struct efct_mqueue_hdr_s *hdr = NULL;
+	unsigned long flags = 0;
+
+	hdr = kzalloc(sizeof(*hdr), GFP_ATOMIC);
+	if (!hdr)
+		return -ENOMEM;
+
+	hdr->msgdata = msgdata;
+
+	/* lock the queue wide lock, add to tail of linked list
+	 * and wake up the completion.
+	 */
+	spin_lock_irqsave(&q->lock, flags);
+	INIT_LIST_HEAD(&hdr->list_entry);
+	list_add_tail(&hdr->list_entry, &q->queue);
+	spin_unlock_irqrestore(&q->lock, flags);
+	complete(&q->prod);
+	return 0;
+}
+
+/**
+ * @brief read next message
+ *
+ * Reads the next message header from the message queue, or times out.
+ * A timeout_usec value of zero tries once, a negative value waits
+ * forever, and a positive value waits for that many microseconds.
+ *
+ * @param q pointer to message queue
+ * @param timeout_usec timeout
+ * (0 - try once, < 0 try forever, > 0 try for that many microseconds)
+ *
+ * @return returns pointer to next message, or NULL
+ */
+
+static inline void *
+efct_mqueue_get(struct efct_mqueue_s *q, int timeout_usec)
+{
+	int rc;
+	struct efct_mqueue_hdr_s *hdr = NULL;
+	void *msgdata = NULL;
+	unsigned long flags = 0;
+
+	if (!q) {
+		pr_err("%s: q is NULL\n", __func__);
+		return NULL;
+	}
+
+	rc = wait_for_completion_timeout(&q->prod,
+					 usecs_to_jiffies(timeout_usec));
+	if (!rc)
+		return NULL;
+
+	spin_lock_irqsave(&q->lock, flags);
+	if (!list_empty(&q->queue)) {
+		hdr = list_first_entry(&q->queue,
+				       struct efct_mqueue_hdr_s, list_entry);
+		list_del(&hdr->list_entry);
+	}
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	if (hdr) {
+		msgdata = hdr->msgdata;
+		kfree(hdr);
+	}
+	return msgdata;
+}
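+
+/*
+ * Minimal usage sketch (illustrative only; "worker" is assumed to point
+ * to a struct efct_lio_worker_s shared by the producer and consumer):
+ *
+ *	producer:
+ *		efct_mqueue_put(&worker->wq, wq_data);
+ *
+ *	consumer (for example, the worker thread):
+ *		struct efct_lio_wq_data_s *wq_data;
+ *
+ *		wq_data = efct_mqueue_get(&worker->wq, 100000);
+ *		if (wq_data)
+ *			... dispatch on wq_data->message ...
+ *
+ * A timeout of 100000 waits up to 100 ms for a message before the get
+ * returns NULL.
+ */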
+
+/**
+ * @brief free an EFCT message queue
+ *
+ * The message queue and its resources are freed. The queue is drained
+ * and any remaining message headers are freed; a warning is logged for
+ * each undelivered payload, since the payload itself may leak.
+ *
+ * @param q pointer to message queue
+ *
+ * @return none
+ */
+
+static inline void
+efct_mqueue_free(struct efct_mqueue_s *q)
+{
+	struct efct_mqueue_hdr_s *hdr;
+	struct efct_mqueue_hdr_s *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&q->lock, flags);
+	list_for_each_entry_safe(hdr, next, &q->queue, list_entry) {
+		pr_err("Warning: freeing queue, payload %p may leak\n",
+			    hdr->msgdata);
+		kfree(hdr);
+	}
+	spin_unlock_irqrestore(&q->lock, flags);
+}
+
+#endif /*__EFCT_LIO_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 25/32] elx: efct: Hardware IO submission routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (23 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 24/32] elx: efct: LIO backend interface routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 26/32] elx: efct: link statistics and SFP data James Smart
                   ` (7 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines that write IO to Work queue, send SRRs and raw frames.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 723 +++++++++++++++++++++++++++++++++++++++-
 drivers/scsi/elx/efct/efct_hw.h |  19 ++
 2 files changed, 741 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 9ce31326ce38..5e0ecd621f91 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -59,6 +59,8 @@ static void
 efct_hw_wq_process_io(void *arg, u8 *cqe, int status);
 static void
 efct_hw_wq_process_abort(void *arg, u8 *cqe, int status);
+static void
+hw_wq_submit_pending(struct hw_wq_s *wq, u32 update_free_count);
 
 static enum efct_hw_rtn_e
 efct_hw_link_event_init(struct efct_hw_s *hw)
@@ -3774,7 +3776,7 @@ efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
 	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI &&
 		io->done) {
 		efct_hw_done_t done = io->done;
-		void		*arg = io->arg;
+		void *arg = io->arg;
 
 		io->done = NULL;
 
@@ -3903,3 +3905,722 @@ efct_hw_flush(struct efct_hw_s *hw)
 
 	return 0;
 }
+
+/**
+ * @brief Write a HW IO to a work queue.
+ *
+ * @par Description
+ * A HW IO is written to a work queue.
+ *
+ * @param wq Pointer to work queue.
+ * @param wqe Pointer to WQ entry.
+ *
+ * @n @b Note: Assumes the SLI-4 queue lock is held.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+static int
+_efct_hw_wq_write(struct hw_wq_s *wq, struct efct_hw_wqe_s *wqe)
+{
+	int rc;
+	int queue_rc;
+
+	/* Every so often, set the wqec bit to generate consumed completions */
+	if (wq->wqec_count)
+		wq->wqec_count--;
+
+	if (wq->wqec_count == 0) {
+		struct sli4_generic_wqe_s *genwqe = (void *)wqe->wqebuf;
+
+		genwqe->cmdtype_wqec_byte |= SLI4_GEN_WQE_WQEC;
+		wq->wqec_count = wq->wqec_set_count;
+	}
+
+	/* Decrement WQ free count */
+	wq->free_count--;
+
+	queue_rc = sli_wq_write(&wq->hw->sli, wq->queue, wqe->wqebuf);
+
+	if (queue_rc < 0)
+		rc = -1;
+	else
+		rc = 0;
+
+	return rc;
+}
+
+/**
+ * @brief Write a HW IO to a work queue.
+ *
+ * @par Description
+ * A HW IO is written to a work queue.
+ *
+ * @param wq Pointer to work queue.
+ * @param wqe Pointer to WQE entry.
+ *
+ * @n @b Note: Takes the SLI-4 queue lock.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_hw_wq_write(struct hw_wq_s *wq, struct efct_hw_wqe_s *wqe)
+{
+	int rc = 0;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+	if (!list_empty(&wq->pending_list)) {
+		INIT_LIST_HEAD(&wqe->list_entry);
+		list_add_tail(&wqe->list_entry, &wq->pending_list);
+		wq->wq_pending_count++;
+		while ((wq->free_count > 0) &&
+		       ((wqe = list_first_entry(&wq->pending_list,
+					struct efct_hw_wqe_s, list_entry))
+			 != NULL)) {
+			list_del(&wqe->list_entry);
+			rc = _efct_hw_wq_write(wq, wqe);
+			if (rc < 0)
+				break;
+			if (wqe->abort_wqe_submit_needed) {
+				wqe->abort_wqe_submit_needed = false;
+				sli_abort_wqe(&wq->hw->sli,
+					      wqe->wqebuf,
+					      wq->hw->sli.wqe_size,
+					      SLI_ABORT_XRI,
+					      wqe->send_abts, wqe->id,
+					      0, wqe->abort_reqtag,
+					      SLI4_CQ_DEFAULT);
+
+				INIT_LIST_HEAD(&wqe->list_entry);
+				list_add_tail(&wqe->list_entry,
+					      &wq->pending_list);
+				wq->wq_pending_count++;
+			}
+		}
+	} else {
+		if (wq->free_count > 0) {
+			rc = _efct_hw_wq_write(wq, wqe);
+		} else {
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+
+	return rc;
+}
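+
+/*
+ * Typical submission pattern (a sketch; see efct_hw_srrs_send() and
+ * efct_hw_io_send() later in this file): the caller formats
+ * io->wqe.wqebuf with one of the sli_*_wqe() helpers and then queues it:
+ *
+ *	rc = efct_hw_wq_write(io->wq, &io->wqe);
+ *
+ * A negative return means the low-level queue write failed; a
+ * non-negative return means the WQE was either written to the work
+ * queue or parked on the pending list for later submission.
+ */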
+
+/**
+ * @brief Update free count and submit any pending HW IOs
+ *
+ * @par Description
+ * The WQ free count is updated, and any pending HW IOs that will fit
+ * in the queue are submitted.
+ *
+ * @param wq Pointer to work queue.
+ * @param update_free_count Value added to the WQ's free count.
+ *
+ * @return None.
+ */
+static void
+hw_wq_submit_pending(struct hw_wq_s *wq, u32 update_free_count)
+{
+	struct efct_hw_wqe_s *wqe;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+
+	/* Update free count with value passed in */
+	wq->free_count += update_free_count;
+
+	while ((wq->free_count > 0) && (!list_empty(&wq->pending_list))) {
+		wqe = list_first_entry(&wq->pending_list,
+				       struct efct_hw_wqe_s, list_entry);
+		list_del(&wqe->list_entry);
+		_efct_hw_wq_write(wq, wqe);
+
+		if (wqe->abort_wqe_submit_needed) {
+			wqe->abort_wqe_submit_needed = false;
+			sli_abort_wqe(&wq->hw->sli, wqe->wqebuf,
+				      wq->hw->sli.wqe_size,
+				      SLI_ABORT_XRI, wqe->send_abts, wqe->id,
+				      0, wqe->abort_reqtag, SLI4_CQ_DEFAULT);
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+}
+
+/**
+ * @ingroup io
+ * @brief Send a Single Request/Response Sequence (SRRS).
+ *
+ * @par Description
+ * This routine supports communication sequences consisting of a single
+ * request and single response between two endpoints. Examples include:
+ *  - Sending an ELS request.
+ *  - Sending an ELS response - To send an ELS response, the caller must provide
+ * the OX_ID from the received request.
+ *  - Sending a FC Common Transport (FC-CT) request - To send a FC-CT request,
+ * the caller must provide the R_CTL, TYPE, and DF_CTL
+ * values to place in the FC frame header.
+ *  .
+ * @n @b Note: The caller is expected to provide both send and receive
+ * buffers for requests. In the case of sending a response, no receive buffer
+ * is necessary and the caller may pass in a NULL pointer.
+ *
+ * @param hw Hardware context.
+ * @param type Type of sequence (ELS request/response, FC-CT).
+ * @param io Previously-allocated HW IO object.
+ * @param send DMA memory holding data to send (for example, ELS request, BLS
+ * response).
+ * @param len Length, in bytes, of data to send.
+ * @param receive Optional DMA memory to hold a response.
+ * @param rnode Destination of data (that is, a remote node).
+ * @param iparam IO parameters (ELS response and FC-CT).
+ * @param cb Function call upon completion of sending the data (may be NULL).
+ * @param arg Argument to pass to IO completion function.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_srrs_send(struct efct_hw_s *hw, enum efct_hw_io_type_e type,
+		  struct efct_hw_io_s *io,
+		  struct efc_dma_s *send, u32 len,
+		  struct efc_dma_s *receive, struct efc_remote_node_s *rnode,
+		  union efct_hw_io_param_u *iparam,
+		  efct_hw_srrs_cb_t cb, void *arg)
+{
+	struct sli4_sge_s	*sge = NULL;
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_SUCCESS;
+	u16	local_flags = 0;
+	u32 sge0_flags;
+	u32 sge1_flags;
+
+	if (!io || !rnode || !iparam) {
+		pr_err("bad parm hw=%p io=%p s=%p r=%p rn=%p iparm=%p\n",
+			hw, io, send, receive, rnode, iparam);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_test(hw->os,
+			      "cannot send SRRS, HW state=%d\n", hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	io->rnode = rnode;
+	io->type  = type;
+	io->done = cb;
+	io->arg  = arg;
+
+	sge = io->sgl->virt;
+
+	/* clear both SGE */
+	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge_s));
+
+	sge0_flags = sge[0].dw2_flags;
+	sge1_flags = sge[1].dw2_flags;
+	if (send) {
+		sge[0].buffer_address_high =
+			cpu_to_le32(upper_32_bits(send->phys));
+		sge[0].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(send->phys));
+
+		sge0_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+
+		sge[0].buffer_length = cpu_to_le32(len);
+	}
+
+	if (type == EFCT_HW_ELS_REQ || type == EFCT_HW_FC_CT) {
+		sge[1].buffer_address_high =
+			cpu_to_le32(upper_32_bits(receive->phys));
+		sge[1].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(receive->phys));
+
+		sge1_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		sge1_flags |= SLI4_SGE_LAST;
+
+		sge[1].buffer_length = cpu_to_le32(receive->size);
+	} else {
+		sge0_flags |= SLI4_SGE_LAST;
+	}
+
+	sge[0].dw2_flags = cpu_to_le32(sge0_flags);
+	sge[1].dw2_flags = cpu_to_le32(sge1_flags);
+
+	switch (type) {
+	case EFCT_HW_ELS_REQ:
+		if (!send ||
+		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
+					  hw->sli.wqe_size, io->sgl,
+					*((u8 *)send->virt),
+					len, receive->size,
+					iparam->els.timeout,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, rnode->indicator,
+					rnode->sport->indicator,
+					rnode->node_group, rnode->attached,
+					rnode->fc_id, rnode->sport->fc_id)) {
+			efc_log_err(hw->os, "REQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_ELS_RSP:
+		if (!send ||
+		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, send, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, iparam->els.ox_id,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->node_group, rnode->attached,
+					rnode->fc_id,
+					local_flags, U32_MAX)) {
+			efc_log_err(hw->os, "RSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_ELS_RSP_SID:
+		if (!send ||
+		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, send, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					iparam->els_sid.ox_id,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->node_group, rnode->attached,
+					rnode->fc_id,
+					local_flags, iparam->els_sid.s_id)) {
+			efc_log_err(hw->os, "RSP (SID) WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_FC_CT:
+		if (!send ||
+		    sli_gen_request64_wqe(&hw->sli, io->wqe.wqebuf,
+					  hw->sli.wqe_size, io->sgl,
+					len, receive->size,
+					iparam->fc_ct.timeout, io->indicator,
+					io->reqtag, SLI4_CQ_DEFAULT,
+					rnode->node_group, rnode->fc_id,
+					rnode->indicator,
+					iparam->fc_ct.r_ctl,
+					iparam->fc_ct.type,
+					iparam->fc_ct.df_ctl)) {
+			efc_log_err(hw->os, "GEN WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_FC_CT_RSP:
+		if (!send ||
+		    sli_xmit_sequence64_wqe(&hw->sli, io->wqe.wqebuf,
+					    hw->sli.wqe_size, io->sgl,
+					len, iparam->fc_ct_rsp.timeout,
+					iparam->fc_ct_rsp.ox_id,
+					io->indicator, io->reqtag,
+					rnode->node_group, rnode->fc_id,
+					rnode->indicator,
+					iparam->fc_ct_rsp.r_ctl,
+					iparam->fc_ct_rsp.type,
+					iparam->fc_ct_rsp.df_ctl)) {
+			efc_log_err(hw->os, "XMIT SEQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_BLS_ACC:
+	case EFCT_HW_BLS_RJT:
+	{
+		struct sli_bls_payload_s	bls;
+
+		if (type == EFCT_HW_BLS_ACC) {
+			bls.type = SLI4_SLI_BLS_ACC;
+			memcpy(&bls.u.acc, iparam->bls.payload,
+			       sizeof(bls.u.acc));
+		} else {
+			bls.type = SLI4_SLI_BLS_RJT;
+			memcpy(&bls.u.rjt, iparam->bls.payload,
+			       sizeof(bls.u.rjt));
+		}
+
+		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
+		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
+
+		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, &bls,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					rnode->attached, rnode->node_group,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->fc_id, rnode->sport->fc_id,
+					U32_MAX)) {
+			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_BLS_ACC_SID:
+	{
+		struct sli_bls_payload_s	bls;
+
+		bls.type = SLI4_SLI_BLS_ACC;
+		memcpy(&bls.u.acc, iparam->bls_sid.payload,
+		       sizeof(bls.u.acc));
+
+		bls.ox_id = cpu_to_le16(iparam->bls_sid.ox_id);
+		bls.rx_id = cpu_to_le16(iparam->bls_sid.rx_id);
+
+		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, &bls,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					rnode->attached, rnode->node_group,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->fc_id, rnode->sport->fc_id,
+					iparam->bls_sid.s_id)) {
+			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE SID error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "bad SRRS type %#x\n", type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+		if (!io->wq)
+			io->wq = efct_hw_queue_next_wq(hw, io);
+
+		io->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		io->wq->use_count++;
+		efct_hw_add_io_timed_wqe(hw, io);
+		rc = efct_hw_wq_write(io->wq, &io->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "sli_queue_write failed: %d\n", rc);
+			io->xbusy = false;
+			efct_hw_remove_io_timed_wqe(hw, io);
+		}
+	}
+
+	return rc;
+}
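+
+/*
+ * Minimal SRRS caller sketch (illustrative only; rsp_dma, rsp_len,
+ * rx_ox_id and the previously allocated io/rnode are assumed caller
+ * state, and example_els_rsp_done is a hypothetical completion
+ * handler). Sending an ELS response echoes the OX_ID of the received
+ * request and needs no receive buffer:
+ *
+ *	static int example_els_rsp_done(struct efct_hw_io_s *io,
+ *					struct efc_remote_node_s *rnode,
+ *					u32 length, int status,
+ *					u32 ext_status, void *arg)
+ *	{
+ *		... release the response DMA buffer and the HW IO ...
+ *		return 0;
+ *	}
+ *
+ *	union efct_hw_io_param_u iparam = { 0 };
+ *
+ *	iparam.els.ox_id = rx_ox_id;
+ *	rc = efct_hw_srrs_send(hw, EFCT_HW_ELS_RSP, io, rsp_dma, rsp_len,
+ *			       NULL, rnode, &iparam, example_els_rsp_done,
+ *			       NULL);
+ */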
+
+/**
+ * @ingroup io
+ * @brief Send a read, write, or response IO.
+ *
+ * @par Description
+ * This routine supports sending a higher-level IO (for example, FCP) between
+ * two endpoints as a target or initiator. Examples include:
+ *  - Sending read data and good response (target).
+ *  - Sending a response (target with no data or after receiving write data).
+ *  .
+ * This routine assumes all IOs use the SGL associated with the HW IO. Prior to
+ * calling this routine, the data should be loaded using efct_hw_io_add_sge().
+ *
+ * @param hw Hardware context.
+ * @param type Type of IO (target read, target response, and so on).
+ * @param io Previously-allocated HW IO object.
+ * @param len Length, in bytes, of data to send.
+ * @param iparam IO parameters.
+ * @param rnode Destination of data (that is, a remote node).
+ * @param cb Function call upon completion of sending data (may be NULL).
+ * @param arg Argument to pass to IO completion function.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ *
+ */
+enum efct_hw_rtn_e
+efct_hw_io_send(struct efct_hw_s *hw, enum efct_hw_io_type_e type,
+		struct efct_hw_io_s *io,
+		u32 len, union efct_hw_io_param_u *iparam,
+		struct efc_remote_node_s *rnode, void *cb, void *arg)
+{
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_SUCCESS;
+	u32	rpi;
+	bool send_wqe = true;
+
+	if (!io || !rnode || !iparam) {
+		pr_err("bad parm hw=%p io=%p iparam=%p rnode=%p\n",
+			hw, io, iparam, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO, HW state=%d\n",
+			     hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rpi = rnode->indicator;
+
+	/*
+	 * Save state needed during later stages
+	 */
+	io->rnode = rnode;
+	io->type  = type;
+	io->done  = cb;
+	io->arg   = arg;
+
+	/*
+	 * Format the work queue entry used to send the IO
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE: {
+		u16 flags = iparam->fcp_tgt.flags;
+		struct fcp_txrdy *xfer = io->xfer_rdy.virt;
+
+		/*
+		 * Fill in the XFER_RDY for IF_TYPE 0 devices
+		 */
+		xfer->ft_data_ro = cpu_to_be32(iparam->fcp_tgt.offset);
+		xfer->ft_burst_len = cpu_to_be32(len);
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+
+		if (sli_fcp_treceive64_wqe(&hw->sli,
+					   io->wqe.wqebuf,
+					   hw->sli.wqe_size,
+					   &io->def_sgl,
+					   io->first_data_sge,
+					   iparam->fcp_tgt.offset, len,
+					   io->indicator, io->reqtag,
+					   SLI4_CQ_DEFAULT,
+					   iparam->fcp_tgt.ox_id, rpi,
+					   rnode->node_group,
+					   rnode->fc_id, flags,
+					   iparam->fcp_tgt.dif_oper,
+					   iparam->fcp_tgt.blk_size,
+					   iparam->fcp_tgt.cs_ctl,
+					   iparam->fcp_tgt.app_id)) {
+			efc_log_err(hw->os, "TRECEIVE WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_READ: {
+		u16 flags = iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+		if (sli_fcp_tsend64_wqe(&hw->sli, io->wqe.wqebuf,
+					hw->sli.wqe_size, &io->def_sgl,
+					io->first_data_sge,
+					iparam->fcp_tgt.offset, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, iparam->fcp_tgt.ox_id,
+					rpi, rnode->node_group,
+					rnode->fc_id, flags,
+					iparam->fcp_tgt.dif_oper,
+					iparam->fcp_tgt.blk_size,
+					iparam->fcp_tgt.cs_ctl,
+					iparam->fcp_tgt.app_id)) {
+			efc_log_err(hw->os, "TSEND WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_RSP: {
+		u16 flags = iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+		if (sli_fcp_trsp64_wqe(&hw->sli, io->wqe.wqebuf,
+				       hw->sli.wqe_size, &io->def_sgl,
+				       len, io->indicator, io->reqtag,
+				       SLI4_CQ_DEFAULT, iparam->fcp_tgt.ox_id,
+					rpi, rnode->node_group, rnode->fc_id,
+					flags, iparam->fcp_tgt.cs_ctl,
+				       0, iparam->fcp_tgt.app_id)) {
+			efc_log_err(hw->os, "TRSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (send_wqe && rc == EFCT_HW_RTN_SUCCESS) {
+		if (!io->wq)
+			io->wq = efct_hw_queue_next_wq(hw, io);
+
+		io->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		hw->tcmd_wq_submit[io->wq->instance]++;
+		io->wq->use_count++;
+		efct_hw_add_io_timed_wqe(hw, io);
+		rc = efct_hw_wq_write(io->wq, &io->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "sli_queue_write failed: %d\n", rc);
+			io->xbusy = false;
+			efct_hw_remove_io_timed_wqe(hw, io);
+		}
+	}
+
+	return rc;
+}
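+
+/*
+ * Minimal target-read sketch (illustrative only; ox_id, xfer_len and
+ * the previously allocated hio/rnode are assumed caller state). The
+ * SGL is loaded with efct_hw_io_add_sge() first, then the IO is posted
+ * with a NULL completion callback:
+ *
+ *	union efct_hw_io_param_u iparam = { 0 };
+ *
+ *	iparam.fcp_tgt.ox_id = ox_id;
+ *	iparam.fcp_tgt.offset = 0;
+ *	rc = efct_hw_io_send(hw, EFCT_HW_IO_TARGET_READ, hio, xfer_len,
+ *			     &iparam, rnode, NULL, NULL);
+ */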
+
+/**
+ * @brief Send a raw frame
+ *
+ * @par Description
+ * Using the SEND_FRAME_WQE, a frame consisting of header and payload is sent.
+ *
+ * @param hw Pointer to HW object.
+ * @param hdr Pointer to a little endian formatted FC header.
+ * @param sof Value to use as the frame SOF.
+ * @param eof Value to use as the frame EOF.
+ * @param payload Pointer to payload DMA buffer.
+ * @param ctx Pointer to caller provided send frame context.
+ * @param callback Callback function.
+ * @param arg Callback function argument.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_send_frame(struct efct_hw_s *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma_s *payload,
+		   struct efct_hw_send_frame_context_s *ctx,
+		   void (*callback)(void *arg, u8 *cqe, int status),
+		   void *arg)
+{
+	int rc;
+	struct efct_hw_wqe_s *wqe;
+	u32 xri;
+	struct hw_wq_s *wq;
+
+	wqe = &ctx->wqe;
+
+	/* populate the callback object */
+	ctx->hw = hw;
+
+	/* Fetch and populate request tag */
+	ctx->wqcb = efct_hw_reqtag_alloc(hw, callback, arg);
+	if (!ctx->wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+
+	/* Choose a work queue, first look for a class[1] wq, otherwise just
+	 * use wq[0]
+	 */
+	wq = efct_varray_iter_next(hw->wq_class_array[1]);
+	if (!wq)
+		wq = hw->hw_wq[0];
+
+	/* Set XRI and RX_ID in the header based on which WQ, and which
+	 * send_frame_io we are using
+	 */
+	xri = wq->send_frame_io->indicator;
+
+	/* Build the send frame WQE */
+	rc = sli_send_frame_wqe(&hw->sli, wqe->wqebuf,
+				hw->sli.wqe_size, sof, eof,
+				(u32 *)hdr, payload, payload->len,
+				EFCT_HW_SEND_FRAME_TIMEOUT, xri,
+				ctx->wqcb->instance_index);
+	if (rc) {
+		efc_log_err(hw->os, "sli_send_frame_wqe failed: %d\n",
+			     rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Write to WQ */
+	rc = efct_hw_wq_write(wq, wqe);
+	if (rc) {
+		efc_log_err(hw->os, "efct_hw_wq_write failed: %d\n", rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	wq->use_count++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @brief Called to obtain the count for the specified type.
+ *
+ * @param hw Hardware context.
+ * @param io_count_type IO count type (inuse, free, wait_free).
+ *
+ * @return Returns the number of IOs on the specified list type.
+ */
+u32
+efct_hw_io_get_count(struct efct_hw_s *hw,
+		     enum efct_hw_io_count_type_e io_count_type)
+{
+	struct efct_hw_io_s *io = NULL;
+	u32 count = 0;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+
+	switch (io_count_type) {
+	case EFCT_HW_IO_INUSE_COUNT:
+		list_for_each_entry(io, &hw->io_inuse, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_FREE_COUNT:
+		list_for_each_entry(io, &hw->io_free, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_WAIT_FREE_COUNT:
+		list_for_each_entry(io, &hw->io_wait_free, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_N_TOTAL_IO_COUNT:
+		count = hw->config.n_io;
+		break;
+	}
+
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	return count;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 8a487df2338d..7f1c4091d91a 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1112,4 +1112,23 @@ efct_hw_process(struct efct_hw_s *hw, u32 vector, u32 max_isr_time_msec);
 extern int
 efct_hw_queue_hash_find(struct efct_queue_hash_s *hash, u16 id);
 
+int efct_hw_wq_write(struct hw_wq_s *wq, struct efct_hw_wqe_s *wqe);
+enum efct_hw_rtn_e
+efct_hw_send_frame(struct efct_hw_s *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma_s *payload,
+		struct efct_hw_send_frame_context_s *ctx,
+		void (*callback)(void *arg, u8 *cqe, int status),
+		void *arg);
+typedef int (*efct_hw_srrs_cb_t)(struct efct_hw_io_s *io,
+				struct efc_remote_node_s *rnode, u32 length,
+				int status, u32 ext_status, void *arg);
+extern enum efct_hw_rtn_e
+efct_hw_srrs_send(struct efct_hw_s *hw, enum efct_hw_io_type_e type,
+		  struct efct_hw_io_s *io,
+		  struct efc_dma_s *send, u32 len,
+		  struct efc_dma_s *receive, struct efc_remote_node_s *rnode,
+		  union efct_hw_io_param_u *iparam,
+		  efct_hw_srrs_cb_t cb,
+		  void *arg);
+
 #endif /* __EFCT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 26/32] elx: efct: link statistics and SFP data
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (24 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 25/32] elx: efct: Hardware IO submission routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 27/32] elx: efct: xport and hardware teardown routines James Smart
                   ` (6 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to retrieve link stats and SFP transceiver data.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 593 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  39 +++
 2 files changed, 632 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 5e0ecd621f91..f01a54d874b1 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -14,6 +14,50 @@
 
 #define EFCT_HW_REQUE_XRI_REGTAG	65534
 
+struct efct_hw_sfp_cb_arg {
+	void (*cb)(int status, u32 bytes_written,
+		   u32 *data, void *arg);
+	void *arg;
+	struct efc_dma_s payload;
+};
+
+struct efct_hw_temp_cb_arg {
+	void (*cb)(int status, u32 curr_temp,
+		   u32 crit_temp_thrshld,
+		   u32 warn_temp_thrshld,
+		   u32 norm_temp_thrshld,
+		   u32 fan_off_thrshld,
+		   u32 fan_on_thrshld,
+		   void *arg);
+	void *arg;
+};
+
+struct efct_hw_link_stat_cb_arg {
+	void (*cb)(int status,
+		   u32 num_counters,
+		struct efct_hw_link_stat_counts_s *counters,
+		void *arg);
+	void *arg;
+};
+
+struct efct_hw_host_stat_cb_arg {
+	void (*cb)(int status,
+		   u32 num_counters,
+		struct efct_hw_host_stat_counts_s *counters,
+		void *arg);
+	void *arg;
+};
+
+static int
+efct_hw_cb_sfp(struct efct_hw_s *, int, u8 *, void  *);
+static int
+efct_hw_cb_temp(struct efct_hw_s *, int, u8 *, void  *);
+static int
+efct_hw_cb_link_stat(struct efct_hw_s *, int, u8 *, void  *);
+static int
+efct_hw_cb_host_stat(struct efct_hw_s *hw, int status,
+		     u8 *mqe, void  *arg);
+
 /* HW global data */
 struct efct_hw_global_s hw_global;
 static void
@@ -4624,3 +4668,552 @@ efct_hw_io_get_count(struct efct_hw_s *hw,
 
 	return count;
 }
+
+/**
+ * @brief Called when the READ_TRANSCEIVER_DATA command completes.
+ *
+ * @par Description
+ * Get the number of bytes read out of the response, free the mailbox that was
+ * malloc'd by efct_hw_get_sfp(), then call the callback and pass the status
+ * and bytes written.
+ *
+ * @param hw Hardware context.
+ * @param status Status field from the mbox completion.
+ * @param mqe Mailbox response structure.
+ * @param arg Pointer to a callback function that signals the caller that the
+ * command is done.
+ * The callback function prototype is
+ * void cb(int status, u32 bytes_written, u32 *data, void *arg).
+ *
+ * @return Returns 0.
+ */
+static int
+efct_hw_cb_sfp(struct efct_hw_s *hw, int status, u8 *mqe, void  *arg)
+{
+	struct efct_hw_sfp_cb_arg *cb_arg = arg;
+	struct sli4_rsp_cmn_read_transceiver_data_s *mbox_rsp;
+	struct efct_s *efct = hw->os;
+	u32 bytes_written;
+
+	if (cb_arg) {
+		/* Only dereference the payload once cb_arg is known valid */
+		mbox_rsp = cb_arg->payload.virt;
+		bytes_written = le32_to_cpu(mbox_rsp->hdr.response_length);
+		if (cb_arg->cb) {
+			if (!status && mbox_rsp->hdr.status)
+				status = mbox_rsp->hdr.status;
+			cb_arg->cb(status, bytes_written, mbox_rsp->page_data,
+				   cb_arg->arg);
+		}
+
+		dma_free_coherent(&efct->pcidev->dev,
+				  cb_arg->payload.size, cb_arg->payload.virt,
+				  cb_arg->payload.phys);
+		memset(&cb_arg->payload, 0, sizeof(struct efc_dma_s));
+		kfree(cb_arg);
+	}
+
+	kfree(mqe);
+	return 0;
+}
+
+/**
+ * @ingroup io
+ * @brief Function to retrieve the SFP information.
+ *
+ * @param hw Hardware context.
+ * @param page The page of SFP data to retrieve (0xa0 or 0xa2).
+ * @param cb Function call upon completion of sending the data (may be NULL).
+ * @param arg Argument to pass to IO completion function.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS, EFCT_HW_RTN_ERROR, or
+ * EFCT_HW_RTN_NO_MEMORY.
+ */
+enum efct_hw_rtn_e
+efct_hw_get_sfp(struct efct_hw_s *hw, u16 page,
+		void (*cb)(int, u32, u32 *, void *), void *arg)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_sfp_cb_arg *cb_arg;
+	u8 *mbxdata;
+	struct efct_s *efct = hw->os;
+	struct efc_dma_s *dma;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+	memset(cb_arg, 0, sizeof(struct efct_hw_sfp_cb_arg));
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* payload holds the non-embedded portion */
+	dma = &cb_arg->payload;
+	dma->size = sizeof(struct sli4_rsp_cmn_read_transceiver_data_s);
+	dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+				       dma->size, &dma->phys, GFP_DMA);
+	if (!dma->virt) {
+		kfree(cb_arg);
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/* Send the HW command */
+	if (!sli_cmd_common_read_transceiver_data(&hw->sli, mbxdata,
+						 SLI4_BMBX_SIZE, page,
+						 &cb_arg->payload))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_sfp, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os,
+			      "READ_TRANSCEIVER_DATA failed with status %d\n",
+			     rc);
+		dma_free_coherent(&efct->pcidev->dev,
+				  cb_arg->payload.size, cb_arg->payload.virt,
+				  cb_arg->payload.phys);
+		memset(&cb_arg->payload, 0, sizeof(struct efc_dma_s));
+		kfree(cb_arg);
+		kfree(mbxdata);
+	}
+
+	return rc;
+}
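+
+/*
+ * Illustrative caller sketch (example_sfp_done is hypothetical):
+ * request SFP page 0xa2 and consume the raw page bytes in the
+ * completion callback, which runs when the mailbox command finishes.
+ *
+ *	static void example_sfp_done(int status, u32 bytes_written,
+ *				     u32 *data, void *arg)
+ *	{
+ *		if (!status)
+ *			... copy bytes_written bytes from data ...
+ *	}
+ *
+ *	rc = efct_hw_get_sfp(hw, 0xa2, example_sfp_done, NULL);
+ */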
+
+/**
+ * @brief Function to retrieve the temperature information.
+ *
+ * @param hw Hardware context.
+ * @param cb Function call upon completion of sending the data (may be NULL).
+ * @param arg Argument to pass to IO completion function.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS, EFCT_HW_RTN_ERROR, or
+ * EFCT_HW_RTN_NO_MEMORY.
+ */
+enum efct_hw_rtn_e
+efct_hw_get_temperature(struct efct_hw_s *hw,
+			void (*cb)(int status,
+				   u32 curr_temp,
+				u32 crit_temp_thrshld,
+				u32 warn_temp_thrshld,
+				u32 norm_temp_thrshld,
+				u32 fan_off_thrshld,
+				u32 fan_on_thrshld,
+				void *arg),
+			void *arg)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_temp_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_dump_type4(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+			       SLI4_WKI_TAG_SAT_TEM))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_temp, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "DUMP_TYPE4 failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+/**
+ * @brief Called when the DUMP command completes.
+ *
+ * @par Description
+ * Get the temperature data out of the response, free the mailbox that was
+ * malloc'd by efct_hw_get_temperature(), then call the callback and pass the
+ * status and data.
+ *
+ * @param hw Hardware context.
+ * @param status Status field from the mbox completion.
+ * @param mqe Mailbox response structure.
+ * @param arg Pointer to a callback function that signals the caller that the
+ * command is done.
+ * The callback function prototype is defined by efct_hw_temp_cb_t.
+ *
+ * @return Returns 0.
+ */
+static int
+efct_hw_cb_temp(struct efct_hw_s *hw, int status, u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_dump4_s *mbox_rsp = (struct sli4_cmd_dump4_s *)mqe;
+	struct efct_hw_temp_cb_arg *cb_arg = arg;
+	u32 curr_temp = le32_to_cpu(mbox_rsp->resp_data[0]); /* word 5 */
+	u32 crit_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[1]); /* word 6 */
+	u32 warn_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[2]); /* word 7 */
+	u32 norm_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[3]); /* word 8 */
+	u32 fan_off_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[4]);   /* word 9 */
+	u32 fan_on_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[5]);    /* word 10 */
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status,
+				   curr_temp,
+				   crit_temp_thrshld,
+				   warn_temp_thrshld,
+				   norm_temp_thrshld,
+				   fan_off_thrshld,
+				   fan_on_thrshld,
+				   cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return 0;
+}
+
+/**
+ * @brief Function to retrieve the link statistics.
+ *
+ * @param hw Hardware context.
+ * @param req_ext_counters If TRUE, then the extended counters will be
+ * requested.
+ * @param clear_overflow_flags If TRUE, then overflow flags will be cleared.
+ * @param clear_all_counters If TRUE, the counters will be cleared.
+ * @param cb Function call upon completion of sending the data (may be NULL).
+ * @param arg Argument to pass to IO completion function.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS, EFCT_HW_RTN_ERROR, or
+ * EFCT_HW_RTN_NO_MEMORY.
+ */
+enum efct_hw_rtn_e
+efct_hw_get_link_stats(struct efct_hw_s *hw,
+		       u8 req_ext_counters,
+		       u8 clear_overflow_flags,
+		       u8 clear_all_counters,
+		       void (*cb)(int status,
+				  u32 num_counters,
+			struct efct_hw_link_stat_counts_s *counters,
+			void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_link_stat_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_read_link_stats(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+				    req_ext_counters,
+				    clear_overflow_flags,
+				    clear_all_counters))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_link_stat, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
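+
+/*
+ * Illustrative caller sketch (example_link_stats_done is hypothetical):
+ * request the extended counter set without clearing anything, then read
+ * each counter and its overflow flag in the completion callback.
+ *
+ *	static void example_link_stats_done(int status, u32 num_counters,
+ *			struct efct_hw_link_stat_counts_s *counters,
+ *			void *arg)
+ *	{
+ *		if (!status)
+ *			... e.g. counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter ...
+ *	}
+ *
+ *	rc = efct_hw_get_link_stats(hw, 1, 0, 0, example_link_stats_done,
+ *				    NULL);
+ */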
+
+/**
+ * @brief Called when the READ_LINK_STAT command completes.
+ *
+ * @par Description
+ * Get the counters out of the response, free the mailbox that was malloc'd
+ * by efct_hw_get_link_stats(), then call the callback and pass the status and
+ * data.
+ *
+ * @param hw Hardware context.
+ * @param status Status field from the mbox completion.
+ * @param mqe Mailbox response structure.
+ * @param arg Pointer to a callback function that signals the caller that the
+ * command is done.
+ * The callback function prototype is defined by efct_hw_link_stat_cb_t.
+ *
+ * @return Returns 0.
+ */
+static int
+efct_hw_cb_link_stat(struct efct_hw_s *hw, int status,
+		     u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_link_stats_s *mbox_rsp;
+	struct efct_hw_link_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_link_stat_counts_s counts[EFCT_HW_LINK_STAT_MAX];
+	u32 num_counters;
+	u32 mbox_rsp_flags = 0;
+
+	mbox_rsp = (struct sli4_cmd_read_link_stats_s *)mqe;
+	mbox_rsp_flags = le32_to_cpu(mbox_rsp->dw1_flags);
+	num_counters = (mbox_rsp_flags & SLI4_READ_LNKSTAT_GEC) ? 20 : 13;
+	memset(counts, 0, sizeof(struct efct_hw_link_stat_counts_s) *
+				 EFCT_HW_LINK_STAT_MAX);
+
+	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W02OF);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W03OF);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W04OF);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W05OF);
+	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W06OF);
+	counts[EFCT_HW_LINK_STAT_CRC_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W07OF);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W08OF);
+	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W09OF);
+	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W10OF);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W11OF);
+	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W12OF);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W13OF);
+	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W14OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W15OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W16OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W17OF);
+	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W18OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W19OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W20OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W21OF);
+	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->linkfail_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssync_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssignal_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->primseq_errcnt);
+	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->inval_txword_errcnt);
+	counts[EFCT_HW_LINK_STAT_CRC_COUNT].counter =
+		le32_to_cpu(mbox_rsp->crc_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].counter =
+		le32_to_cpu(mbox_rsp->primseq_eventtimeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->elastic_bufoverrun_errcnt);
+	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->arbit_fc_al_timeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_rx_buftor_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_rx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofa_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofdti_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofni_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_soff_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_aer_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_rpi_rescnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_xri_rescnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return 0;
+}
+
+/**
+ * @brief Function to retrieve the link and host statistics.
+ *
+ * @param hw Hardware context.
+ * @param cc Clear counters: if TRUE, all counters will be cleared.
+ * @param cb Function call upon completion of receiving the data.
+ * @param arg Argument to pass to the completion callback function.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS, EFCT_HW_RTN_ERROR, or
+ * EFCT_HW_RTN_NO_MEMORY.
+ */
+enum efct_hw_rtn_e
+efct_hw_get_host_stats(struct efct_hw_s *hw, u8 cc,
+		       void (*cb)(int status,
+				  u32 num_counters,
+				  struct efct_hw_host_stat_counts_s *counters,
+				  void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_host_stat_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command to get the host stats */
+	if (!sli_cmd_read_status(&hw->sli, mbxdata, SLI4_BMBX_SIZE, cc))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_host_stat, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "READ_HOST_STATS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+/**
+ * @brief Called when the READ_STATUS command completes.
+ *
+ * @par Description
+ * Get the counters out of the response, free the mailbox that was malloc'd
+ * by efct_hw_get_host_stats(), then call the callback and pass
+ * the status and data.
+ *
+ * @param hw Hardware context.
+ * @param status Status field from the mbox completion.
+ * @param mqe Mailbox response structure.
+ * @param arg Pointer to a callback function that signals the caller that the
+ * command is done.
+ * The callback function prototype is defined by
+ * efct_hw_host_stat_cb_t.
+ *
+ * @return Returns 0.
+ */
+static int
+efct_hw_cb_host_stat(struct efct_hw_s *hw, int status,
+		     u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_status_s *mbox_rsp =
+					(struct sli4_cmd_read_status_s *)mqe;
+	struct efct_hw_host_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_host_stat_counts_s counts[EFCT_HW_HOST_STAT_MAX];
+	u32 num_counters = EFCT_HW_HOST_STAT_MAX;
+
+	memset(counts, 0, sizeof(struct efct_hw_host_stat_counts_s) *
+		   EFCT_HW_HOST_STAT_MAX);
+
+	counts[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_orig);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_resp);
+	counts[EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_p_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_F_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_f_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_rq_buf_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_rq_timeout_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_xri_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_xri_pool_cnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 7f1c4091d91a..b372250c4408 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1130,5 +1130,44 @@ efct_hw_srrs_send(struct efct_hw_s *hw, enum efct_hw_io_type_e type,
 		  union efct_hw_io_param_u *iparam,
 		  efct_hw_srrs_cb_t cb,
 		  void *arg);
+/* Function for retrieving SFP data */
+extern enum efct_hw_rtn_e
+efct_hw_get_sfp(struct efct_hw_s *hw, u16 page,
+		void (*cb)(int, u32, u32 *, void *), void *arg);
+
+/* Function for retrieving temperature data */
+extern enum efct_hw_rtn_e
+efct_hw_get_temperature(struct efct_hw_s *hw,
+			void (*efct_hw_temp_cb_t)(int status,
+						  u32 curr_temp,
+				u32 crit_temp_thrshld,
+				u32 warn_temp_thrshld,
+				u32 norm_temp_thrshld,
+				u32 fan_off_thrshld,
+				u32 fan_on_thrshld,
+				void *arg),
+			void *arg);
+
+/* Function for retrieving link statistics */
+extern enum efct_hw_rtn_e
+efct_hw_get_link_stats(struct efct_hw_s *hw,
+		       u8 req_ext_counters,
+		u8 clear_overflow_flags,
+		u8 clear_all_counters,
+		void (*efct_hw_link_stat_cb_t)(int status,
+					       u32 num_counters,
+			struct efct_hw_link_stat_counts_s *counters,
+			void *arg),
+		void *arg);
+/* Function for retrieving host statistics */
+extern enum efct_hw_rtn_e
+efct_hw_get_host_stats(struct efct_hw_s *hw,
+		       u8 cc,
+		void (*efct_hw_host_stat_cb_t)(int status,
+					       u32 num_counters,
+			struct efct_hw_host_stat_counts_s *counters,
+			void *arg),
+		void *arg);
+
 
 #endif /* __EFCT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 27/32] elx: efct: xport and hardware teardown routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (25 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 26/32] elx: efct: link statistics and SFP data James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 28/32] elx: efct: IO timeout handling routines James Smart
                   ` (5 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to detach xport and hardware objects.
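
As a rough illustration, the expected teardown ordering from a device
remove path is sketched below; the wrapper function name is made up for
illustration only, and the real hook-up lives in the driver init/teardown
code:

  static void efct_device_remove_sketch(struct efct_s *efct)
  {
          struct efct_xport_s *xport = efct->xport;

          /* unhook LIO/SCSI and the FC transport, stop timers, tear down HW */
          efct_xport_detach(xport);

          /* release the transport object and its IO pool */
          efct_xport_free(xport);
          efct->xport = NULL;
  }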

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c    | 499 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h    |  31 +++
 drivers/scsi/elx/efct/efct_xport.c | 483 +++++++++++++++++++++++++++++++++++
 3 files changed, 1013 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index f01a54d874b1..48cdbeebd058 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -5217,3 +5217,502 @@ efct_hw_cb_host_stat(struct efct_hw_s *hw, int status,
 
 	return 0;
 }
+
+/**
+ * @brief Called when the port control command completes.
+ *
+ * @par Description
+ * We only need to free the mailbox command buffer.
+ *
+ * @param hw Hardware context.
+ * @param status Status field from the mbox completion.
+ * @param mqe Mailbox response structure.
+ * @param arg Callback argument (not used by this handler).
+ *
+ * @return Returns 0.
+ */
+static int
+efct_hw_cb_port_control(struct efct_hw_s *hw, int status, u8 *mqe,
+			void  *arg)
+{
+	kfree(mqe);
+	return 0;
+}
+
+/**
+ * @ingroup port
+ * @brief Control a port (initialize, shutdown, or set link configuration).
+ *
+ * @par Description
+ * This function controls a port depending on the @c ctrl parameter:
+ * - @b EFCT_HW_PORT_INIT -
+ * Issues the CONFIG_LINK and INIT_LINK commands for the specified port.
+ * The HW generates an EFC_HW_DOMAIN_FOUND event when the link comes up.
+ * .
+ * - @b EFCT_HW_PORT_SHUTDOWN -
+ * Issues the DOWN_LINK command for the specified port.
+ * The HW generates an EFC_HW_DOMAIN_LOST event when the link is down.
+ * .
+ * - @b EFCT_HW_PORT_SET_LINK_CONFIG -
+ * Sets the link configuration.
+ *
+ * @param hw Hardware context.
+ * @param ctrl Specifies the operation:
+ * - EFCT_HW_PORT_INIT
+ * - EFCT_HW_PORT_SHUTDOWN
+ * - EFCT_HW_PORT_SET_LINK_CONFIG
+ *
+ * @param value Operation-specific value.
+ * - EFCT_HW_PORT_INIT - Selective reset AL_PA
+ * - EFCT_HW_PORT_SHUTDOWN - N/A
+ *
+ * @param cb Callback function to invoke when the operation completes.
+ * - EFCT_HW_PORT_INIT/EFCT_HW_PORT_SHUTDOWN - NULL (link events
+ * are handled by the EFCT_HW_CB_DOMAIN callbacks).
+ *
+ * @param arg Argument passed to the callback when the command completes.
+ * - EFCT_HW_PORT_INIT/EFCT_HW_PORT_SHUTDOWN - NULL (link events
+ * are handled by the EFCT_HW_CB_DOMAIN callbacks).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_port_control(struct efct_hw_s *hw, enum efct_hw_port_e ctrl,
+		     uintptr_t value,
+		void (*cb)(int status, uintptr_t value, void *arg),
+		void *arg)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_ERROR;
+
+	switch (ctrl) {
+	case EFCT_HW_PORT_INIT:
+	{
+		u8	*init_link;
+		u32 speed = 0;
+		u8 reset_alpa = 0;
+
+		u8	*cfg_link;
+
+		cfg_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!cfg_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		if (!sli_cmd_config_link(&hw->sli, cfg_link,
+					SLI4_BMBX_SIZE))
+			rc = efct_hw_command(hw, cfg_link,
+					     EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control,
+					     NULL);
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(cfg_link);
+			efc_log_err(hw->os, "CONFIG_LINK failed\n");
+			break;
+		}
+		speed = hw->config.speed;
+		reset_alpa = (u8)(value & 0xff);
+
+		/* Allocate a new buffer for the init_link command */
+		init_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!init_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		rc = EFCT_HW_RTN_ERROR;
+		if (!sli_cmd_init_link(&hw->sli, init_link, SLI4_BMBX_SIZE,
+				      speed, reset_alpa))
+			rc = efct_hw_command(hw, init_link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(init_link);
+			efc_log_err(hw->os, "INIT_LINK failed\n");
+		}
+		break;
+	}
+	case EFCT_HW_PORT_SHUTDOWN:
+	{
+		u8	*down_link;
+
+		down_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!down_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		if (!sli_cmd_down_link(&hw->sli, down_link, SLI4_BMBX_SIZE))
+			rc = efct_hw_command(hw, down_link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(down_link);
+			efc_log_err(hw->os, "DOWN_LINK failed\n");
+		}
+		break;
+	}
+	default:
+		efc_log_test(hw->os, "unhandled control %#x\n", ctrl);
+		break;
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup devInitShutdown
+ * @brief Tear down the Hardware Abstraction Layer module.
+ *
+ * @par Description
+ * Frees memory structures needed by the device, and shuts down the device.
+ * Does not free the HW context memory (which is done by the caller).
+ *
+ * @param hw Hardware context allocated by the caller.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_teardown(struct efct_hw_s *hw)
+{
+	u32	i = 0;
+	u32	iters = 10;
+	u32	max_rpi;
+	u32 destroy_queues;
+	u32 free_memory;
+	struct efc_dma_s *dma;
+	struct efct_s *efct = hw->os;
+
+	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
+	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
+
+	/* shutdown target wqe timer */
+	shutdown_target_wqe_timer(hw);
+
+	/* Cancel watchdog timer if enabled */
+	if (hw->watchdog_timeout) {
+		hw->watchdog_timeout = 0;
+		efct_hw_config_watchdog_timer(hw);
+	}
+
+	/* Cancel Sliport Healthcheck */
+	if (hw->sliport_healthcheck) {
+		hw->sliport_healthcheck = 0;
+		efct_hw_config_sli_port_health_check(hw, 0, 0);
+	}
+
+	if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+
+		efct_hw_flush(hw);
+
+		/*
+		 * If there are outstanding commands, wait for them to complete
+		 */
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some cmds still pending on MQ queue\n");
+
+		/* Cancel any remaining commands */
+		efct_hw_command_cancel(hw);
+	} else {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+	}
+
+	max_rpi = hw->sli.qinfo.max_qcount[SLI_RSRC_RPI];
+	if (hw->rpi_ref) {
+		for (i = 0; i < max_rpi; i++) {
+			u32 count;
+
+			count = atomic_read(&hw->rpi_ref[i].rpi_count);
+			if (count)
+				efc_log_debug(hw->os,
+					       "non-zero ref [%d]=%d\n",
+					       i, count);
+		}
+		kfree(hw->rpi_ref);
+		hw->rpi_ref = NULL;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  hw->rnode_mem.size, hw->rnode_mem.virt,
+			  hw->rnode_mem.phys);
+	memset(&hw->rnode_mem, 0, sizeof(struct efc_dma_s));
+
+	if (hw->io) {
+		for (i = 0; i < hw->config.n_io; i++) {
+			if (hw->io[i] && hw->io[i]->sgl &&
+			    hw->io[i]->sgl->virt) {
+				dma_free_coherent(&efct->pcidev->dev,
+						  hw->io[i]->sgl->size,
+						  hw->io[i]->sgl->virt,
+						  hw->io[i]->sgl->phys);
+				memset(&hw->io[i]->sgl, 0,
+				       sizeof(struct efc_dma_s));
+			}
+			kfree(hw->io[i]);
+			hw->io[i] = NULL;
+		}
+		kfree(hw->io);
+		hw->io = NULL;
+		kfree(hw->wqe_buffs);
+		hw->wqe_buffs = NULL;
+	}
+
+	dma = &hw->xfer_rdy;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma_s));
+
+	dma = &hw->dump_sges;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma_s));
+
+	dma = &hw->loop_map;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma_s));
+
+	for (i = 0; i < hw->wq_count; i++)
+		sli_queue_free(&hw->sli, &hw->wq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->rq_count; i++)
+		sli_queue_free(&hw->sli, &hw->rq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->mq_count; i++)
+		sli_queue_free(&hw->sli, &hw->mq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->cq_count; i++)
+		sli_queue_free(&hw->sli, &hw->cq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_free(&hw->sli, &hw->eq[i], destroy_queues,
+			       free_memory);
+
+	efct_hw_qtop_free(hw->qtop);
+
+	/* Free rq buffers */
+	efct_hw_rx_free(hw);
+
+	efct_hw_queue_teardown(hw);
+
+	if (sli_teardown(&hw->sli))
+		efc_log_err(hw->os, "SLI teardown failed\n");
+
+	/* record the fact that the queues are non-functional */
+	hw->state = EFCT_HW_STATE_UNINITIALIZED;
+
+	/* free sequence free pool */
+	efct_array_free(hw->seq_pool);
+	hw->seq_pool = NULL;
+
+	/* free hw_wq_callback pool */
+	efct_pool_free(hw->wq_reqtag_pool);
+
+	/* Mark HW setup as not having been called */
+	hw->hw_setup_called = false;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn_e
+efct_hw_sli_reset(struct efct_hw_s *hw, enum efct_hw_reset_e reset,
+		  enum efct_hw_state_e prev_state)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+
+	switch (reset) {
+	case EFCT_HW_RESET_FUNCTION:
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_RESET_FIRMWARE:
+		efc_log_debug(hw->os, "issuing firmware reset\n");
+		if (sli_fw_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_soft_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		/*
+		 * Because the FW reset leaves the FW in a non-running state,
+		 * follow that with a regular reset.
+		 */
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	default:
+		efc_log_err(hw->os,
+			     "unknown reset type - no reset performed\n");
+		hw->state = prev_state;
+		rc = EFCT_HW_RTN_INVALID_ARG;
+		break;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn_e
+efct_hw_reset(struct efct_hw_s *hw, enum efct_hw_reset_e reset)
+{
+	u32	i;
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+	u32	iters;
+	enum efct_hw_state_e prev_state = hw->state;
+	unsigned long flags = 0;
+	struct efct_hw_io_s *temp;
+	u32 destroy_queues;
+	u32 free_memory;
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE)
+		efc_log_test(hw->os,
+			      "HW state %d is not active\n", hw->state);
+
+	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
+	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
+	hw->state = EFCT_HW_STATE_RESET_IN_PROGRESS;
+
+	/*
+	 * If the prev_state is already reset/teardown in progress,
+	 * don't continue further
+	 */
+	if (prev_state == EFCT_HW_STATE_RESET_IN_PROGRESS ||
+	    prev_state == EFCT_HW_STATE_TEARDOWN_IN_PROGRESS)
+		return efct_hw_sli_reset(hw, reset, prev_state);
+
+	/* shutdown target wqe timer */
+	shutdown_target_wqe_timer(hw);
+
+	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
+		efct_hw_flush(hw);
+
+		/*
+		 * If a mailbox command requiring a DMA is outstanding
+		 * (SFP/DDM), then the FW will UE when the reset is issued.
+		 * So attempt to complete all mailbox commands.
+		 */
+		iters = 10;
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some commands still pending on MQ queue\n");
+	}
+
+	/* Reset the chip */
+	rc = efct_hw_sli_reset(hw, reset, prev_state);
+	if (rc == EFCT_HW_RTN_INVALID_ARG)
+		return EFCT_HW_RTN_ERROR;
+
+	/* Not safe to walk command/io lists unless they've been initialized */
+	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
+		efct_hw_command_cancel(hw);
+
+		/* Try to clean up the io_inuse list */
+		efct_hw_io_cancel(hw);
+
+		efct_hw_link_event_init(hw);
+
+		spin_lock_irqsave(&hw->io_lock, flags);
+			/*
+			 * The io lists should be empty, but remove any that
+			 * didn't get cleaned up.
+			 */
+			while (!list_empty(&hw->io_timed_wqe)) {
+				temp = list_first_entry(&hw->io_timed_wqe,
+							struct efct_hw_io_s,
+							wqe_link);
+				list_del(&temp->wqe_link);
+			}
+
+			while (!list_empty(&hw->io_free)) {
+				temp = list_first_entry(&hw->io_free,
+							struct efct_hw_io_s,
+							list_entry);
+				list_del(&temp->list_entry);
+			}
+
+			while (!list_empty(&hw->io_wait_free)) {
+				temp = list_first_entry(&hw->io_wait_free,
+							struct efct_hw_io_s,
+							list_entry);
+				list_del(&temp->list_entry);
+			}
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+
+		for (i = 0; i < hw->wq_count; i++)
+			sli_queue_free(&hw->sli, &hw->wq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->rq_count; i++)
+			sli_queue_free(&hw->sli, &hw->rq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->hw_rq_count; i++) {
+			struct hw_rq_s *rq = hw->hw_rq[i];
+
+			if (rq->rq_tracker) {
+				u32 j;
+
+				for (j = 0; j < rq->entry_count; j++)
+					rq->rq_tracker[j] = NULL;
+			}
+		}
+
+		for (i = 0; i < hw->mq_count; i++)
+			sli_queue_free(&hw->sli, &hw->mq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->cq_count; i++)
+			sli_queue_free(&hw->sli, &hw->cq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->eq_count; i++)
+			sli_queue_free(&hw->sli, &hw->eq[i],
+				       destroy_queues, free_memory);
+
+		/* Free rq buffers */
+		efct_hw_rx_free(hw);
+
+		/* Teardown the HW queue topology */
+		efct_hw_queue_teardown(hw);
+
+		/*
+		 * Reset the request tag pool, the HW IO request tags
+		 * are reassigned in efct_hw_setup_io()
+		 */
+		efct_hw_reqtag_reset(hw);
+	} else {
+		/* Free rq buffers */
+		efct_hw_rx_free(hw);
+	}
+
+	return rc;
+}
+
+int
+efct_hw_get_num_eq(struct efct_hw_s *hw)
+{
+	return hw->eq_count;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index b372250c4408..6910dca917a4 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1169,5 +1169,36 @@ efct_hw_get_host_stats(struct efct_hw_s *hw,
 			void *arg),
 		void *arg);
 
+struct hw_eq_s *efct_hw_new_eq(struct efct_hw_s *hw, u32 entry_count);
+struct hw_cq_s *efct_hw_new_cq(struct hw_eq_s *eq, u32 entry_count);
+extern u32
+efct_hw_new_cq_set(struct hw_eq_s *eqs[], struct hw_cq_s *cqs[],
+		   u32 num_cqs, u32 entry_count);
+struct hw_mq_s *efct_hw_new_mq(struct hw_cq_s *cq, u32 entry_count);
+extern struct hw_wq_s
+*efct_hw_new_wq(struct hw_cq_s *cq, u32 entry_count,
+		u32 class, u32 ulp);
+extern struct hw_rq_s
+*efct_hw_new_rq(struct hw_cq_s *cq, u32 entry_count, u32 ulp);
+extern u32
+efct_hw_new_rq_set(struct hw_cq_s *cqs[], struct hw_rq_s *rqs[],
+		   u32 num_rq_pairs, u32 entry_count);
+void efct_hw_del_eq(struct hw_eq_s *eq);
+void efct_hw_del_cq(struct hw_cq_s *cq);
+void efct_hw_del_mq(struct hw_mq_s *mq);
+void efct_hw_del_wq(struct hw_wq_s *wq);
+void efct_hw_del_rq(struct hw_rq_s *rq);
+void efct_hw_queue_dump(struct efct_hw_s *hw);
+void efct_hw_queue_teardown(struct efct_hw_s *hw);
+enum efct_hw_rtn_e efct_hw_teardown(struct efct_hw_s *hw);
+enum efct_hw_rtn_e
+efct_hw_reset(struct efct_hw_s *hw, enum efct_hw_reset_e reset);
+int efct_hw_get_num_eq(struct efct_hw_s *hw);
+
+extern enum efct_hw_rtn_e
+efct_hw_port_control(struct efct_hw_s *hw, enum efct_hw_port_e ctrl,
+		     uintptr_t value,
+		void (*cb)(int status, uintptr_t value, void *arg),
+		void *arg);
 
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index 83782794225f..d43027c57732 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -663,3 +663,486 @@ efct_scsi_release_fc_transport(void)
 
 	return 0;
 }
+
+/**
+ * @brief Detaches the transport from the device.
+ *
+ * @par Description
+ * Performs the functions required to shut down a device.
+ *
+ * @param xport Pointer to transport object.
+ *
+ * @return Returns 0 on success or a non-zero value on failure.
+ */
+int
+efct_xport_detach(struct efct_xport_s *xport)
+{
+	struct efct_s *efct = xport->efct;
+
+	/* free resources associated with target-server and initiator-client */
+	efct_scsi_tgt_del_device(efct);
+
+	efct_scsi_del_device(efct);
+
+	/*Shutdown FC Statistics timer*/
+	del_timer(&efct->xport->stats_timer);
+
+	efct_hw_teardown(&efct->hw);
+
+	efct_xport_delete_debugfs(efct);
+
+	return 0;
+}
+
+/**
+ * @brief domain list empty callback
+ *
+ * @par Description
+ * Function is invoked when the domain is freed. By convention,
+ * @c arg points to a struct completion instance, which is completed.
+ *
+ * @param efc Pointer to the efc_lport object.
+ * @param arg Pointer to completion instance.
+ *
+ * @return None.
+ */
+static void
+efct_xport_domain_free_cb(struct efc_lport *efc, void *arg)
+{
+	struct completion *done = arg;
+
+	complete(done);
+}
+
+/**
+ * @brief post node event callback
+ *
+ * @par Description
+ * This function is called from the mailbox completion interrupt context to
+ * post an event to a node object. By doing this in the interrupt context,
+ * it has the benefit of only posting events in the interrupt context,
+ * deferring the need to create a per event node lock.
+ *
+ * @param hw Pointer to HW structure.
+ * @param status Completion status for mailbox command.
+ * @param mqe Mailbox queue completion entry.
+ * @param arg Callback argument.
+ *
+ * @return Returns 0 on success, a negative error code value on failure.
+ */
+
+static int
+efct_xport_post_node_event_cb(struct efct_hw_s *hw, int status,
+			      u8 *mqe, void *arg)
+{
+	struct efct_xport_post_node_event_s *payload = arg;
+
+	if (payload) {
+		efc_node_post_shutdown(payload->node, payload->evt,
+				       payload->context);
+		complete(&payload->done);
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+	}
+	return 0;
+}
+
+/**
+ * @brief Initiate force free.
+ *
+ * @par Description
+ * Perform force free of EFCT.
+ *
+ * @param xport Pointer to transport object.
+ *
+ * @return None.
+ */
+
+static void
+efct_xport_force_free(struct efct_xport_s *xport)
+{
+	struct efct_s *efct = xport->efct;
+	struct efc_lport *efc = efct->efcport;
+
+	efc_log_debug(efct, "reset required, do force shutdown\n");
+
+	if (!efc->domain) {
+		efc_log_err(efct, "Domain is already freed\n");
+		return;
+	}
+
+	efc_domain_force_free(efc->domain);
+}
+
+/**
+ * @brief Perform transport attach function.
+ *
+ * @par Description
+ * Perform the attach function, which for the FC transport makes a HW call
+ * to bring up the link.
+ *
+ * @param xport Pointer to transport object.
+ * @param cmd Command to execute.
+ *
+ * efct_xport_control(struct efct_xport_s *xport, EFCT_XPORT_PORT_ONLINE)
+ * efct_xport_control(struct efct_xport_s *xport, EFCT_XPORT_PORT_OFFLINE)
+ * efct_xport_control(struct efct_xport_s *xport, EFCT_XPORT_SHUTDOWN)
+ * efct_xport_control(struct efct_xport_s *xport, EFCT_XPORT_POST_NODE_EVENT,
+ *		     struct efc_node_s *node, efc_sm_event_e, void *context)
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+
+int
+efct_xport_control(struct efct_xport_s *xport, enum efct_xport_ctrl_e cmd, ...)
+{
+	u32 rc = 0;
+	struct efct_s *efct = NULL;
+	va_list argp;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_PORT_ONLINE: {
+		/* Bring the port on-line */
+		rc = efct_hw_port_control(&efct->hw, EFCT_HW_PORT_INIT, 0,
+					  NULL, NULL);
+		if (rc)
+			efc_log_err(efct,
+				     "%s: Can't init port\n", efct->desc);
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+	case EFCT_XPORT_PORT_OFFLINE: {
+		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+					 NULL, NULL))
+			efc_log_err(efct, "port shutdown failed\n");
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+
+	case EFCT_XPORT_SHUTDOWN: {
+		struct completion done;
+		u32 reset_required;
+		unsigned long timeout;
+
+		/* if a PHYSDEV reset was performed (e.g. hw dump), will affect
+		 * all PCI functions; orderly shutdown won't work,
+		 * just force free
+		 */
+		if (efct_hw_get(&efct->hw, EFCT_HW_RESET_REQUIRED,
+				&reset_required) != EFCT_HW_RTN_SUCCESS)
+			reset_required = 0;
+
+		if (reset_required) {
+			efc_log_debug(efct,
+				       "reset required, do force shutdown\n");
+			efct_xport_force_free(xport);
+			break;
+		}
+		init_completion(&done);
+
+		efc_register_domain_free_cb(efct->efcport,
+					efct_xport_domain_free_cb, &done);
+
+		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+					 NULL, NULL)) {
+			efc_log_debug(efct,
+				       "port shutdown failed, do force shutdown\n");
+			efct_xport_force_free(xport);
+		} else {
+			efc_log_debug(efct,
+				       "Waiting %d seconds for domain shutdown.\n",
+			(EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC / 1000000));
+
+			timeout = usecs_to_jiffies(
+					EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC);
+			if (!wait_for_completion_timeout(&done, timeout)) {
+				efc_log_debug(efct,
+					       "Domain shutdown timed out!!\n");
+				efct_xport_force_free(xport);
+			}
+		}
+
+		efc_register_domain_free_cb(efct->efcport, NULL, NULL);
+
+		/* Free up any saved virtual ports */
+		efc_vport_del_all(efct->efcport);
+		break;
+	}
+
+	/*
+	 * POST_NODE_EVENT:  post an event to a node object
+	 *
+	 * This transport function is used to post an event to a node object.
+	 * It does this by submitting a NOP mailbox command to defer execution
+	 * to the interrupt context (thereby enforcing the serialized execution
+	 * of event posting to the node state machine instances)
+	 */
+	case EFCT_XPORT_POST_NODE_EVENT: {
+		struct efc_node_s *node;
+		u32	evt;
+		void *context;
+		struct efct_xport_post_node_event_s *payload = NULL;
+		struct efct_s *efct;
+		struct efct_hw_s *hw;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		node = va_arg(argp, struct efc_node_s *);
+		evt = va_arg(argp, u32);
+		context = va_arg(argp, void *);
+		va_end(argp);
+
+		payload = kmalloc(sizeof(*payload), GFP_KERNEL);
+		if (!payload)
+			return -1;
+
+		memset(payload, 0, sizeof(*payload));
+
+		efct = node->efc->base;
+		hw = &efct->hw;
+
+		/* if node's state machine is disabled,
+		 * don't bother continuing
+		 */
+		if (!node->sm.current_state) {
+			efc_log_test(efct, "node %p state machine disabled\n",
+				      node);
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Setup payload */
+		init_completion(&payload->done);
+
+		/* one for self and one for callback */
+		atomic_set(&payload->refcnt, 2);
+		payload->node = node;
+		payload->evt = evt;
+		payload->context = context;
+
+		if (efct_hw_async_call(hw, efct_xport_post_node_event_cb,
+				       payload)) {
+			efc_log_test(efct, "efct_hw_async_call failed\n");
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Wait for completion */
+		if (wait_for_completion_interruptible(&payload->done)) {
+			efc_log_test(efct,
+				      "POST_NODE_EVENT: completion failed\n");
+			rc = -1;
+		}
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+
+		break;
+	}
+	/*
+	 * Set wwnn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWNN_SET: {
+		u64 wwnn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwnn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWNN %016llx\n", wwnn);
+		xport->req_wwnn = wwnn;
+
+		break;
+	}
+	/*
+	 * Set wwpn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWPN_SET: {
+		u64 wwpn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwpn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWPN %016llx\n", wwpn);
+		xport->req_wwpn = wwpn;
+
+		break;
+	}
+
+	default:
+		break;
+	}
+	return rc;
+}
+
+static void
+efct_xport_link_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_link_stat_counts_s *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_host_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_host_stat_counts_s *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_async_link_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_link_stat_counts_s *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+}
+
+static void
+efct_xport_async_host_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_host_stat_counts_s *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+}
+
+/**
+ * @brief Free a transport object.
+ *
+ * @par Description
+ * The transport object is freed.
+ *
+ * @param xport Pointer to transport object.
+ *
+ * @return None.
+ */
+
+void
+efct_xport_free(struct efct_xport_s *xport)
+{
+	if (xport) {
+		efct_io_pool_free(xport->io_pool);
+
+		kfree(xport);
+	}
+}
+
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template)
+{
+	if (!transport_template)
+		return;
+
+	pr_err("releasing transport layer\n");
+
+	/* Releasing FC transport */
+	fc_release_transport(transport_template);
+}
+
+/**
+ * @ingroup scsi_api_initiator
+ * @brief Remove host from transport.
+ *
+ * @par Description
+ * Function called by the SCSI mid-layer module to terminate any
+ * transport-related elements for a SCSI host.
+ *
+ * @return None.
+ */
+static void
+efct_xport_remove_host(struct Scsi_Host *shost)
+{
+	/*
+	 * Remove host from FC Transport layer
+	 *
+	 * 1. fc_remove_host()
+	 * a. for each vport: queue vport_delete_work (fc_vport_sched_delete())
+	 *	b. for each rport: queue rport_delete_work
+	 *		(fc_rport_final_delete())
+	 *	c. scsi_flush_work()
+	 * 2. fc_rport_final_delete()
+	 * a. fc_terminate_rport_io
+	 *		i. call LLDD's terminate_rport_io()
+	 *		ii. scsi_target_unblock()
+	 *	b. fc_starget_delete()
+	 *		i. fc_terminate_rport_io()
+	 *			1. call LLDD's terminate_rport_io()
+	 *			2. scsi_target_unblock()
+	 *		ii. scsi_remove_target()
+	 *      c. invoke LLDD devloss callback
+	 *      d. transport_remove_device(&rport->dev)
+	 *      e. device_del(&rport->dev)
+	 *      f. transport_destroy_device(&rport->dev)
+	 *      g. put_device(&shost->shost_gendev) (for fc_host->rport list)
+	 *      h. put_device(&rport->dev)
+	 */
+	fc_remove_host(shost);
+}
+
+int efct_scsi_del_device(struct efct_s *efct)
+{
+	if (efct->shost) {
+		efc_log_debug(efct, "Unregistering with Transport Layer\n");
+		efct_xport_remove_host(efct->shost);
+		efc_log_debug(efct, "Unregistering with SCSI Midlayer\n");
+		scsi_remove_host(efct->shost);
+		scsi_host_put(efct->shost);
+		efct->shost = NULL;
+	}
+	return 0;
+}
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 28/32] elx: efct: IO timeout handling routines
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (26 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 27/32] elx: efct: xport and hardware teardown routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 29/32] elx: efct: Firmware update, async link processing James Smart
                   ` (4 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Support for a WQE timer to handle WQE and IO timeouts.
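
For context, a minimal sketch of how an IO is expected to opt in to this
timeout is shown below; the helper name here is made up, and the actual
arming is done on the WQE submission path elsewhere in the series:

  static void efct_hw_io_arm_tgt_timeout(struct efct_hw_s *hw,
                                         struct efct_hw_io_s *io,
                                         u32 timeout_secs)
  {
          unsigned long flags;

          /* record the timeout and submit time checked by the WQE timer */
          io->tgt_wqe_timeout = timeout_secs;
          io->submit_ticks = jiffies_64;

          spin_lock_irqsave(&hw->io_lock, flags);
          list_add_tail(&io->wqe_link, &hw->io_timed_wqe);
          spin_unlock_irqrestore(&hw->io_lock, flags);
  }

The periodic timer then defers its work to a NOP mailbox completion and
walks hw->io_timed_wqe from that context, aborting any entry whose
elapsed time exceeds its tgt_wqe_timeout.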

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 209 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 209 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 48cdbeebd058..751edbd2ddf9 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -5716,3 +5716,212 @@ efct_hw_get_num_eq(struct efct_hw_s *hw)
 {
 	return hw->eq_count;
 }
+
+/**
+ * @brief HW async call context structure.
+ */
+struct efct_hw_async_call_ctx_s {
+	efct_hw_async_cb_t callback;
+	void *arg;
+	u8 cmd[SLI4_BMBX_SIZE];
+};
+
+/**
+ * @brief HW async callback handler
+ *
+ * @par Description
+ * This function is called when the NOP mbox cmd completes.  The callback stored
+ * in the requesting context is invoked.
+ *
+ * @param hw Pointer to HW object.
+ * @param status Completion status.
+ * @param mqe Pointer to mailbox completion queue entry.
+ * @param arg Caller-provided argument.
+ *
+ * @return None.
+ */
+static void
+efct_hw_async_cb(struct efct_hw_s *hw, int status, u8 *mqe, void *arg)
+{
+	struct efct_hw_async_call_ctx_s *ctx = arg;
+
+	if (ctx) {
+		if (ctx->callback)
+			(*ctx->callback)(hw, status, mqe, ctx->arg);
+
+		kfree(ctx);
+	}
+}
+
+/**
+ * @brief Make an async callback using NOP mailbox command
+ *
+ * @par Description
+ * Post a NOP mbox cmd; the callback with argument is invoked upon completion
+ * while in the event processing context.
+ *
+ * @param hw Pointer to HW object.
+ * @param callback Pointer to callback function.
+ * @param arg Caller-provided argument passed to the callback.
+ *
+ * @return Returns 0 on success, or a negative error code value on failure.
+ */
+int
+efct_hw_async_call(struct efct_hw_s *hw,
+		   efct_hw_async_cb_t callback, void *arg)
+{
+	int rc = 0;
+	struct efct_hw_async_call_ctx_s *ctx;
+
+	/*
+	 * Allocate a callback context (which includes the mbox cmd buffer);
+	 * it must be persistent because the mbox cmd submission may be
+	 * queued and executed later.
+	 */
+	ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+	if (!ctx)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(ctx, 0, sizeof(*ctx));
+	ctx->callback = callback;
+	ctx->arg = arg;
+
+	/* Build and send a NOP mailbox command */
+	if (sli_cmd_common_nop(&hw->sli, ctx->cmd,
+			       sizeof(ctx->cmd), 0)) {
+		efc_log_err(hw->os, "COMMON_NOP format failure\n");
+		kfree(ctx);
+		return -1;
+	}
+
+	if (efct_hw_command(hw, ctx->cmd, EFCT_CMD_NOWAIT, efct_hw_async_cb,
+			    ctx)) {
+		efc_log_err(hw->os, "COMMON_NOP command failure\n");
+		kfree(ctx);
+		rc = -1;
+	}
+	return rc;
+}
+
+static int
+target_wqe_timer_nop_cb(struct efct_hw_s *hw, int status,
+			u8 *mqe, void *arg)
+{
+	struct efct_hw_io_s *io = NULL;
+	struct efct_hw_io_s *io_next = NULL;
+	u64 ticks_current = jiffies_64;
+	u32 sec_elapsed;
+	struct sli4_mbox_command_header_s *hdr =
+				(struct sli4_mbox_command_header_s *)mqe;
+	unsigned long flags = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status st=%x hdr=%x\n",
+			       status,
+			       le16_to_cpu(hdr->status));
+		/* go ahead and proceed with wqe timer checks... */
+	}
+
+	/* loop through active WQE list and check for timeouts */
+	spin_lock_irqsave(&hw->io_lock, flags);
+	list_for_each_entry_safe(io, io_next, &hw->io_timed_wqe, wqe_link) {
+		sec_elapsed = ((u32)(ticks_current - io->submit_ticks) / HZ);
+
+		/*
+		 * If elapsed time > timeout, abort it. No need to check type
+		 * since it wouldn't be on this list unless it was a target WQE
+		 */
+		if (sec_elapsed > io->tgt_wqe_timeout) {
+			efc_log_test(hw->os,
+				      "IO timeout xri=0x%x tag=0x%x type=%d\n",
+				     io->indicator, io->reqtag, io->type);
+
+			/*
+			 * remove from active_wqe list so won't try to abort
+			 * again
+			 */
+			list_del(&io->wqe_link);
+
+			/* save status of timed_out for when abort completes */
+			io->status_saved = true;
+			io->saved_status =
+					 SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT;
+			io->saved_ext = 0;
+			io->saved_len = 0;
+
+			/* now abort outstanding IO */
+			efct_hw_io_abort(hw, io, false, NULL, NULL);
+		}
+		/*
+		 * need to go through entire list since each IO could have a
+		 * different timeout value
+		 */
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	/* if we're not in the middle of shutting down, schedule next timer */
+	if (!hw->active_wqe_timer_shutdown) {
+		timer_setup(&hw->wqe_timer,
+			    &target_wqe_timer_cb, 0);
+
+		mod_timer(&hw->wqe_timer,
+			  jiffies +
+			  msecs_to_jiffies(EFCT_HW_WQ_TIMER_PERIOD_MS));
+	}
+	hw->in_active_wqe_timer = false;
+	return 0;
+}
+
+static void
+target_wqe_timer_cb(struct timer_list *t)
+{
+	struct efct_hw_s *hw = from_timer(hw, t, wqe_timer);
+
+	/*
+	 * delete existing timer; will kick off new timer after checking wqe
+	 * timeouts
+	 */
+	hw->in_active_wqe_timer = true;
+	del_timer(&hw->wqe_timer);
+
+	/*
+	 * Forward timer callback to execute in the mailbox completion
+	 * processing context
+	 */
+	if (efct_hw_async_call(hw, target_wqe_timer_nop_cb, hw))
+		efc_log_test(hw->os, "efct_hw_async_call failed\n");
+}
+
+static void
+shutdown_target_wqe_timer(struct efct_hw_s *hw)
+{
+	u32	iters = 100;
+
+	if (hw->config.emulate_tgt_wqe_timeout) {
+		/*
+		 * request active wqe timer shutdown, then wait for it to
+		 * complete
+		 */
+		hw->active_wqe_timer_shutdown = true;
+
+		/*
+		 * delete WQE timer and wait for timer handler to complete
+		 * (if necessary)
+		 */
+		del_timer(&hw->wqe_timer);
+
+		/* now wait for timer handler to complete (if necessary) */
+		while (hw->in_active_wqe_timer && iters) {
+			/*
+			 * if we happen to have just sent a NOP mbox cmd, make
+			 * sure completions are being processed
+			 */
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (iters == 0)
+			efc_log_test(hw->os,
+				      "Failed to shutdown active wqe timer\n");
+	}
+}
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 29/32] elx: efct: Firmware update, async link processing
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (27 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 28/32] elx: efct: IO timeout handling routines James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 30/32] elx: efct: scsi_transport_fc host interface support James Smart
                   ` (3 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Handling of async link events.
Registrations for VFI, VPI and RPI.
Firmware update helper routines.
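
As a rough usage sketch, the alloc/attach pairs added here are expected
to be driven by the libefc discovery state machines along these lines;
the wrapper function is purely illustrative and error handling is
omitted:

  static void efct_registration_sketch(struct efc_lport *efc,
                                       struct efc_domain_s *domain,
                                       struct efc_sli_port_s *sport,
                                       struct efc_remote_node_s *rnode,
                                       struct efc_dma_s *sparms,
                                       u32 fcf, u32 fc_id, u8 *wwpn)
  {
          /* fabric domain: allocate a VFI and INIT_VFI, later REG_VFI */
          efct_hw_domain_alloc(efc, domain, fcf);
          efct_hw_domain_attach(efc, domain, fc_id);

          /* SLI port: allocate a VPI, later REG_VPI with the assigned FC ID */
          efct_hw_port_alloc(efc, sport, domain, wwpn);
          efct_hw_port_attach(efc, sport, fc_id);

          /* remote node: allocate an RPI, later REG_RPI with service params */
          efct_hw_node_alloc(efc, rnode, fc_id, sport);
          efct_hw_node_attach(efc, rnode, sparms);
  }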

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 1939 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |   71 ++
 2 files changed, 2010 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 751edbd2ddf9..d285d42db187 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -48,6 +48,12 @@ struct efct_hw_host_stat_cb_arg {
 	void *arg;
 };
 
+struct efct_hw_fw_wr_cb_arg {
+	void (*cb)(int status, u32 bytes_written,
+		   u32 change_status, void *arg);
+	void *arg;
+};
+
 static int
 efct_hw_cb_sfp(struct efct_hw_s *, int, u8 *, void  *);
 static int
@@ -106,6 +112,42 @@ efct_hw_wq_process_abort(void *arg, u8 *cqe, int status);
 static void
 hw_wq_submit_pending(struct hw_wq_s *wq, u32 update_free_count);
 
+static int
+efct_hw_cb_link(void *, void *);
+static int
+__efct_read_topology_cb(struct efct_hw_s *, int, u8 *, void *);
+static enum efct_hw_rtn_e
+efct_hw_firmware_write_sli4_intf_2(struct efct_hw_s *hw, struct efc_dma_s *dma,
+				   u32 size, u32 offset, int last,
+			void (*cb)(int status, u32 bytes_written,
+				   u32 change_status, void *arg),
+			void *arg);
+static int
+efct_hw_cb_fw_write(struct efct_hw_s *, int, u8 *, void  *);
+
+static int
+efct_hw_cb_node_attach(struct efct_hw_s *, int, u8 *, void *);
+static int
+efct_hw_cb_node_free(struct efct_hw_s *, int, u8 *, void *);
+static int
+efct_hw_cb_node_free_all(struct efct_hw_s *, int, u8 *, void *);
+
+static void
+efct_hw_port_alloc_read_sparm64(struct efc_sli_port_s *sport, void *data);
+static void
+efct_hw_port_alloc_init_vpi(struct efc_sli_port_s *sport, void *data);
+static void
+efct_hw_port_attach_reg_vpi(struct efc_sli_port_s *sport, void *data);
+static void
+efct_hw_port_free_unreg_vpi(struct efc_sli_port_s *sport, void *data);
+
+static void
+efct_hw_domain_alloc_init_vfi(struct efc_domain_s *domain, void *data);
+static void
+efct_hw_domain_attach_reg_vfi(struct efc_domain_s *domain, void *data);
+static void
+efct_hw_domain_free_unreg_vfi(struct efc_domain_s *domain, void *data);
+
 static enum efct_hw_rtn_e
 efct_hw_link_event_init(struct efct_hw_s *hw)
 {
@@ -5925,3 +5967,1900 @@ shutdown_target_wqe_timer(struct efct_hw_s *hw)
 				      "Failed to shutdown active wqe timer\n");
 	}
 }
+
+/**
+ * @ingroup port
+ * @brief Allocate a port object.
+ *
+ * @par Description
+ * This function allocates a VPI object for the port and stores it in the
+ * indicator field of the port object.
+ *
+ * @param hw Hardware context.
+ * @param sport SLI port object used to connect to the domain.
+ * @param domain Domain object associated with this port (may be NULL).
+ * @param wwpn Port's WWPN in big-endian order, or NULL to use default.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_port_alloc(struct efc_lport *efc, struct efc_sli_port_s *sport,
+		   struct efc_domain_s *domain, u8 *wwpn)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	u8	*cmd = NULL;
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+	u32 index;
+
+	sport->indicator = U32_MAX;
+	sport->hw = hw;
+	sport->free_req_pending = false;
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (wwpn)
+		memcpy(&sport->sli_wwpn, wwpn, sizeof(sport->sli_wwpn));
+
+	if (sli_resource_alloc(&hw->sli, SLI_RSRC_VPI,
+			       &sport->indicator, &index)) {
+		efc_log_err(hw->os, "VPI allocation failure\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (domain) {
+		cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!cmd) {
+			rc = EFCT_HW_RTN_NO_MEMORY;
+			goto efct_hw_port_alloc_out;
+		}
+		memset(cmd, 0, SLI4_BMBX_SIZE);
+
+		/*
+		 * If the WWPN is NULL, fetch the default
+		 * WWPN and WWNN before initializing the VPI
+		 */
+		if (!wwpn)
+			efct_hw_port_alloc_read_sparm64(sport, cmd);
+		else
+			efct_hw_port_alloc_init_vpi(sport, cmd);
+	} else if (!wwpn) {
+		/* This is the convention for the HW, not SLI */
+		efc_log_test(hw->os, "need WWN for physical port\n");
+		rc = EFCT_HW_RTN_ERROR;
+	}
+	/* domain NULL and wwpn non-NULL: nothing to do */
+
+efct_hw_port_alloc_out:
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		kfree(cmd);
+
+		sli_resource_free(&hw->sli, SLI_RSRC_VPI,
+				  sport->indicator);
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup port
+ * @brief Attach a physical/virtual SLI port to a domain.
+ *
+ * @par Description
+ * This function registers a previously-allocated VPI with the
+ * device.
+ *
+ * @param hw Hardware context.
+ * @param sport Pointer to the SLI port object.
+ * @param fc_id Fibre Channel ID to associate with this port.
+ *
+ * @return Returns EFCT_HW_RTN_SUCCESS on success, or an error code on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_port_attach(struct efc_lport *efc, struct efc_sli_port_s *sport,
+		    u32 fc_id)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	u8	*buf = NULL;
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!sport) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p sport=%p\n", hw,
+			sport);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	sport->fc_id = fc_id;
+	efct_hw_port_attach_reg_vpi(sport, buf);
+	return rc;
+}
+
+/**
+ * @ingroup port
+ * @brief Free port resources.
+ *
+ * @par Description
+ * Issue the UNREG_VPI command to free the assigned VPI context.
+ *
+ * @param hw Hardware context.
+ * @param sport SLI port object used to connect to the domain.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_port_free(struct efc_lport *efc, struct efc_sli_port_s *sport)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!sport) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p sport=%p\n", hw,
+			sport);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (sport->attached)
+		efct_hw_port_free_unreg_vpi(sport, NULL);
+	else
+		sport->free_req_pending = true;
+
+	return rc;
+}
+
+/**
+ * @ingroup domain
+ * @brief Allocate a fabric domain object.
+ *
+ * @par Description
+ * This function starts a series of commands needed to connect to the domain,
+ * including
+ *   - REG_FCFI
+ *   - INIT_VFI
+ *   - READ_SPARMS
+ *   .
+ * @b Note: Not all SLI interface types use all of the above commands.
+ * @n @n Upon successful allocation, the HW generates an EFC_HW_DOMAIN_ALLOC_OK
+ * event. On failure, it generates an EFC_HW_DOMAIN_ALLOC_FAIL event.
+ *
+ * @param hw Hardware context.
+ * @param domain Pointer to the domain object.
+ * @param fcf FCF index.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_domain_alloc(struct efc_lport *efc, struct efc_domain_s *domain,
+		     u32 fcf)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+	u8 *cmd = NULL;
+	u32 index;
+
+	if (!domain || !domain->sport) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p domain=%p sport=%p\n",
+			    hw, domain, domain ? domain->sport : NULL);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(efct,
+			     "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!cmd)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(cmd, 0, SLI4_BMBX_SIZE);
+
+	/* allocate memory for the service parameters */
+	domain->dma.size = 112;
+	domain->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					      domain->dma.size,
+					      &domain->dma.phys, GFP_DMA);
+	if (!domain->dma.virt) {
+		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
+		kfree(cmd);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	domain->hw = hw;
+	domain->fcf = fcf;
+	domain->fcf_indicator = U32_MAX;
+	domain->indicator = U32_MAX;
+
+	if (sli_resource_alloc(&hw->sli,
+			       SLI_RSRC_VFI, &domain->indicator,
+				    &index)) {
+		efc_log_err(hw->os, "VFI allocation failure\n");
+
+		kfree(cmd);
+		dma_free_coherent(&efct->pcidev->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma_s));
+
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_domain_alloc_init_vfi(domain, cmd);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup domain
+ * @brief Attach a SLI port to a domain.
+ *
+ * @param hw Hardware context.
+ * @param domain Pointer to the domain object.
+ * @param fc_id Fibre Channel ID to associate with this port.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_domain_attach(struct efc_lport *efc,
+		      struct efc_domain_s *domain, u32 fc_id)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	u8	*buf = NULL;
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!domain) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p domain=%p\n",
+			hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	domain->sport->fc_id = fc_id;
+	efct_hw_domain_attach_reg_vfi(domain, buf);
+	return rc;
+}
+
+/**
+ * @ingroup domain
+ * @brief Free a fabric domain object.
+ *
+ * @par Description
+ * Free both the driver and SLI port resources associated with the domain.
+ *
+ * @param hw Hardware context.
+ * @param domain Pointer to the domain object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_domain_free(struct efc_lport *efc, struct efc_domain_s *domain)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!domain) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p domain=%p\n",
+			hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_domain_free_unreg_vfi(domain, NULL);
+	return rc;
+}
+
+/**
+ * @ingroup domain
+ * @brief Free a fabric domain object.
+ *
+ * @par Description
+ * Free the driver resources associated with the domain. The difference between
+ * this call and efct_hw_domain_free() is that this call assumes resources no
+ * longer exist on the SLI port, due to a reset or after some error conditions.
+ *
+ * @param hw Hardware context.
+ * @param domain Pointer to the domain object.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_domain_force_free(struct efc_lport *efc, struct efc_domain_s *domain)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	if (!domain) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p domain=%p\n", hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  domain->dma.size, domain->dma.virt, domain->dma.phys);
+	memset(&domain->dma, 0, sizeof(struct efc_dma_s));
+	sli_resource_free(&hw->sli, SLI_RSRC_VFI,
+			  domain->indicator);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup node
+ * @brief Allocate a remote node object.
+ *
+ * @param hw Hardware context.
+ * @param rnode Allocated remote node object to initialize.
+ * @param fc_addr FC address of the remote node.
+ * @param sport SLI port used to connect to remote node.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_node_alloc(struct efc_lport *efc, struct efc_remote_node_s *rnode,
+		   u32 fc_addr, struct efc_sli_port_s *sport)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	/* Check for invalid indicator */
+	if (rnode->indicator != U32_MAX) {
+		efc_log_err(hw->os,
+			     "RPI allocation failure addr=%#x rpi=%#x\n",
+			    fc_addr, rnode->indicator);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* NULL SLI port indicates an unallocated remote node */
+	rnode->sport = NULL;
+
+	if (sli_resource_alloc(&hw->sli, SLI_RSRC_RPI,
+			       &rnode->indicator, &rnode->index)) {
+		efc_log_err(hw->os, "RPI allocation failure addr=%#x\n",
+			     fc_addr);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rnode->fc_id = fc_addr;
+	rnode->sport = sport;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * @ingroup node
+ * @brief Update a remote node object with the remote port's service parameters.
+ *
+ * @param hw Hardware context.
+ * @param rnode Allocated remote node object to initialize.
+ * @param sparms DMA buffer containing the remote port's service parameters.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_node_attach(struct efc_lport *efc, struct efc_remote_node_s *rnode,
+		    struct efc_dma_s *sparms)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_ERROR;
+	u8		*buf = NULL;
+	u32	count = 0;
+
+	if (!hw || !rnode || !sparms) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p rnode=%p sparms=%p\n",
+			    hw, rnode, sparms);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	/*
+	 * If the attach count is non-zero, this RPI has already been reg'd.
+	 * Otherwise, register the RPI
+	 */
+	if (rnode->index == U32_MAX) {
+		efc_log_err(efct, "bad parameter rnode->index invalid\n");
+		kfree(buf);
+		return EFCT_HW_RTN_ERROR;
+	}
+	count = atomic_add_return(1, &hw->rpi_ref[rnode->index].rpi_count);
+	count--;
+	if (count) {
+		/*
+		 * Can't attach multiple FC_ID's to a node unless High Login
+		 * Mode is enabled
+		 */
+		if (!hw->sli.high_login_mode) {
+			efc_log_test(hw->os,
+				      "attach to attached node HLM=%d cnt=%d\n",
+				     hw->sli.high_login_mode, count);
+			rc = EFCT_HW_RTN_SUCCESS;
+		} else {
+			rnode->node_group = true;
+			rnode->attached =
+			 atomic_read(&hw->rpi_ref[rnode->index].rpi_attached);
+			rc = rnode->attached  ? EFCT_HW_RTN_SUCCESS_SYNC :
+							 EFCT_HW_RTN_SUCCESS;
+		}
+	} else {
+		rnode->node_group = false;
+
+		if (!sli_cmd_reg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    rnode->fc_id,
+				    rnode->indicator, rnode->sport->indicator,
+				    sparms, 0, 0))
+			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_node_attach, rnode);
+	}
+
+	if (count || rc) {
+		if (rc < EFCT_HW_RTN_SUCCESS) {
+			atomic_sub_return(1,
+					  &hw->rpi_ref[rnode->index].rpi_count);
+			efc_log_err(hw->os,
+				     "%s error\n", count ? "HLM" : "REG_RPI");
+		}
+		kfree(buf);
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup node
+ * @brief Free a remote node resource.
+ *
+ * @param hw Hardware context.
+ * @param rnode Remote node object to free.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_node_free_resources(struct efc_lport *efc,
+			    struct efc_remote_node_s *rnode)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!hw || !rnode) {
+		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
+			     hw, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (rnode->sport) {
+		if (rnode->attached) {
+			efc_log_err(hw->os, "Err: rnode is still attached\n");
+			return EFCT_HW_RTN_ERROR;
+		}
+		if (rnode->indicator != U32_MAX) {
+			if (sli_resource_free(&hw->sli, SLI_RSRC_RPI,
+					      rnode->indicator)) {
+				efc_log_err(hw->os,
+					     "RPI free fail RPI %d addr=%#x\n",
+					    rnode->indicator,
+					    rnode->fc_id);
+				rc = EFCT_HW_RTN_ERROR;
+			} else {
+				rnode->node_group = false;
+				rnode->indicator = U32_MAX;
+				rnode->index = U32_MAX;
+				rnode->free_group = false;
+			}
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup node
+ * @brief Free a remote node object.
+ *
+ * @param hw Hardware context.
+ * @param rnode Remote node object to free.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_node_detach(struct efc_lport *efc, struct efc_remote_node_s *rnode)
+{
+	struct efct_s *efct = efc->base;
+	struct efct_hw_s *hw = &efct->hw;
+	u8	*buf = NULL;
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_SUCCESS_SYNC;
+	u32	index = U32_MAX;
+
+	if (!hw || !rnode) {
+		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
+			     hw, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	index = rnode->index;
+
+	if (rnode->sport) {
+		u32	count = 0;
+		u32	fc_id;
+
+		if (!rnode->attached)
+			return EFCT_HW_RTN_SUCCESS_SYNC;
+
+		buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!buf)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		memset(buf, 0, SLI4_BMBX_SIZE);
+		count = atomic_sub_return(1, &hw->rpi_ref[index].rpi_count);
+		count++;
+		if (count <= 1) {
+			/*
+			 * There are no other references to this RPI so
+			 * unregister it
+			 */
+			fc_id = U32_MAX;
+			/* and free the resource */
+			rnode->node_group = false;
+			rnode->free_group = true;
+		} else {
+			if (!hw->sli.high_login_mode)
+				efc_log_test(hw->os,
+					      "Inval cnt with HLM off, cnt=%d\n",
+					     count);
+			fc_id = rnode->fc_id & 0x00ffffff;
+		}
+
+		rc = EFCT_HW_RTN_ERROR;
+
+		if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
+				      rnode->indicator,
+				      SLI_RSRC_RPI, fc_id))
+			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_node_free, rnode);
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "UNREG_RPI failed\n");
+			kfree(buf);
+			rc = EFCT_HW_RTN_ERROR;
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * @ingroup node
+ * @brief Free all remote node objects.
+ *
+ * @param hw Hardware context.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_node_free_all(struct efct_hw_s *hw)
+{
+	u8	*buf = NULL;
+	enum efct_hw_rtn_e	rc = EFCT_HW_RTN_ERROR;
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE, 0xffff,
+			      SLI_RSRC_FCFI, U32_MAX))
+		rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_node_free_all,
+				     NULL);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(hw->os, "UNREG_RPI failed\n");
+		kfree(buf);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+struct efct_hw_get_nvparms_cb_arg_s {
+	void (*cb)(int status,
+		   u8 *wwpn, u8 *wwnn,
+		u8 hard_alpa, u32 preferred_d_id,
+		void *arg);
+	void *arg;
+};
+
+/**
+ * @brief Called for the completion of get_nvparms for a
+ *        user request.
+ *
+ * @param hw Hardware context.
+ * @param status The status from the MQE.
+ * @param mqe Pointer to mailbox command buffer.
+ * @param arg Pointer to a callback argument.
+ *
+ * @return 0 on success, non-zero otherwise
+ */
+static int
+efct_hw_get_nvparms_cb(struct efct_hw_s *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efct_hw_get_nvparms_cb_arg_s *cb_arg = arg;
+	struct sli4_cmd_read_nvparms_s *mbox_rsp =
+			(struct sli4_cmd_read_nvparms_s *)mqe;
+	u8 hard_alpa;
+	u32 preferred_d_id;
+
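+	/*
+	 * The hard AL_PA is in the low byte of hard_alpa_d_id; the
+	 * preferred D_ID occupies the upper 24 bits.
+	 */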
+	hard_alpa = le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
+				SLI4_READ_NVPARAMS_HARD_ALPA;
+	preferred_d_id = (le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
+			  SLI4_READ_NVPARAMS_PREFERRED_D_ID) >> 8;
+	if (cb_arg->cb)
+		cb_arg->cb(status, mbox_rsp->wwpn, mbox_rsp->wwnn,
+			   hard_alpa, preferred_d_id,
+			   cb_arg->arg);
+
+	kfree(mqe);
+	kfree(cb_arg);
+
+	return 0;
+}
+
+/**
+ * @ingroup io
+ * @brief  Read non-volatile parms.
+ * @par Description
+ * Issues a SLI-4 READ_NVPARMS mailbox. When the
+ * command completes the provided mgmt callback function is
+ * called.
+ *
+ * @param hw Hardware context.
+ * @param cb Callback function to be called when the
+ *	  command completes.
+ * @param ul_arg An argument that is passed to the callback
+ *	  function.
+ *
+ * @return
+ * - EFCT_HW_RTN_SUCCESS on success.
+ * - EFCT_HW_RTN_NO_MEMORY if a malloc fails.
+ * - EFCT_HW_RTN_NO_RESOURCES if unable to get a command
+ *   context.
+ * - EFCT_HW_RTN_ERROR on any other error.
+ */
+enum efct_hw_rtn_e
+efct_hw_get_nvparms(struct efct_hw_s *hw,
+		    void (*cb)(int status, u8 *wwpn,
+			       u8 *wwnn, u8 hard_alpa,
+			       u32 preferred_d_id, void *arg),
+		    void *ul_arg)
+{
+	u8 *mbxdata;
+	struct efct_hw_get_nvparms_cb_arg_s *cb_arg;
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = ul_arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_read_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_get_nvparms_cb, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "READ_NVPARMS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
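+/*
+ * Usage sketch (illustrative only; nvparms_done() and my_ctx are assumed
+ * caller-side names, not something this patch defines):
+ *
+ *	static void nvparms_done(int status, u8 *wwpn, u8 *wwnn,
+ *				 u8 hard_alpa, u32 preferred_d_id, void *arg)
+ *	{
+ *		if (!status)
+ *			pr_info("WWPN %8phN WWNN %8phN\n", wwpn, wwnn);
+ *	}
+ *
+ *	rc = efct_hw_get_nvparms(&efct->hw, nvparms_done, my_ctx);
+ *
+ * The callback runs later from mailbox completion context, so my_ctx must
+ * stay valid until it fires.
+ */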
+
+struct efct_hw_set_nvparms_cb_arg_s {
+	void (*cb)(int status, void *arg);
+	void *arg;
+};
+
+/**
+ * @brief Called for the completion of set_nvparms for a
+ *        user request.
+ *
+ * @param hw Hardware context.
+ * @param status The status from the MQE.
+ * @param mqe Pointer to mailbox command buffer.
+ * @param arg Pointer to a callback argument.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+efct_hw_set_nvparms_cb(struct efct_hw_s *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efct_hw_set_nvparms_cb_arg_s *cb_arg = arg;
+
+	if (cb_arg->cb)
+		cb_arg->cb(status, cb_arg->arg);
+
+	kfree(mqe);
+	kfree(cb_arg);
+
+	return 0;
+}
+
+/**
+ * @ingroup io
+ * @brief  Write non-volatile parms.
+ * @par Description
+ * Issues a SLI-4 WRITE_NVPARMS mailbox. When the
+ * command completes the provided mgmt callback function is
+ * called.
+ *
+ * @param hw Hardware context.
+ * @param cb Callback function to be called when the
+ *	  command completes.
+ * @param wwpn Port's WWPN in big-endian order, or NULL to use default.
+ * @param wwnn Port's WWNN in big-endian order, or NULL to use default.
+ * @param hard_alpa A hard AL_PA address setting used during loop
+ * initialization. If no hard AL_PA is required, set to 0.
+ * @param preferred_d_id A preferred D_ID address setting
+ * that may be overridden with the CONFIG_LINK mailbox command.
+ * If there is no preference, set to 0.
+ * @param ul_arg An argument that is passed to the callback
+ *	  function.
+ *
+ * @return
+ * - EFCT_HW_RTN_SUCCESS on success.
+ * - EFCT_HW_RTN_NO_MEMORY if a malloc fails.
+ * - EFCT_HW_RTN_NO_RESOURCES if unable to get a command
+ *   context.
+ * - EFCT_HW_RTN_ERROR on any other error.
+ */
+enum efct_hw_rtn_e
+efct_hw_set_nvparms(struct efct_hw_s *hw,
+		    void (*cb)(int status, void *arg),
+		u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		u32 preferred_d_id,
+		void *ul_arg)
+{
+	u8 *mbxdata;
+	struct efct_hw_set_nvparms_cb_arg_s *cb_arg;
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_SUCCESS;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = ul_arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_write_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE, wwpn,
+				  wwnn, hard_alpa, preferred_d_id))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_set_nvparms_cb, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "SET_NVPARMS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+/**
+ * @brief Callback function for the SLI link events.
+ *
+ * @par Description
+ * This function allocates memory which must be freed in its callback.
+ *
+ * @param ctx Hardware context pointer (that is, struct efct_hw_s *).
+ * @param e Event structure pointer (that is, struct sli4_link_event_s *).
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+efct_hw_cb_link(void *ctx, void *e)
+{
+	struct efct_hw_s	*hw = ctx;
+	struct sli4_link_event_s *event = e;
+	struct efc_domain_s	*d = NULL;
+	int		rc = EFCT_HW_RTN_ERROR;
+	struct efct_s	*efct = hw->os;
+	struct efc_dma_s *dma;
+
+	efct_hw_link_event_init(hw);
+
+	switch (event->status) {
+	case SLI_LINK_STATUS_UP:
+
+		hw->link = *event;
+		efct->efcport->link_status = EFC_LINK_STATUS_UP;
+
+		if (event->topology == SLI_LINK_TOPO_NPORT) {
+			struct efc_domain_record_s drec = {0};
+
+			efc_log_info(hw->os, "Link Up, NPORT, speed is %d\n",
+				      event->speed);
+			drec.speed = event->speed;
+			drec.fc_id = event->fc_id;
+			drec.is_nport = true;
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND,
+				      &drec);
+		} else if (event->topology == SLI_LINK_TOPO_LOOP) {
+			u8	*buf = NULL;
+
+			efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
+				      event->speed);
+			dma = &hw->loop_map;
+			dma->size = SLI4_MIN_LOOP_MAP_BYTES;
+			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       dma->size, &dma->phys,
+						       GFP_DMA);
+			if (!dma->virt)
+				efc_log_err(hw->os, "efct_dma_alloc_fail\n");
+
+			buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+			if (!buf)
+				break;
+
+			if (!sli_cmd_read_topology(&hw->sli, buf,
+						  SLI4_BMBX_SIZE,
+						       &hw->loop_map)) {
+				rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+						     __efct_read_topology_cb,
+						     NULL);
+			}
+
+			if (rc != EFCT_HW_RTN_SUCCESS) {
+				efc_log_test(hw->os, "READ_TOPOLOGY failed\n");
+				kfree(buf);
+			}
+		} else {
+			efc_log_info(hw->os, "%s(%#x), speed is %d\n",
+				      "Link Up, unsupported topology ",
+				     event->topology, event->speed);
+		}
+		break;
+	case SLI_LINK_STATUS_DOWN:
+		efc_log_info(hw->os, "Link down\n");
+
+		hw->link.status = event->status;
+		efct->efcport->link_status = EFC_LINK_STATUS_DOWN;
+
+		d = hw->domain;
+		if (d)
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST, d);
+		break;
+	default:
+		efc_log_test(hw->os, "unhandled link status %#x\n",
+			      event->status);
+		break;
+	}
+
+	return 0;
+}
+
+static int
+efct_hw_cb_node_attach(struct efct_hw_s *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efc_remote_node_s *rnode = arg;
+	struct sli4_mbox_command_header_s *hdr =
+				(struct sli4_mbox_command_header_s *)mqe;
+	enum efc_hw_remote_node_event_e	evt = 0;
+
+	struct efct_s   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+		atomic_sub_return(1, &hw->rpi_ref[rnode->index].rpi_count);
+		rnode->attached = false;
+		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 0);
+		evt = EFC_HW_NODE_ATTACH_FAIL;
+	} else {
+		rnode->attached = true;
+		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 1);
+		evt = EFC_HW_NODE_ATTACH_OK;
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, rnode);
+
+	kfree(mqe);
+
+	return 0;
+}
+
+static int
+efct_hw_cb_node_free(struct efct_hw_s *hw,
+		     int status, u8 *mqe, void *arg)
+{
+	struct efc_remote_node_s *rnode = arg;
+	struct sli4_mbox_command_header_s *hdr =
+				(struct sli4_mbox_command_header_s *)mqe;
+	enum efc_hw_remote_node_event_e evt = EFC_HW_NODE_FREE_FAIL;
+	int		rc = 0;
+	struct efct_s   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+
+		/*
+		 * In certain cases, a non-zero MQE status is OK (all must be
+		 * true):
+		 *   - node is attached
+		 *   - if High Login Mode is enabled, node is part of a node
+		 * group
+		 *   - status is 0x1400
+		 */
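+		/*
+		 * The check below is the inverse of the list above: rc is
+		 * left at 0 only when the node is attached, HLM is either
+		 * disabled or the node belongs to a node group, and the
+		 * mailbox status is MBX_STATUS_RPI_NOT_REG (0x1400).
+		 */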
+		if (!rnode->attached ||
+		    (hw->sli.high_login_mode && !rnode->node_group) ||
+				(le16_to_cpu(hdr->status) !=
+				 MBX_STATUS_RPI_NOT_REG))
+			rc = -1;
+	}
+
+	if (rc == 0) {
+		rnode->node_group = false;
+		rnode->attached = false;
+
+		if (atomic_read(&hw->rpi_ref[rnode->index].rpi_count) == 0)
+			atomic_set(&hw->rpi_ref[rnode->index].rpi_attached,
+				   0);
+		 evt = EFC_HW_NODE_FREE_OK;
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, rnode);
+
+	kfree(mqe);
+
+	return rc;
+}
+
+static int
+efct_hw_cb_node_free_all(struct efct_hw_s *hw, int status, u8 *mqe,
+			 void *arg)
+{
+	struct sli4_mbox_command_header_s *hdr =
+				(struct sli4_mbox_command_header_s *)mqe;
+	enum efc_hw_remote_node_event_e evt = EFC_HW_NODE_FREE_FAIL;
+	int		rc = 0;
+	u32	i;
+	struct efct_s   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+	} else {
+		evt = EFC_HW_NODE_FREE_ALL_OK;
+	}
+
+	if (evt == EFC_HW_NODE_FREE_ALL_OK) {
+		for (i = 0; i < hw->sli.extent[SLI_RSRC_RPI].size;
+		     i++)
+			atomic_set(&hw->rpi_ref[i].rpi_count, 0);
+
+		if (sli_resource_reset(&hw->sli, SLI_RSRC_RPI)) {
+			efc_log_test(hw->os, "RPI free all failure\n");
+			rc = -1;
+		}
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, NULL);
+
+	kfree(mqe);
+
+	return rc;
+}
+
+static int
+__efct_read_topology_cb(struct efct_hw_s *hw, int status,
+			u8 *mqe, void *arg)
+{
+	struct sli4_cmd_read_topology_s *read_topo =
+				(struct sli4_cmd_read_topology_s *)mqe;
+	u8 speed;
+	struct efc_domain_record_s drec = {0};
+	struct efct_s *efct = hw->os;
+
+	if (status || le16_to_cpu(read_topo->hdr.status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n",
+			       status,
+			       le16_to_cpu(read_topo->hdr.status));
+		kfree(mqe);
+		return -1;
+	}
+
+	switch (le32_to_cpu(read_topo->dw2_attentype) &
+		SLI4_READTOPO_ATTEN_TYPE) {
+	case SLI4_READ_TOPOLOGY_LINK_UP:
+		hw->link.status = SLI_LINK_STATUS_UP;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_DOWN:
+		hw->link.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_NO_ALPA:
+		hw->link.status = SLI_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		hw->link.status = SLI_LINK_STATUS_MAX;
+		break;
+	}
+
+	switch (read_topo->topology) {
+	case SLI4_READ_TOPOLOGY_NPORT:
+		hw->link.topology = SLI_LINK_TOPO_NPORT;
+		break;
+	case SLI4_READ_TOPOLOGY_FC_AL:
+		hw->link.topology = SLI_LINK_TOPO_LOOP;
+		if (hw->link.status == SLI_LINK_STATUS_UP)
+			hw->link.loop_map = hw->loop_map.virt;
+		hw->link.fc_id = read_topo->acquired_al_pa;
+		break;
+	default:
+		hw->link.topology = SLI_LINK_TOPO_MAX;
+		break;
+	}
+
+	hw->link.medium = SLI_LINK_MEDIUM_FC;
+
+	speed = (le32_to_cpu(read_topo->currlink_state) &
+		 SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
+	switch (speed) {
+	case SLI4_READ_TOPOLOGY_SPEED_1G:
+		hw->link.speed =  1 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_2G:
+		hw->link.speed =  2 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_4G:
+		hw->link.speed =  4 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_8G:
+		hw->link.speed =  8 * 1000;
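+	/*
+	 * Arbitrated loop is not supported at 16G and faster, so any
+	 * previously captured loop map is meaningless at these speeds.
+	 */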
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_16G:
+		hw->link.speed = 16 * 1000;
+		hw->link.loop_map = NULL;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_32G:
+		hw->link.speed = 32 * 1000;
+		hw->link.loop_map = NULL;
+		break;
+	}
+
+	kfree(mqe);
+
+	drec.speed = hw->link.speed;
+	drec.fc_id = hw->link.fc_id;
+	drec.is_nport = true;
+	efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND, &drec);
+
+	return 0;
+}
+
+static int
+efct_hw_port_get_mbox_status(struct efc_sli_port_s *sport,
+			     u8 *mqe, int status)
+{
+	struct efct_hw_s *hw = sport->hw;
+	struct sli4_mbox_command_header_s *hdr =
+			(struct sli4_mbox_command_header_s *)mqe;
+	int rc = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status vpi=%#x st=%x hdr=%x\n",
+			       sport->indicator, status,
+			       le16_to_cpu(hdr->status));
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static void
+efct_hw_port_free_resources(struct efc_sli_port_s *sport, int evt, void *data)
+{
+	struct efct_hw_s *hw = sport->hw;
+	struct efct_s *efct = hw->os;
+
+	/* Clear the sport attached flag */
+	sport->attached = false;
+
+	/* Free the service parameters buffer */
+	if (sport->dma.virt) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  sport->dma.size, sport->dma.virt,
+				  sport->dma.phys);
+		memset(&sport->dma, 0, sizeof(struct efc_dma_s));
+	}
+
+	/* Free the command buffer */
+	kfree(data);
+
+	/* Free the SLI resources */
+	sli_resource_free(&hw->sli, SLI_RSRC_VPI, sport->indicator);
+
+	efc_lport_cb(efct->efcport, evt, sport);
+}
+
+static void
+efct_hw_port_send_evt(struct efc_sli_port_s *sport, int evt, void *data)
+{
+	struct efct_hw_s *hw = sport->hw;
+	struct efct_s *efct = hw->os;
+
+	/* Free the mbox buffer */
+	kfree(data);
+
+	/* Now inform the registered callbacks */
+	efc_lport_cb(efct->efcport, evt, sport);
+
+	/* Set the sport attached flag */
+	if (evt == EFC_HW_PORT_ATTACH_OK)
+		sport->attached = true;
+
+	/* If there is a pending free request, then handle it now */
+	if (sport->free_req_pending)
+		efct_hw_port_free_unreg_vpi(sport, NULL);
+}
+
+static int
+efct_hw_port_alloc_init_vpi_cb(struct efct_hw_s *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port_s *sport = arg;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_port_send_evt(sport, EFC_HW_PORT_ALLOC_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_port_alloc_init_vpi(struct efc_sli_port_s *sport, void *data)
+{
+	struct efct_hw_s *hw = sport->hw;
+	int rc;
+
+	/* If there is a pending free request, then handle it now */
+	if (sport->free_req_pending) {
+		efct_hw_port_free_resources(sport, EFC_HW_PORT_FREE_OK, data);
+		return;
+	}
+
+	rc = sli_cmd_init_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			      sport->indicator, sport->domain->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_alloc_init_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_port_alloc_read_sparm64_cb(struct efct_hw_s *hw,
+				   int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port_s *sport = arg;
+	u8 *payload = NULL;
+	struct efct_s *efct = hw->os;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	payload = sport->dma.virt;
+
+	memcpy(&sport->sli_wwpn,
+	       payload + SLI4_READ_SPARM64_WWPN_OFFSET,
+		sizeof(sport->sli_wwpn));
+	memcpy(&sport->sli_wwnn,
+	       payload + SLI4_READ_SPARM64_WWNN_OFFSET,
+		sizeof(sport->sli_wwnn));
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  sport->dma.size, sport->dma.virt, sport->dma.phys);
+	memset(&sport->dma, 0, sizeof(struct efc_dma_s));
+	efct_hw_port_alloc_init_vpi(sport, mqe);
+	return 0;
+}
+
+static void
+efct_hw_port_alloc_read_sparm64(struct efc_sli_port_s *sport, void *data)
+{
+	struct efct_hw_s *hw = sport->hw;
+	struct efct_s *efct = hw->os;
+	int rc;
+
+	/* Allocate memory for the service parameters */
+	sport->dma.size = 112;
+	sport->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					     sport->dma.size, &sport->dma.phys,
+					     GFP_DMA);
+	if (!sport->dma.virt) {
+		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
+				  &sport->dma, sport->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_alloc_read_sparm64_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_port_attach_reg_vpi_cb(struct efct_hw_s *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port_s *sport = arg;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_port_send_evt(sport, EFC_HW_PORT_ATTACH_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_port_attach_reg_vpi(struct efc_sli_port_s *sport, void *data)
+{
+	struct efct_hw_s *hw = sport->hw;
+	int rc;
+
+	if (sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			    sport->fc_id, sport->sli_wwpn,
+			    sport->indicator, sport->domain->indicator,
+			    false)) {
+		efc_log_err(hw->os, "REG_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_attach_reg_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, data);
+	}
+}
+
+static int
+efct_hw_port_free_unreg_vpi_cb(struct efct_hw_s *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port_s *sport = arg;
+	int evt = EFC_HW_PORT_FREE_OK;
+	int rc = 0;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		evt = EFC_HW_PORT_FREE_FAIL;
+		rc = -1;
+	}
+
+	efct_hw_port_free_resources(sport, evt, mqe);
+	return rc;
+}
+
+static void
+efct_hw_port_free_unreg_vpi(struct efc_sli_port_s *sport, void *data)
+{
+	struct efct_hw_s *hw = sport->hw;
+	int rc;
+
+	/* Allocate memory and send unreg_vpi */
+	if (!data) {
+		data = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!data) {
+			efct_hw_port_free_resources(sport,
+						    EFC_HW_PORT_FREE_FAIL,
+						    data);
+			return;
+		}
+		memset(data, 0, SLI4_BMBX_SIZE);
+	}
+
+	rc = sli_cmd_unreg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			       sport->indicator, SLI4_UNREG_TYPE_PORT);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_FREE_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_free_unreg_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_FREE_FAIL, data);
+	}
+}
+
+static int
+efct_hw_domain_get_mbox_status(struct efc_domain_s *domain,
+			       u8 *mqe, int status)
+{
+	struct efct_hw_s *hw = domain->hw;
+	struct sli4_mbox_command_header_s *hdr =
+			(struct sli4_mbox_command_header_s *)mqe;
+	int rc = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status vfi=%#x st=%x hdr=%x\n",
+			       domain->indicator, status,
+			       le16_to_cpu(hdr->status));
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static void
+efct_hw_domain_free_resources(struct efc_domain_s *domain,
+			      int evt, void *data)
+{
+	struct efct_hw_s *hw = domain->hw;
+	struct efct_s *efct = hw->os;
+
+	/* Free the service parameters buffer */
+	if (domain->dma.virt) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma_s));
+	}
+
+	/* Free the command buffer */
+	kfree(data);
+
+	/* Free the SLI resources */
+	sli_resource_free(&hw->sli, SLI_RSRC_VFI, domain->indicator);
+
+	efc_domain_cb(efct->efcport, evt, domain);
+}
+
+static void
+efct_hw_domain_send_sport_evt(struct efc_domain_s *domain,
+			      int port_evt, int domain_evt, void *data)
+{
+	struct efct_hw_s *hw = domain->hw;
+	struct efct_s *efct = hw->os;
+
+	/* Free the mbox buffer */
+	kfree(data);
+
+	/* Send alloc/attach ok to the physical sport */
+	efct_hw_port_send_evt(domain->sport, port_evt, NULL);
+
+	/* Now inform the registered callbacks */
+	efc_domain_cb(efct->efcport, domain_evt, domain);
+}
+
+static int
+efct_hw_domain_alloc_read_sparm64_cb(struct efct_hw_s *hw,
+				     int status, u8 *mqe, void *arg)
+{
+	struct efc_domain_s *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	hw->domain = domain;
+	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ALLOC_OK,
+				      EFC_HW_DOMAIN_ALLOC_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_domain_alloc_read_sparm64(struct efc_domain_s *domain, void *data)
+{
+	struct efct_hw_s *hw = domain->hw;
+	int rc;
+
+	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
+				  &domain->dma, SLI4_READ_SPARM64_VPI_DEFAULT);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_alloc_read_sparm64_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_domain_alloc_init_vfi_cb(struct efct_hw_s *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain_s *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_domain_alloc_read_sparm64(domain, mqe);
+	return 0;
+}
+
+static void
+efct_hw_domain_alloc_init_vfi(struct efc_domain_s *domain, void *data)
+{
+	struct efct_hw_s *hw = domain->hw;
+	struct efc_sli_port_s *sport = domain->sport;
+	int rc;
+
+	/*
+	 * For FC, the HW has already registered an FCFI.
+	 * Copy FCF information into the domain and jump to INIT_VFI.
+	 */
+	domain->fcf_indicator = hw->fcf_indicator;
+	rc = sli_cmd_init_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			      domain->indicator, domain->fcf_indicator,
+			sport->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VFI format failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_alloc_init_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VFI command failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_domain_attach_reg_vfi_cb(struct efct_hw_s *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain_s *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		hw->domain = NULL;
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ATTACH_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ATTACH_OK,
+				      EFC_HW_DOMAIN_ATTACH_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_domain_attach_reg_vfi(struct efc_domain_s *domain, void *data)
+{
+	struct efct_hw_s *hw = domain->hw;
+	int rc;
+
+	if (sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			    domain->indicator, domain->fcf_indicator,
+			    domain->dma, domain->sport->indicator,
+			    domain->sport->sli_wwpn,
+			    domain->sport->fc_id)) {
+		efc_log_err(hw->os, "REG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_attach_reg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return;
+
+cleanup:
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain,
+				      EFC_HW_DOMAIN_ATTACH_FAIL, data);
+}
+
+static int
+efct_hw_domain_free_unreg_vfi_cb(struct efct_hw_s *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain_s *domain = arg;
+	int evt = EFC_HW_DOMAIN_FREE_OK;
+	int rc = 0;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		evt = EFC_HW_DOMAIN_FREE_FAIL;
+		rc = -1;
+	}
+
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain, evt, mqe);
+	return rc;
+}
+
+static void
+efct_hw_domain_free_unreg_vfi(struct efc_domain_s *domain, void *data)
+{
+	struct efct_hw_s *hw = domain->hw;
+	int rc;
+
+	if (!data) {
+		data = kzalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!data)
+			goto cleanup;
+	}
+
+	rc = sli_cmd_unreg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			       domain->indicator, SLI4_UNREG_TYPE_DOMAIN);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_free_unreg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return;
+
+cleanup:
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data);
+}
+
+/**
+ * @brief Write a portion of a firmware image to the device.
+ *
+ * @par Description
+ * Calls the correct firmware write function based on the device type.
+ *
+ * @param hw Hardware context.
+ * @param dma DMA structure containing the firmware image chunk.
+ * @param size Size of the firmware image chunk.
+ * @param offset Offset, in bytes, from the beginning of the firmware image.
+ * @param last True if this is the last chunk of the image.
+ * Causes the image to be committed to flash.
+ * @param cb Pointer to a callback function that is called when the command
+ * completes.
+ * The callback function prototype is
+ * <tt>void cb(int status, u32 bytes_written, u32 change_status, void *arg)</tt>.
+ * @param arg Pointer to be passed to the callback function.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+enum efct_hw_rtn_e
+efct_hw_firmware_write(struct efct_hw_s *hw, struct efc_dma_s *dma,
+		       u32 size, u32 offset, int last,
+			void (*cb)(int status, u32 bytes_written,
+				   u32 change_status, void *arg),
+			void *arg)
+{
+	return efct_hw_firmware_write_sli4_intf_2(hw, dma, size, offset,
+						     last, cb, arg);
+}
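+/*
+ * Usage sketch (illustrative only; fw_done(), chunk_dma, chunk_len, offset,
+ * image_len and ctx are assumed caller-side names): the image is pushed one
+ * DMA chunk at a time, and only the final call sets @last so the new image
+ * is committed to flash once:
+ *
+ *	last = (offset + chunk_len >= image_len);
+ *	rc = efct_hw_firmware_write(hw, &chunk_dma, chunk_len, offset,
+ *				    last, fw_done, ctx);
+ *
+ * fw_done() receives the status, bytes_written, and change_status when the
+ * WRITE_OBJECT mailbox completes; the caller advances offset by
+ * bytes_written and submits the next chunk from there.
+ */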
+
+/**
+ * @brief Write a portion of a firmware image to the Emulex XE201 ASIC Type=2.
+ *
+ * @par Description
+ * Creates a SLI_CONFIG mailbox command, fills it with the correct values to
+ * write a firmware image chunk, and then sends the command with
+ * efct_hw_command(). On completion, the callback function
+ * efct_hw_fw_write_cb() gets called to free the mailbox and to signal the
+ * caller that the write has completed.
+ *
+ * @param hw Hardware context.
+ * @param dma DMA structure containing the firmware image chunk.
+ * @param size Size of the firmware image chunk.
+ * @param offset Offset, in bytes, from the beginning of the firmware image.
+ * @param last True if this is the last chunk of the image. Causes the image to
+ * be committed to flash.
+ * @param cb Pointer to a callback function that is called when the command
+ * completes.
+ * The callback function prototype is
+ * <tt>void cb(int status, u32 bytes_written, u32 change_status, void *arg)</tt>.
+ * @param arg Pointer to be passed to the callback function.
+ *
+ * @return Returns 0 on success, or a non-zero value on failure.
+ */
+static enum efct_hw_rtn_e
+efct_hw_firmware_write_sli4_intf_2(struct efct_hw_s *hw, struct efc_dma_s *dma,
+				   u32 size, u32 offset, int last,
+			      void (*cb)(int status, u32 bytes_written,
+					 u32 change_status, void *arg),
+				void *arg)
+{
+	enum efct_hw_rtn_e rc = EFCT_HW_RTN_ERROR;
+	u8 *mbxdata;
+	struct efct_hw_fw_wr_cb_arg *cb_arg;
+	int noc = 0;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+	memset(cb_arg, 0, sizeof(struct efct_hw_fw_wr_cb_arg));
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_common_write_object(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+					noc, last, size, offset, "/prg/",
+					dma))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_fw_write, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "COMMON_WRITE_OBJECT failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+/**
+ * @brief Called when the WRITE OBJECT command completes.
+ *
+ * @par Description
+ * Get the number of bytes actually written out of the response, free the
+ * mailbox that was malloc'd by efct_hw_firmware_write(), then call the
+ * callback and pass the status and bytes written.
+ *
+ * @param hw Hardware context.
+ * @param status Status field from the mbox completion.
+ * @param mqe Mailbox response structure.
+ * @param arg Pointer to a callback function that signals the caller that the
+ * command is done.
+ * The callback function prototype is <tt>void cb(int status,
+ * u32 bytes_written, u32 change_status, void *arg)</tt>.
+ *
+ * @return Returns 0.
+ */
+static int
+efct_hw_cb_fw_write(struct efct_hw_s *hw, int status,
+		    u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_sli_config_s *mbox_rsp =
+					(struct sli4_cmd_sli_config_s *)mqe;
+	struct sli4_rsp_cmn_write_object_s *wr_obj_rsp;
+	struct efct_hw_fw_wr_cb_arg *cb_arg = arg;
+	u32 bytes_written;
+	u16 mbox_status;
+	u32 change_status;
+
+	wr_obj_rsp = (struct sli4_rsp_cmn_write_object_s *)
+		      &mbox_rsp->payload.embed;
+	bytes_written = le32_to_cpu(wr_obj_rsp->actual_write_length);
+	mbox_status = le16_to_cpu(mbox_rsp->hdr.status);
+	change_status = (le32_to_cpu(wr_obj_rsp->change_status_dword) &
+			 RSP_CHANGE_STATUS);
+
+	kfree(mqe);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (!status && mbox_status)
+				status = mbox_status;
+			cb_arg->cb(status, bytes_written, change_status,
+				   cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 6910dca917a4..f4877d373849 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1200,5 +1200,76 @@ efct_hw_port_control(struct efct_hw_s *hw, enum efct_hw_port_e ctrl,
 		     uintptr_t value,
 		void (*cb)(int status, uintptr_t value, void *arg),
 		void *arg);
+extern enum efct_hw_rtn_e
+efct_hw_port_alloc(struct efc_lport *efc, struct efc_sli_port_s *sport,
+		   struct efc_domain_s *domain, u8 *wwpn);
+extern enum efct_hw_rtn_e
+efct_hw_port_attach(struct efc_lport *efc, struct efc_sli_port_s *sport,
+		    u32 fc_id);
+extern enum efct_hw_rtn_e
+efct_hw_port_free(struct efc_lport *efc, struct efc_sli_port_s *sport);
+extern enum efct_hw_rtn_e
+efct_hw_domain_alloc(struct efc_lport *efc, struct efc_domain_s *domain,
+		     u32 fcf);
+extern enum efct_hw_rtn_e
+efct_hw_domain_attach(struct efc_lport *efc,
+		      struct efc_domain_s *domain, u32 fc_id);
+extern enum efct_hw_rtn_e
+efct_hw_domain_free(struct efc_lport *efc, struct efc_domain_s *domain);
+extern enum efct_hw_rtn_e
+efct_hw_domain_force_free(struct efc_lport *efc, struct efc_domain_s *domain);
+extern enum efct_hw_rtn_e
+efct_hw_node_alloc(struct efc_lport *efc, struct efc_remote_node_s *rnode,
+		   u32 fc_addr, struct efc_sli_port_s *sport);
+extern enum efct_hw_rtn_e
+efct_hw_node_free_all(struct efct_hw_s *hw);
+extern enum efct_hw_rtn_e
+efct_hw_node_attach(struct efc_lport *efc, struct efc_remote_node_s *rnode,
+		    struct efc_dma_s *sparms);
+extern enum efct_hw_rtn_e
+efct_hw_node_detach(struct efc_lport *efc, struct efc_remote_node_s *rnode);
+extern enum efct_hw_rtn_e
+efct_hw_node_free_resources(struct efc_lport *efc,
+			    struct efc_remote_node_s *rnode);
+
+extern enum efct_hw_rtn_e
+efct_hw_firmware_write(struct efct_hw_s *hw, struct efc_dma_s *dma,
+		       u32 size, u32 offset, int last,
+		       void (*cb)(int status, u32 bytes_written,
+				  u32 change_status, void *arg),
+		       void *arg);
+
+extern enum efct_hw_rtn_e
+efct_hw_get_port_protocol(struct efct_hw_s *hw, u32 pci_func,
+			  void (*mgmt_cb)(int status,
+					  enum efct_hw_port_protocol_e
+					  port_protocol,
+			  void *arg),
+		void *ul_arg);
+extern enum efct_hw_rtn_e
+efct_hw_set_port_protocol(struct efct_hw_s *hw,
+			  enum efct_hw_port_protocol_e profile,
+			  u32 pci_func,
+			  void (*mgmt_cb)(int status,  void *arg),
+			  void *ul_arg);
+
+extern enum efct_hw_rtn_e
+efct_hw_get_nvparms(struct efct_hw_s *hw,
+		    void (*mgmt_cb)(int status, u8 *wwpn,
+				    u8 *wwnn, u8 hard_alpa,
+				    u32 preferred_d_id, void *arg),
+		    void *arg);
+extern enum efct_hw_rtn_e
+efct_hw_set_nvparms(struct efct_hw_s *hw,
+		    void (*mgmt_cb)(int status, void *arg),
+		    u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		    u32 preferred_d_id, void *arg);
+
+typedef int (*efct_hw_async_cb_t)(struct efct_hw_s *hw, int status,
+				  u8 *mqe, void *arg);
+extern int
+efct_hw_async_call(struct efct_hw_s *hw,
+		   efct_hw_async_cb_t callback, void *arg);
+
 
 #endif /* __EFCT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 30/32] elx: efct: scsi_transport_fc host interface support
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (28 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 29/32] elx: efct: Firmware update, async link processing James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-23 21:55 ` [PATCH 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
                   ` (2 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Integration with the scsi_transport_fc host interfaces

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_xport.c | 580 +++++++++++++++++++++++++++++++++++++
 1 file changed, 580 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index d43027c57732..c0f75c0dde9c 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -1146,3 +1146,583 @@ int efct_scsi_del_device(struct efct_s *efct)
 	}
 	return 0;
 }
+
+/**
+ * @brief Copy the vport DID into the SCSI host port id.
+ *
+ * @param shost Kernel SCSI host pointer.
+ */
+static void
+efct_get_host_port_id(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport = (struct efct_vport_s *)shost->hostdata;
+	struct efct_s *efct = vport->efct;
+	struct efc_lport *efc = efct->efcport;
+	struct efc_sli_port_s *sport;
+
+	if (efc->domain && efc->domain->sport) {
+		sport = efc->domain->sport;
+		fc_host_port_id(shost) = sport->fc_id;
+	}
+}
+
+/**
+ * @brief Set the value of the SCSI host port type.
+ *
+ * @param shost Kernel SCSI host pointer.
+ */
+static void
+efct_get_host_port_type(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport = (struct efct_vport_s *)shost->hostdata;
+	struct efct_s *efct = vport->efct;
+	struct efc_lport *efc = efct->efcport;
+	struct efc_sli_port_s *sport;
+	int type = FC_PORTTYPE_UNKNOWN;
+
+	if (efc->domain && efc->domain->sport) {
+		if (efc->domain->is_loop) {
+			type = FC_PORTTYPE_LPORT;
+		} else {
+			sport = efc->domain->sport;
+			if (sport->is_vport)
+				type = FC_PORTTYPE_NPIV;
+			else if (sport->topology == EFC_SPORT_TOPOLOGY_P2P)
+				type = FC_PORTTYPE_PTP;
+			else if (sport->topology == EFC_SPORT_TOPOLOGY_UNKNOWN)
+				type = FC_PORTTYPE_UNKNOWN;
+			else
+				type = FC_PORTTYPE_NPORT;
+		}
+	}
+	fc_host_port_type(shost) = type;
+}
+
+static void
+efct_get_host_vport_type(struct Scsi_Host *shost)
+{
+	fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
+}
+
+/**
+ * @brief Set the value of the SCSI host port state
+ *
+ * @param shost Kernel SCSI host pointer.
+ */
+static void
+efct_get_host_port_state(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport = (struct efct_vport_s *)shost->hostdata;
+	struct efct_s *efct = vport->efct;
+	struct efc_lport *efc = efct->efcport;
+
+	if (efc->domain)
+		fc_host_port_state(shost) = FC_PORTSTATE_ONLINE;
+	else
+		fc_host_port_state(shost) = FC_PORTSTATE_OFFLINE;
+}
+
+/**
+ * @brief Set the value of the SCSI host speed.
+ *
+ * @param shost Kernel SCSI host pointer.
+ */
+static void
+efct_get_host_speed(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport = (struct efct_vport_s *)shost->hostdata;
+	struct efct_s *efct = vport->efct;
+	struct efc_lport *efc = efct->efcport;
+	union efct_xport_stats_u speed;
+	u32 fc_speed = FC_PORTSPEED_UNKNOWN;
+	int rc;
+
+	if (efc->domain && efc->domain->sport) {
+		rc = efct_xport_status(efct->xport,
+				       EFCT_XPORT_LINK_SPEED, &speed);
+		if (rc == 0) {
+			switch (speed.value) {
+			case 1000:
+				fc_speed = FC_PORTSPEED_1GBIT;
+				break;
+			case 2000:
+				fc_speed = FC_PORTSPEED_2GBIT;
+				break;
+			case 4000:
+				fc_speed = FC_PORTSPEED_4GBIT;
+				break;
+			case 8000:
+				fc_speed = FC_PORTSPEED_8GBIT;
+				break;
+			case 10000:
+				fc_speed = FC_PORTSPEED_10GBIT;
+				break;
+			case 16000:
+				fc_speed = FC_PORTSPEED_16GBIT;
+				break;
+			case 32000:
+				fc_speed = FC_PORTSPEED_32GBIT;
+				break;
+			}
+		}
+	}
+	fc_host_speed(shost) = fc_speed;
+}
+
+/**
+ * @brief Set the value of the SCSI host fabric name.
+ *
+ * @param shost Kernel SCSI host pointer.
+ */
+static void
+efct_get_host_fabric_name(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport = (struct efct_vport_s *)shost->hostdata;
+	struct efct_s *efct = vport->efct;
+	struct efc_lport *efc = efct->efcport;
+
+	if (efc->domain) {
+		struct fc_els_flogi  *sp =
+			(struct fc_els_flogi  *)
+				efc->domain->flogi_service_params;
+
+		fc_host_fabric_name(shost) = be64_to_cpu(sp->fl_wwnn);
+	}
+}
+
+/**
+ * @brief Return statistical information about the adapter.
+ *
+ * @par Description
+ * Returns NULL if the link statistics cannot be retrieved from the
+ * transport; otherwise updates and returns the host statistics.
+ *
+ * @param shost Kernel SCSI host pointer.
+ *
+ * @return NULL on error, otherwise the address of the adapter host statistics.
+ */
+static struct fc_host_statistics *
+efct_get_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport = (struct efct_vport_s *)shost->hostdata;
+	struct efct_s *efct = vport->efct;
+	union efct_xport_stats_u stats;
+	struct efct_xport_s *xport = efct->xport;
+	u32 rc = 1;
+
+	rc = efct_xport_status(xport, EFCT_XPORT_LINK_STATISTICS, &stats);
+	if (rc != 0) {
+		pr_err("efct_xport_status returned non 0 - %d\n", rc);
+		return NULL;
+	}
+
+	vport->fc_host_stats.loss_of_sync_count =
+		stats.stats.link_stats.loss_of_sync_error_count;
+	vport->fc_host_stats.link_failure_count =
+		stats.stats.link_stats.link_failure_error_count;
+	vport->fc_host_stats.prim_seq_protocol_err_count =
+		stats.stats.link_stats.primitive_sequence_error_count;
+	vport->fc_host_stats.invalid_tx_word_count =
+		stats.stats.link_stats.invalid_transmission_word_error_count;
+	vport->fc_host_stats.invalid_crc_count =
+		stats.stats.link_stats.crc_error_count;
+	/*
+	 * The mbox returns a kbyte count; convert to 4-byte FC words
+	 * (1 kbyte = 1024 bytes = 256 words).
+	 */
+	vport->fc_host_stats.tx_words =
+		stats.stats.host_stats.transmit_kbyte_count * 256;
+	vport->fc_host_stats.rx_words =
+		stats.stats.host_stats.receive_kbyte_count * 256;
+	vport->fc_host_stats.tx_frames =
+		stats.stats.host_stats.transmit_frame_count;
+	vport->fc_host_stats.rx_frames =
+		stats.stats.host_stats.receive_frame_count;
+
+	vport->fc_host_stats.fcp_input_requests =
+			xport->fcp_stats.input_requests;
+	vport->fc_host_stats.fcp_output_requests =
+			xport->fcp_stats.output_requests;
+	vport->fc_host_stats.fcp_output_megabytes =
+			xport->fcp_stats.output_bytes >> 20;
+	vport->fc_host_stats.fcp_input_megabytes =
+			xport->fcp_stats.input_bytes >> 20;
+	vport->fc_host_stats.fcp_control_requests =
+			xport->fcp_stats.control_requests;
+
+	return &vport->fc_host_stats;
+}
+
+/**
+ * @brief Reset the adapter link statistics.
+ *
+ * @param shost Kernel SCSI host pointer.
+ */
+static void
+efct_reset_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport = (struct efct_vport_s *)shost->hostdata;
+	struct efct_s *efct = vport->efct;
+	/* argument has no purpose for this action */
+	union efct_xport_stats_u dummy;
+	u32 rc = 0;
+
+	rc = efct_xport_status(efct->xport, EFCT_XPORT_LINK_STAT_RESET, &dummy);
+	if (rc != 0)
+		pr_err("efct_xport_status returned non 0 - %d\n", rc);
+}
+
+/**
+ * @brief Set the target port ID to the remote node DID, or -1.
+ *
+ * @param starget Kernel SCSI target pointer.
+ */
+static void
+efct_get_starget_port_id(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+/**
+ * @brief Set the target node name.
+ *
+ * @par Description
+ * Set the target node name to the remote node WWNN, or zero.
+ *
+ * @param starget Kernel SCSI target pointer.
+ */
+static void
+efct_get_starget_node_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+/**
+ * @brief Set the target port name.
+ *
+ * @par Description
+ * Set the target port name to the remote node WWPN, or zero.
+ *
+ * @param starget Kernel SCSI target pointer.
+ */
+static void
+efct_get_starget_port_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+/**
+ * @brief Set the vport's symbolic name.
+ *
+ * @par Description
+ * This function is called by the transport after the fc_vport's symbolic name
+ * has been changed. This function re-registers the symbolic name with the
+ * switch to propagate the change into the fabric, if the vport is active.
+ *
+ * @param fc_vport The fc_vport whose symbolic name has been changed.
+ */
+static void
+efct_set_vport_symbolic_name(struct fc_vport *fc_vport)
+{
+	pr_err("%s\n", __func__);
+}
+
+/**
+ * @brief Gracefully take the link down and reinitialize it
+ * (does not issue LIP).
+ *
+ * @par Description
+ * Bring the link down gracefully then re-init the link. The firmware will
+ * re-initialize the Fibre Channel interface as required.
+ * It does not issue a LIP.
+ *
+ * @param shost Scsi_Host pointer.
+ *
+ * @return
+ * - 0 - Success.
+ * - EPERM - Port is offline or management commands are being blocked.
+ * - ENOMEM - Unable to allocate memory for the mailbox command.
+ * - EIO - Error in sending the mailbox command.
+ */
+static int
+efct_issue_lip(struct Scsi_Host *shost)
+{
+	struct efct_vport_s *vport =
+			shost ? (struct efct_vport_s *)shost->hostdata : NULL;
+	struct efct_s *efct = vport ? vport->efct : NULL;
+
+	if (!shost || !vport || !efct) {
+		pr_err("%s: shost=%p vport=%p efct=%p\n", __func__,
+		       shost, vport, efct);
+		return -EPERM;
+	}
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_OFFLINE))
+		efc_log_test(efct, "EFCT_XPORT_PORT_OFFLINE failed\n");
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE))
+		efc_log_test(efct, "EFCT_XPORT_PORT_ONLINE failed\n");
+
+	return 0;
+}
+
+struct efct_vport_s *
+efct_scsi_new_vport(struct efct_s *efct, struct device *dev)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport_s *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return NULL;
+	}
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport_s *)shost->hostdata;
+	vport->efct = efct;
+	vport->is_vport = true;
+
+	shost->can_queue = efct_scsi_get_property(efct, EFCT_SCSI_MAX_IOS);
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/* can only accept (from mid-layer) as many SGEs as we have pre-registered */
+	shost->sg_tablesize = efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGL);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_vport_fc_tt;
+	efc_log_debug(efct, "vport transport template=%p\n",
+		       efct_vport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, dev, &efct->pcidev->dev);
+	if (error) {
+		efc_log_test(efct, "failed scsi_add_host_with_dma\n");
+		scsi_host_put(shost);
+		return NULL;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		     "Emulex %s FV%s DV%s", efct->model,
+		     efct->fw_version, efct->driver_version);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+	vport->shost = shost;
+
+	return vport;
+}
+
+int efct_scsi_del_vport(struct efct_s *efct, struct Scsi_Host *shost)
+{
+	if (shost) {
+		efc_log_debug(efct,
+			       "Unregistering vport with Transport Layer\n");
+		efct_xport_remove_host(shost);
+		efc_log_debug(efct, "Unregistering vport with SCSI Midlayer\n");
+		scsi_remove_host(shost);
+		scsi_host_put(shost);
+		return 0;
+	}
+	return -1;
+}
+
+static int
+efct_vport_create(struct fc_vport *fc_vport, bool disable)
+{
+	struct Scsi_Host *shost = fc_vport ? fc_vport->shost : NULL;
+	struct efct_vport_s *pport = shost ?
+					(struct efct_vport_s *)shost->hostdata :
+					NULL;
+	struct efct_s *efct = pport ? pport->efct : NULL;
+	struct efct_vport_s *vport = NULL;
+
+	if (!fc_vport || !shost || !efct)
+		goto fail;
+
+	vport = efct_scsi_new_vport(efct, &fc_vport->dev);
+	if (!vport) {
+		efc_log_err(efct, "failed to create vport\n");
+		goto fail;
+	}
+
+	vport->fc_vport = fc_vport;
+	vport->npiv_wwpn = fc_vport->port_name;
+	vport->npiv_wwnn = fc_vport->node_name;
+	fc_host_node_name(vport->shost) = vport->npiv_wwnn;
+	fc_host_port_name(vport->shost) = vport->npiv_wwpn;
+	*(struct efct_vport_s **)fc_vport->dd_data = vport;
+
+	return 0;
+
+fail:
+	return -1;
+}
+
+static int
+efct_vport_delete(struct fc_vport *fc_vport)
+{
+	struct efct_vport_s *vport = *(struct efct_vport_s **)fc_vport->dd_data;
+	struct Scsi_Host *shost = vport ? vport->shost : NULL;
+	struct efct_s *efct = vport ? vport->efct : NULL;
+	int rc = -1;
+
+	rc = efct_scsi_del_vport(efct, shost);
+
+	if (rc)
+		pr_err("%s: vport delete failed\n", __func__);
+
+	return rc;
+}
+
+static int
+efct_vport_disable(struct fc_vport *fc_vport, bool disable)
+{
+	return 0;
+}
+
+static struct fc_function_template efct_xport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_port_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+	.vport_disable = efct_vport_disable,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data_s),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+	.vport_create = efct_vport_create,
+	.vport_delete = efct_vport_delete,
+};
+
+static struct fc_function_template efct_vport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_vport_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data_s),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+};
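+/*
+ * Illustrative sketch, not part of this patch: each template above is
+ * turned into a scsi_transport_template with fc_attach_transport() during
+ * transport setup and then hung off the Scsi_Host, e.g.:
+ *
+ *	efct_vport_fc_tt = fc_attach_transport(&efct_vport_functions);
+ *	shost->transportt = efct_vport_fc_tt;
+ *
+ * which is the efct_vport_fc_tt that efct_scsi_new_vport() assigns.
+ */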
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 31/32] elx: efct: Add Makefile and Kconfig for efct driver
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (29 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 30/32] elx: efct: scsi_transport_fc host interface support James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-25 15:55   ` Daniel Wagner
  2019-10-23 21:55 ` [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
  2019-10-25 15:56 ` [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver Daniel Wagner
  32 siblings, 1 reply; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch completes the efct driver population.

This patch adds driver definitions for:
The efct driver Kconfig and Makefile

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/Kconfig  |  8 ++++++++
 drivers/scsi/elx/Makefile | 30 ++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile

diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
new file mode 100644
index 000000000000..3d25d8463c48
--- /dev/null
+++ b/drivers/scsi/elx/Kconfig
@@ -0,0 +1,8 @@
+config SCSI_EFCT
+	tristate "Emulex Fibre Channel Target"
+	depends on PCI && SCSI
+	depends on SCSI_FC_ATTRS
+	select CRC_T10DIF
+	help
+	  The efct driver provides enhanced SCSI Target Mode
+	  support for specific SLI-4 adapters.
diff --git a/drivers/scsi/elx/Makefile b/drivers/scsi/elx/Makefile
new file mode 100644
index 000000000000..79cc4e57676e
--- /dev/null
+++ b/drivers/scsi/elx/Makefile
@@ -0,0 +1,30 @@
+#/*******************************************************************
+# * This file is part of the Emulex Linux Device Driver for         *
+# * Fibre Channel Host Bus Adapters.                                *
+# * Copyright (C) 2018 Broadcom. All Rights Reserved. The term	   *
+# * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
+# *                                                                 *
+# * This program is free software; you can redistribute it and/or   *
+# * modify it under the terms of version 2 of the GNU General       *
+# * Public License as published by the Free Software Foundation.    *
+# * This program is distributed in the hope that it will be useful. *
+# * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+# * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+# * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+# * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+# * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+# * more details, a copy of which can be found in the file COPYING  *
+# * included with this package.                                     *
+# ********************************************************************/
+
+obj-$(CONFIG_SCSI_EFCT) := efct.o
+
+efct-objs := efct/efct_driver.o efct/efct_io.o efct/efct_scsi.o efct/efct_els.o \
+	     efct/efct_xport.o efct/efct_hw.o efct/efct_hw_queues.o \
+	     efct/efct_utils.o efct/efct_lio.o efct/efct_unsol.o
+
+efct-objs += libefc/efc_domain.o libefc/efc_fabric.o libefc/efc_node.o \
+	     libefc/efc_sport.o libefc/efc_device.o \
+	     libefc/efc_lib.o libefc/efc_sm.o
+
+efct-objs += libefc_sli/sli4.o
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (30 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
@ 2019-10-23 21:55 ` James Smart
  2019-10-26  0:34   ` kbuild test robot
                     ` (2 more replies)
  2019-10-25 15:56 ` [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver Daniel Wagner
  32 siblings, 3 replies; 54+ messages in thread
From: James Smart @ 2019-10-23 21:55 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This final patch ties the efct driver into the kernel Kconfig
and build linkages in the drivers/scsi directory.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/Kconfig  | 2 ++
 drivers/scsi/Makefile | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 1b92f3c19ff3..f8f4529d327e 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1176,6 +1176,8 @@ config SCSI_LPFC_DEBUG_FS
 	  This makes debugging information from the lpfc driver
 	  available via the debugfs filesystem.
 
+source "drivers/scsi/elx/Kconfig"
+
 config SCSI_SIM710
 	tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
 	depends on EISA && SCSI
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index c00e3dd57990..844db573283c 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_SCSI_QLOGIC_1280)	+= qla1280.o
 obj-$(CONFIG_SCSI_QLA_FC)	+= qla2xxx/
 obj-$(CONFIG_SCSI_QLA_ISCSI)	+= libiscsi.o qla4xxx/
 obj-$(CONFIG_SCSI_LPFC)		+= lpfc/
+obj-$(CONFIG_SCSI_EFCT)		+= elx/
 obj-$(CONFIG_SCSI_BFA_FC)	+= bfa/
 obj-$(CONFIG_SCSI_CHELSIO_FCOE)	+= csiostor/
 obj-$(CONFIG_SCSI_DMX3191D)	+= dmx3191d.o
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* Re: [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions
  2019-10-23 21:55 ` [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
@ 2019-10-24 16:22   ` Daniel Wagner
  2019-10-25 23:04     ` James Smart
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel Wagner @ 2019-10-24 16:22 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, Ram Vegesna

Hi,

On Wed, Oct 23, 2019 at 02:55:26PM -0700, James Smart wrote:
> +/*************************************************************************
> + * Common SLI-4 register offsets and field definitions
> + */
> +
> +/* SLI_INTF - SLI Interface Definition Register */
> +#define SLI4_INTF_REG		0x0058
> +enum {
> +	SLI4_INTF_REV_SHIFT = 4,
> +	SLI4_INTF_REV_MASK = 0x0F << SLI4_INTF_REV_SHIFT,
> +
> +	SLI4_INTF_REV_S3 = 3 << SLI4_INTF_REV_SHIFT,
> +	SLI4_INTF_REV_S4 = 4 << SLI4_INTF_REV_SHIFT,
> +
> +	SLI4_INTF_FAMILY_SHIFT = 8,
> +	SLI4_INTF_FAMILY_MASK  = 0x0F << SLI4_INTF_FAMILY_SHIFT,
> +
> +	SLI4_FAMILY_CHECK_ASIC_TYPE = 0xf << SLI4_INTF_FAMILY_SHIFT,
> +
> +	SLI4_INTF_IF_TYPE_SHIFT = 12,
> +	SLI4_INTF_IF_TYPE_MASK = 0x0F << SLI4_INTF_IF_TYPE_SHIFT,
> +
> +	SLI4_INTF_IF_TYPE_2 = 2 << SLI4_INTF_IF_TYPE_SHIFT,
> +	SLI4_INTF_IF_TYPE_6 = 6 << SLI4_INTF_IF_TYPE_SHIFT,
> +
> +	SLI4_INTF_VALID_SHIFT = 29,
> +	SLI4_INTF_VALID_MASK = 0x0F << SLI4_INTF_VALID_SHIFT,

Should this be a 32-bit value? This overflows to 34 bits.
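
If the VALID field really is only the top three bits (the
SLI4_INTF_VALID_VALUE of 6 fits in three bits), then maybe something
like this was intended (just guessing from the values):

	SLI4_INTF_VALID_SHIFT = 29,
	SLI4_INTF_VALID_MASK = 0x07 << SLI4_INTF_VALID_SHIFT,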

> +
> +	SLI4_INTF_VALID_VALUE = 6 << SLI4_INTF_VALID_SHIFT,
> +};

Just a style question: what is the benefit of using anonymous enums?
The only reason I came up with was that gdb could show the name of
the value, though a quick test didn't work when the value is passed
into a function. Maybe I did something wrong.

I am asking because the register number is a define while the shift
and mask are enums.

> +
> +/* ASIC_ID - SLI ASIC Type and Revision Register */
> +#define SLI4_ASIC_ID_REG	0x009c
> +enum {
> +	SLI4_ASIC_GEN_SHIFT = 8,
> +	SLI4_ASIC_GEN_MASK = 0xFF << SLI4_ASIC_GEN_SHIFT,
> +	SLI4_ASIC_GEN_5 = 0x0b << SLI4_ASIC_GEN_SHIFT,
> +	SLI4_ASIC_GEN_6 = 0x0c << SLI4_ASIC_GEN_SHIFT,
> +	SLI4_ASIC_GEN_7 = 0x0d << SLI4_ASIC_GEN_SHIFT,
> +};
> +
> +enum {
> +	SLI4_ASIC_REV_A0 = 0x00,
> +	SLI4_ASIC_REV_A1 = 0x01,
> +	SLI4_ASIC_REV_A2 = 0x02,
> +	SLI4_ASIC_REV_A3 = 0x03,
> +	SLI4_ASIC_REV_B0 = 0x10,
> +	SLI4_ASIC_REV_B1 = 0x11,
> +	SLI4_ASIC_REV_B2 = 0x12,
> +	SLI4_ASIC_REV_C0 = 0x20,
> +	SLI4_ASIC_REV_C1 = 0x21,
> +	SLI4_ASIC_REV_C2 = 0x22,
> +	SLI4_ASIC_REV_D0 = 0x30,
> +};
> +
> +/* BMBX - Bootstrap Mailbox Register */
> +#define SLI4_BMBX_REG		0x0160
> +#define SLI4_BMBX_MASK_HI	0x3
> +#define SLI4_BMBX_MASK_LO	0xf
> +#define SLI4_BMBX_RDY		(1 << 0)
> +#define SLI4_BMBX_HI		(1 << 1)
> +#define SLI4_BMBX_WRITE_HI(r)	((upper_32_bits(r) & ~SLI4_BMBX_MASK_HI) | \
> +					SLI4_BMBX_HI)
> +#define SLI4_BMBX_WRITE_LO(r)	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) \
> +				<< 30) | (((r) & ~SLI4_BMBX_MASK_LO) >> 2))

Could you break the line differently so that the expression is a bit
simpler to read (there is a version below which does this
(SLI4_EQ_DOORBELL))?
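
E.g. something like this might be easier on the eyes:

#define SLI4_BMBX_WRITE_LO(r)	\
	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) << 30) | \
	 (((r) & ~SLI4_BMBX_MASK_LO) >> 2))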

> +#define SLI4_BMBX_SIZE				256
> +
> +/* SLIPORT_CONTROL - SLI Port Control Register */
> +#define SLI4_PORT_CTRL_REG		0x0408
> +#define SLI4_PORT_CTRL_IP		(1 << 27)
> +#define SLI4_PORT_CTRL_IDIS		(1 << 22)
> +#define SLI4_PORT_CTRL_FDD		(1 << 31)
> +
> +/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
> +#define SLI4_PORT_ERROR1		0x040c
> +#define SLI4_PORT_ERROR2		0x0410
> +
> +/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
> +#define SLI4_EQCQ_DB_REG		0x120
> +enum {
> +	SLI4_EQ_ID_LO_MASK = 0x01FF,
> +
> +	SLI4_CQ_ID_LO_MASK = 0x03FF,
> +
> +	SLI4_EQCQ_CI_EQ = 0x0200,
> +
> +	SLI4_EQCQ_QT_EQ = 0x00000400,
> +	SLI4_EQCQ_QT_CQ = 0x00000000,
> +
> +	SLI4_EQCQ_ID_HI_SHIFT = 11,
> +	SLI4_EQCQ_ID_HI_MASK = 0xF800,
> +
> +	SLI4_EQCQ_NUM_SHIFT = 16,
> +	SLI4_EQCQ_NUM_MASK = 0x1FFF0000,
> +
> +	SLI4_EQCQ_ARM = 0x20000000,
> +	SLI4_EQCQ_UNARM = 0x00000000,
> +
> +};
> +
> +#define SLI4_EQ_DOORBELL(n, id, a)\
> +	((id & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ |\
> +	(((id >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
> +	((n << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | \
> +	a | SLI4_EQCQ_CI_EQ)
> +
> +#define SLI4_CQ_DOORBELL(n, id, a)\
> +	((id & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ |\
> +	(((id >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
> +	((n << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | a)
> +
> +/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6*/
> +#define SLI4_IF6_EQ_DB_REG	0x120
> +enum {
> +	SLI4_IF6_EQ_ID_MASK = 0x0FFF,
> +
> +	SLI4_IF6_EQ_NUM_SHIFT = 16,
> +	SLI4_IF6_EQ_NUM_MASK = 0x1FFF0000,
> +};
> +
> +#define SLI4_IF6_EQ_DOORBELL(n, id, a)\
> +	((id & SLI4_IF6_EQ_ID_MASK) | \
> +	((n << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK) | a)
> +
> +/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6*/
> +#define SLI4_IF6_CQ_DB_REG	0xC0
> +enum {
> +	SLI4_IF6_CQ_ID_MASK = 0xFFFF,
> +
> +	SLI4_IF6_CQ_NUM_SHIFT = 16,
> +	SLI4_IF6_CQ_NUM_MASK = 0x1FFF0000,
> +};
> +
> +#define SLI4_IF6_CQ_DOORBELL(n, id, a)\
> +	((id & SLI4_IF6_CQ_ID_MASK) | \
> +	((n << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK) | a)

There is sometimes a space before '\' and sometimes not. Just my OCD,
sorry...

> +/**
> + * @brief MQ_DOORBELL - MQ Doorbell Register
> + */
> +#define SLI4_MQ_DB_REG		0x0140	/* register offset */

Are the other register defines also all offsets? Just wondering
whether the comment is pointing out that these values are special or
not.

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 24/32] elx: efct: LIO backend interface routines
  2019-10-23 21:55 ` [PATCH 24/32] elx: efct: LIO backend interface routines James Smart
@ 2019-10-24 22:27   ` Bart Van Assche
  2019-10-28 17:49     ` James Smart
  0 siblings, 1 reply; 54+ messages in thread
From: Bart Van Assche @ 2019-10-24 22:27 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Ram Vegesna

On 10/23/19 2:55 PM, James Smart wrote:
> diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
> new file mode 100644
> index 000000000000..c2661ab3e9c3
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_lio.c
> @@ -0,0 +1,2643 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +
> +#include <scsi/scsi.h>
> +#include <scsi/scsi_host.h>
> +#include <scsi/scsi_device.h>
> +#include <scsi/scsi_cmnd.h>
> +#include <scsi/scsi_tcq.h>
> +#include <target/target_core_base.h>
> +#include <target/target_core_fabric.h>

Please do not include SCSI initiator header files in a SCSI target 
driver. See also commit ba929992522b ("target: Minimize SCSI header 
#include directives").

> +#define	FABRIC_NAME		"efct"
> +#define FABRIC_NAME_NPIV	"efct_npiv"

Some time ago Christoph Hellwig asked not to use the prefix "fabric" but 
to use the prefix "target" instead.

 > +#define	FABRIC_SNPRINTF_LEN	32

"FABRIC_SNPRINTF_LEN" is a bad choice for the name for this constant. 
Please change this into a name that refers to what the purpose of this 
constant is (wwn string?) instead of how that string is generated.

> +#define	FABRIC_SNPRINTF(str, len, pre, wwn)	snprintf(str, len, \
> +		"%s%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x", pre,  \
> +	    (u8)((wwn >> 56) & 0xff), (u8)((wwn >> 48) & 0xff),    \
> +	    (u8)((wwn >> 40) & 0xff), (u8)((wwn >> 32) & 0xff),    \
> +	    (u8)((wwn >> 24) & 0xff), (u8)((wwn >> 16) & 0xff),    \
> +	    (u8)((wwn >>  8) & 0xff), (u8)((wwn & 0xff)))

Please convert this macro into a function and choose a better name, e.g. 
efct_format_wwn().
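
Just as a sketch of what I mean (the name and the use of
put_unaligned_be64()/"%phC" are only suggestions):

static void efct_format_wwn(char *buf, size_t len, const char *pre, u64 wwn)
{
	u8 b[8];

	put_unaligned_be64(wwn, b);
	snprintf(buf, len, "%s%8phC", pre, b);
}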

> +#define	ARRAY2WWN(w, a)	(w = ((((u64)(a)[0]) << 56) | (((u64)(a)[1]) << 48) | \
> +			    (((u64)(a)[2]) << 40) | (((u64)(a)[3]) << 32) | \
> +			    (((u64)(a)[4]) << 24) | (((u64)(a)[5]) << 16) | \
> +			    (((u64)(a)[6]) <<  8) | (((u64)(a)[7]))))

Is this perhaps an open-coded version of get_unaligned_be64()?
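I.e. the whole macro could probably be replaced by something like

	w = get_unaligned_be64(a);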

> +/* Per-target data for virtual targets */
> +struct efct_lio_vport_data_t {
> +	struct list_head list_entry;
> +	bool initiator_mode;
> +	bool target_mode;
> +	u64 phy_wwpn;
> +	u64 phy_wwnn;
> +	u64 vport_wwpn;
> +	u64 vport_wwnn;
> +	struct efct_lio_vport *lio_vport;
> +};
> +
> +/* Per-target data for virtual targets */
> +struct efct_lio_vport_list_t {
> +	struct list_head list_entry;
> +	struct efct_lio_vport *lio_vport;
> +};

Two times the same comment for two different data structures? 
Additionally, what is a "virtual target"?

> +/* local prototypes */
> +static char *efct_lio_get_npiv_fabric_wwn(struct se_portal_group *);
> +static char *efct_lio_get_fabric_wwn(struct se_portal_group *);
> +static u16 efct_lio_get_tag(struct se_portal_group *);
> +static u16 efct_lio_get_npiv_tag(struct se_portal_group *);
> +static int efct_lio_check_demo_mode(struct se_portal_group *);
> +static int efct_lio_check_demo_mode_cache(struct se_portal_group *);
> +static int efct_lio_check_demo_write_protect(struct se_portal_group *);
> +static int efct_lio_check_prod_write_protect(struct se_portal_group *);
> +static int efct_lio_npiv_check_demo_write_protect(struct se_portal_group *);
> +static int efct_lio_npiv_check_prod_write_protect(struct se_portal_group *);
> +static u32 efct_lio_tpg_get_inst_index(struct se_portal_group *);
> +static int efct_lio_check_stop_free(struct se_cmd *se_cmd);
> +static void efct_lio_aborted_task(struct se_cmd *se_cmd);
> +static void efct_lio_release_cmd(struct se_cmd *);
> +static void efct_lio_close_session(struct se_session *);
> +static u32 efct_lio_sess_get_index(struct se_session *);
> +static int efct_lio_write_pending(struct se_cmd *);
> +static void efct_lio_set_default_node_attrs(struct se_node_acl *);
> +static int efct_lio_get_cmd_state(struct se_cmd *);
> +static int efct_lio_queue_data_in(struct se_cmd *);
> +static int efct_lio_queue_status(struct se_cmd *);
> +static void efct_lio_queue_tm_rsp(struct se_cmd *);
> +static struct se_wwn *efct_lio_make_sport(struct target_fabric_configfs *,
> +					  struct config_group *, const char *);
> +static void efct_lio_drop_sport(struct se_wwn *);
> +static void efct_lio_npiv_drop_sport(struct se_wwn *);
> +static int efct_lio_parse_wwn(const char *, u64 *, u8 npiv);
> +static int efct_lio_parse_npiv_wwn(const char *name, size_t size,
> +				   u64 *wwpn, u64 *wwnn);
> +static struct se_portal_group *efct_lio_make_tpg(struct se_wwn *,
> +						 const char *);
> +static struct se_portal_group *efct_lio_npiv_make_tpg(struct se_wwn *,
> +						      const char *);
> +static void efct_lio_drop_tpg(struct se_portal_group *);
> +static struct se_wwn *efct_lio_npiv_make_sport(struct target_fabric_configfs *,
> +					       struct config_group *,
> +					       const char *);
> +static int
> +efct_lio_parse_npiv_wwn(const char *name, size_t size, u64 *wwpn, u64 *wwnn);
> +static void efct_lio_npiv_drop_tpg(struct se_portal_group *);
> +static int efct_lio_async_worker(struct efct_s *efct);
> +static void efct_lio_sg_unmap(struct efct_io_s *io);
> +static int efct_lio_abort_tgt_cb(struct efct_io_s *io,
> +				 enum efct_scsi_io_status_e scsi_status,
> +				    u32 flags, void *arg);
> +
> +static int efct_lio_init_nodeacl(struct se_node_acl *, const char *);
> +
> +static int efct_lio_check_demo_mode_login_only(struct se_portal_group *);
> +static int efct_lio_npiv_check_demo_mode_login_only(struct se_portal_group *);

Please reorder the code in this file such that most or all of these 
function declarations disappear.

> +static ssize_t
> +efct_lio_wwn_version_show(struct config_item *item, char *page)
> +{
> +	return sprintf(page, "Emulex EFCT fabric module version %s\n",
> +		       __stringify(EFCT_LIO_VERSION));
> +}

Version numbers are not useful in upstream code. Please remove this 
attribute and also the EFCT_LIO_VERSION constant.

> +static struct efct_lio_tpg *
> +efct_get_vport_tpg(struct efc_node_s *node)
> +{
> +	struct efct_s *efct;
> +	u64 wwpn = node->sport->wwpn;
> +	struct efct_lio_vport_list_t *vport, *next;
> +	struct efct_lio_vport *lio_vport = NULL;
> +	struct efct_lio_tpg *tpg = NULL;
> +	unsigned long flags = 0;
> +
> +	efct = node->efc->base;
> +	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
> +		list_for_each_entry_safe(vport, next,
> +				 &efct->tgt_efct.vport_list, list_entry) {
> +			lio_vport = vport->lio_vport;
> +			if (wwpn && lio_vport &&
> +			    lio_vport->npiv_wwpn == wwpn) {
> +				efc_log_test(efct, "found tpg on vport\n");
> +				tpg = lio_vport->tpg;
> +				break;
> +			}
> +		}
> +	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
> +	return tpg;
> +}

The indentation in this function is wrong. list_for_each_entry() should 
occur at the same level as spin_lock_irqsave().
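I.e. something like:

	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
	list_for_each_entry_safe(vport, next,
				 &efct->tgt_efct.vport_list, list_entry) {
		...
	}
	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);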

> +/* local static data */

 > +/* local static data */

Are these comments useful?

> +#define LIO_IOFMT "[%04x][i:%0*x t:%0*x h:%04x][c:%02x]"
> +#define LIO_TMFIOFMT "[%04x][i:%0*x t:%0*x h:%04x][f:%02x]"
> +#define LIO_IOFMT_ITT_SIZE(efct)	4
> +
> +#define efct_lio_io_printf(io, fmt, ...) \
> +	efc_log_debug(io->efct, "[%s]" LIO_IOFMT " " fmt,	\
> +	io->node->display_name, io->instance_index,		\
> +	LIO_IOFMT_ITT_SIZE(io->efct), io->init_task_tag,		\
> +	LIO_IOFMT_ITT_SIZE(io->efct), io->tgt_task_tag, io->hw_tag,\
> +	(io->tgt_io.cdb ? io->tgt_io.cdb[0] : 0xFF), ##__VA_ARGS__)
> +#define efct_lio_tmfio_printf(io, fmt, ...) \
> +	efc_log_debug(io->efct, "[%s]" LIO_TMFIOFMT " " fmt,\
> +	io->node->display_name, io->instance_index,		\
> +	LIO_IOFMT_ITT_SIZE(io->efct), io->init_task_tag,		\
> +	LIO_IOFMT_ITT_SIZE(io->efct), io->tgt_task_tag, io->hw_tag,\
> +	io->tgt_io.tmf,  ##__VA_ARGS__)

Please remove the LIO_IOFMT, LIO_TMFIOFMT and LIO_IOFMT_ITT_SIZE macros
and expand them where they are used. I think that will make the above
logging macros much easier to read.

> +#define efct_lio_io_state_trace(io, value) (io->tgt_io.state |= value)

This macro has "trace" in its name but does not trace anything. Please 
either remove this macro or choose a better name.

> +static int  efct_lio_tgt_session_data(struct efct_s *efct, u64 wwpn,
> +				      char *buf, int size)
> +{
> +	struct efc_sli_port_s *sport = NULL;
> +	struct efc_node_s *node = NULL;
> +	struct efc_lport *efc = efct->efcport;
> +	u16 loop_id = 0;
> +	int off = 0, rc = 0;
> +
> +	if (!efc->domain) {
> +		efc_log_err(efct, "failed to find efct/domain\n");
> +		return -1;
> +	}
> +
> +	list_for_each_entry(sport, &efc->domain->sport_list, list_entry) {
> +		if (sport->wwpn == wwpn) {
> +			list_for_each_entry(node, &sport->node_list,
> +					    list_entry) {
> +				/* Dump sessions only remote NPORT
> +				 * sessions
> +				 */
> +				if (efct_lio_node_is_initiator(node)) {
> +					rc = snprintf(buf + off,
> +						      size - off,
> +						"0x%016llx,0x%08x,0x%04x\n",
> +						be64_to_cpup((__force __be64 *)
> +								node->wwpn),
> +						node->rnode.fc_id,
> +						loop_id);
> +					if (rc < 0)
> +						break;
> +					off += rc;
> +				}
> +			}
> +		}
> +	}
> +
> +	buf[size - 1] = '\0';
> +	return 0;
> +}

Please use get_unaligned_be64() instead of using __force casts.
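E.g. (assuming node->wwpn holds the raw big-endian bytes):

	get_unaligned_be64(node->wwpn)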

> +static const struct file_operations efct_debugfs_session_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= efct_debugfs_session_open,
> +	.release	= efct_debugfs_session_close,
> +	.read		= efct_debugfs_session_read,
> +	.write		= efct_debugfs_session_write,
> +	.llseek		= default_llseek,
> +};
> +
> +static const struct file_operations efct_npiv_debugfs_session_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= efct_npiv_debugfs_session_open,
> +	.release	= efct_debugfs_session_close,
> +	.read		= efct_debugfs_session_read,
> +	.write		= efct_debugfs_session_write,
> +	.llseek		= default_llseek,
> +};

Since the information that is exported through debugfs (logged in 
initiators) is information that is also useful for other target drivers, 
I think this functionality should be implemented in the target core 
instead of in this target driver.

> +/* command has been aborted, cleanup here */
> +static void efct_lio_aborted_task(struct se_cmd *se_cmd)
> +{
> +	int rc;
> +	struct efct_scsi_tgt_io_s *ocp = container_of(se_cmd,
> +						     struct efct_scsi_tgt_io_s,
> +						     cmd);
> +	struct efct_io_s *io = container_of(ocp, struct efct_io_s, tgt_io);
> +
> +	efct_lio_io_trace(io, "%s\n", __func__);
> +	efct_lio_io_state_trace(io, EFCT_LIO_STATE_TFO_ABORTED_TASK);
> +
> +	if (!(se_cmd->transport_state & CMD_T_ABORTED) || ocp->rsp_sent)
> +		return;
> +
> +	/*
> +	 * if io is non-null, take a reference out on it so it isn't
> +	 * freed until the abort operation is complete.
> +	 */
> +	if (kref_get_unless_zero(&io->ref) == 0) {
> +		/* command no longer active */
> +		struct efct_s *efct = io->efct;
> +
> +		efc_log_test(efct,
> +			      "success: command no longer active (exists=%d)\n",
> +			     (io != NULL));
> +		return;
> +	}
> +
> +	efct_lio_io_printf(io, "CMD_T_ABORTED set, aborting=%d\n",
> +			   ocp->aborting);
> +	ocp->aborting = true;
> +	/* set to non-success so data moves won't continue */
> +	ocp->err = EFCT_SCSI_STATUS_ABORTED;
> +
> +	/* wait until abort is complete; once we return, LIO will call
> +	 * queue_tm_rsp() which will send response to TMF
> +	 */
> +	init_completion(&io->tgt_io.done);
> +
> +	rc = efct_scsi_tgt_abort_io(io, efct_lio_abort_tgt_cb, NULL);
> +	if (rc == 0) {
> +		/* wait for abort to complete before returning */
> +		rc = wait_for_completion_timeout(&io->tgt_io.done,
> +						 usecs_to_jiffies(10000000));
> +
> +		/* done with reference on aborted IO */
> +		kref_put(&io->ref, io->release);
> +
> +		if (rc) {
> +			efct_lio_io_printf(io,
> +					   "abort completed successfully\n");
> +			/* check if TASK_ABORTED status should be sent
> +			 * for this IO
> +			 */
> +		} else {
> +			efct_lio_io_printf(io,
> +					   "timeout waiting for abort completed\n");
> +		}
> +	} else {
> +		efct_lio_io_printf(io, "Failed to abort\n");
> +	}
> +}

The .aborted_task() callback function must not wait until the aborted 
command has finished but instead must free the resources owned by the 
aborted command.

The comment "check if TASK_ABORTED status should be sent for this IO" is 
wrong. .aborted_task() is only called if no response will be sent to the 
initiator.

> +/**
> + * @brief Housekeeping for LIO SG mapping.
> + *
> + * @param io Pointer to IO context.
> + *
> + * @return count Count returned by pci_map_sg.
> + */

The above comment follows the Doxygen syntax. Kernel function headers 
must use the kernel-doc syntax. See also 
Documentation/process/kernel-docs.rst.

> +static struct se_wwn *
> +efct_lio_make_sport(struct target_fabric_configfs *tf,
> +		    struct config_group *group, const char *name)
> +{
> +	struct efct_lio_sport *lio_sport;
> +	struct efct_s *efct;
> +	int efctidx, ret;
> +	u64 wwpn;
> +	char *sessions_name;
> +
> +	ret = efct_lio_parse_wwn(name, &wwpn, 0);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	/* Now search for the HBA that has this WWPN */
> +	for (efctidx = 0; efctidx < MAX_EFCT_DEVICES; efctidx++) {
> +		u64 pwwn;
> +		u8 pn[8];
> +
> +		efct = efct_devices[efctidx];
> +		if (!efct)
> +			continue;
> +		memcpy(pn, efct_hw_get_ptr(&efct->hw, EFCT_HW_WWN_PORT),
> +		       sizeof(pn));
> +		ARRAY2WWN(pwwn, pn);
> +		if (pwwn == wwpn)
> +			break;
> +	}
> +	if (efctidx == MAX_EFCT_DEVICES) {
> +		pr_err("cannot find EFCT for wwpn %s\n", name);
> +		return ERR_PTR(-ENXIO);
> +	}
> +	efct = efct_devices[efctidx];
> +	lio_sport = kzalloc(sizeof(*lio_sport), GFP_KERNEL);
> +	if (!lio_sport)
> +		return ERR_PTR(-ENOMEM);
> +	lio_sport->efct = efct;
> +	lio_sport->wwpn = wwpn;
> +	FABRIC_SNPRINTF(lio_sport->wwpn_str, sizeof(lio_sport->wwpn_str),
> +			"naa.", wwpn);
> +	efct->tgt_efct.lio_sport = lio_sport;
> +
> +	sessions_name = kasprintf(GFP_KERNEL, "efct-sessions-%d",
> +				  efct->instance_index);
> +	if (sessions_name && efct->sess_debugfs_dir)
> +		lio_sport->sessions = debugfs_create_file(sessions_name,
> +							  0644,
> +						efct->sess_debugfs_dir,
> +						lio_sport,
> +						&efct_debugfs_session_fops);
> +	kfree(sessions_name);
> +
> +	return &lio_sport->sport_wwn;
> +}
> +
> +static struct se_wwn *
> +efct_lio_npiv_make_sport(struct target_fabric_configfs *tf,
> +			 struct config_group *group, const char *name)
> +{
> +	struct efct_lio_vport *lio_vport;
> +	struct efct_s *efct;
> +	int efctidx, ret = -1;
> +	u64 p_wwpn, npiv_wwpn, npiv_wwnn;
> +	char *p, tmp[128];
> +	struct efct_lio_vport_list_t *vport_list;
> +	char *sessions_name;
> +	struct fc_vport *new_fc_vport;
> +	struct fc_vport_identifiers vport_id;
> +	unsigned long flags = 0;
> +
> +	snprintf(tmp, 128, "%s", name);
> +
> +	p = strchr(tmp, '@');
> +
> +	if (!p) {
> +		pr_err("Unable to find separator operator(@)\n");
> +		return ERR_PTR(ret);
> +	}
> +	*p++ = '\0';
> +
> +	ret = efct_lio_parse_wwn(tmp, &p_wwpn, 0);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	ret = efct_lio_parse_npiv_wwn(p, strlen(p) + 1, &npiv_wwpn, &npiv_wwnn);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	 /* Now search for the HBA that has this WWPN */
> +	for (efctidx = 0; efctidx < MAX_EFCT_DEVICES; efctidx++) {
> +		u64 pwwn;
> +		u8 pn[8];
> +
> +		efct = efct_devices[efctidx];
> +		if (!efct)
> +			continue;
> +		if (!efct->xport->req_wwpn) {
> +			memcpy(pn, efct_hw_get_ptr(&efct->hw,
> +				   EFCT_HW_WWN_PORT), sizeof(pn));
> +			ARRAY2WWN(pwwn, pn);
> +		} else {
> +			pwwn = efct->xport->req_wwpn;
> +		}
> +		if (pwwn == p_wwpn)
> +			break;
> +	}
> +	if (efctidx == MAX_EFCT_DEVICES) {
> +		pr_err("cannot find EFCT for base wwpn %s\n", name);
> +		return ERR_PTR(-ENXIO);
> +	}
> +	efct = efct_devices[efctidx];
> +	lio_vport = kzalloc(sizeof(*lio_vport), GFP_KERNEL);
> +	if (!lio_vport)
> +		return ERR_PTR(-ENOMEM);
> +
> +	lio_vport->efct = efct;
> +	lio_vport->wwpn = p_wwpn;
> +	lio_vport->npiv_wwpn = npiv_wwpn;
> +	lio_vport->npiv_wwnn = npiv_wwnn;
> +
> +	FABRIC_SNPRINTF(lio_vport->wwpn_str, sizeof(lio_vport->wwpn_str),
> +			"naa.", npiv_wwpn);
> +
> +	vport_list = kmalloc(sizeof(*vport_list), GFP_KERNEL);
> +	if (!vport_list) {
> +		kfree(lio_vport);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	memset(vport_list, 0, sizeof(struct efct_lio_vport_list_t));
> +	vport_list->lio_vport = lio_vport;
> +	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
> +	INIT_LIST_HEAD(&vport_list->list_entry);
> +	list_add_tail(&vport_list->list_entry, &efct->tgt_efct.vport_list);
> +	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
> +
> +	sessions_name = kasprintf(GFP_KERNEL, "sessions-npiv-%d",
> +				  efct->instance_index);
> +	if (sessions_name && efct->sess_debugfs_dir)
> +		lio_vport->sessions = debugfs_create_file(sessions_name,
> +							  0644,
> +					   efct->sess_debugfs_dir,
> +					   lio_vport,
> +					   &efct_npiv_debugfs_session_fops);
> +	kfree(sessions_name);
> +	memset(&vport_id, 0, sizeof(vport_id));
> +	vport_id.port_name = npiv_wwpn;
> +	vport_id.node_name = npiv_wwnn;
> +	vport_id.roles = FC_PORT_ROLE_FCP_INITIATOR;
> +	vport_id.vport_type = FC_PORTTYPE_NPIV;
> +	vport_id.disable = false;
> +
> +	new_fc_vport = fc_vport_create(efct->shost, 0, &vport_id);
> +	if (!new_fc_vport) {
> +		efc_log_err(efct, "fc_vport_create failed\n");
> +		kfree(lio_vport);
> +		kfree(vport_list);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	lio_vport->fc_vport = new_fc_vport;
> +
> +	return &lio_vport->vport_wwn;
> +}

Please rework efct_lio_make_sport() and efct_lio_npiv_make_sport() such 
that the amount of duplicate code is reduced significantly.

> +
> +	/* Create kernel worker thread to service async requests
> +	 * (new/delete initiator, new cmd/tmf). Previously, a worker thread
> +	 * was needed to make upcalls into LIO because the HW completion
> +	 * context ran in an interrupt context (tasklet).
> +	 * This is no longer necessary now that HW completions run in a
> +	 * kernel thread context. However, performance is much better when
> +	 * these types of reqs have their own thread.
> +	 *
> +	 * Note: We've seen better performance when IO completion (non-async)
> +	 * upcalls into LIO are not given an additional kernel thread.
> +	 * Thus,make such upcalls directly from the HW completion kernel thread
> +	 */
> +
> +	worker = &efct->tgt_efct.async_worker;
> +	efct_mqueue_init(efct, &worker->wq);
> +
> +	worker->thread = kthread_create((int(*)(void *)) efct_lio_async_worker,
> +					efct, "efct_lio_async_worker");
> +
> +	if (IS_ERR(worker->thread)) {
> +		efc_log_err(efct, "kthread_create failed: %ld\n",
> +			     PTR_ERR(worker->thread));
> +		worker->thread = NULL;
> +		return -1;
> +	}
> +
> +	wake_up_process(worker->thread);

Please use the kernel workqueue infrastructure instead of duplicating it.
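
Roughly something like this (only a sketch with made-up names, not a
drop-in replacement):

	struct efct_lio_wq_data_s {
		struct work_struct work;
		/* ...request-specific fields... */
	};

	static void efct_lio_async_work(struct work_struct *work)
	{
		struct efct_lio_wq_data_s *wq_data =
			container_of(work, struct efct_lio_wq_data_s, work);

		/* make the LIO upcall here */
		kfree(wq_data);
	}

	/* at submission time */
	INIT_WORK(&wq_data->work, efct_lio_async_work);
	queue_work(system_wq, &wq_data->work);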

> +/**
> + * @brief Worker thread for LIO commands.
> + *
> + * @par Description
> + * This thread is used to make LIO upcalls associated with
> + * asynchronous requests (i.e. new commands received, register
> + * sessions, unregister sessions).
> + *
> + * @param mythread Pointer to the thread object.
> + *
> + * @return Always returns 0.
> + */
> +static int efct_lio_async_worker(struct efct_s *efct)
> +{
> +	struct efct_lio_wq_data_s *wq_data;
> +	struct efc_node_s *node;
> +	struct se_session *se_sess;
> +	int done = 0;
> +	bool free_data = true;
> +	struct efct_scsi_tgt_io_s *ocp;
> +	int dir, rc = 0;
> +	struct efct_io_s *io;
> +	struct efct_io_s *tmfio;
> +	struct efct_scsi_tgt_node_s *tgt_node = NULL;
> +
> +	while (!done) {
> +		/* Poll with a timeout, to keep the kernel from complaining
> +		 * of not periodically running
> +		 */
> +		wq_data = efct_mqueue_get(&efct->tgt_efct.async_worker.wq,
> +					  10000000);
> +		if (kthread_should_stop())
> +			break;
> +
> +		if (!wq_data)
> +			continue;
> +
> [ ... ]
> +		}
> +		if (free_data)
> +			kfree(wq_data);
> +	}
> +
> +	complete(&efct->tgt_efct.async_worker.done);
> +
> +	return 0;
> +}

Same comment here: please use the kernel workqueue infrastructure 
instead of duplicating it.

> +#define scsi_pack_result(key, code, qualifier) (((key & 0xff) << 16) | \
> +				((code && 0xff) << 8) | (qualifier & 0xff))

Where is this macro used? I haven't found any uses of this macro in this 
patch.

> +#define FABRIC_SNPRINTF_LEN     32

Please choose a better name for this constant. Or even better, leave out 
this define entirely and use sizeof().

> +static inline int
> +efct_mqueue_init(struct efct_s *efct, struct efct_mqueue_s *q)
> +{
> +	memset(q, 0, sizeof(*q));
> +	q->efct = efct;
> +	spin_lock_init(&q->lock);
> +	init_completion(&q->prod);
> +	INIT_LIST_HEAD(&q->queue);
> +	return 0;
> +}

Functions that are not in the hot path should be defined in a .c file 
instead of in a header file.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 02/32] elx: libefc_sli: SLI Descriptors and Queue entries
  2019-10-23 21:55 ` [PATCH 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
@ 2019-10-25  9:59   ` Daniel Wagner
  2019-10-25 23:00     ` James Smart
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel Wagner @ 2019-10-25  9:59 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, Ram Vegesna

Hi James,

> @@ -0,0 +1,26 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFC_COMMON_H__)

What about #ifndef, which is more commonly used?
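I.e.

	#ifndef __EFC_COMMON_H__
	#define __EFC_COMMON_H__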

> +enum {
> +	/* DW2W1 */
> +	DISEED_SGE_HS			= (1 << 2),
> +	DISEED_SGE_WS			= (1 << 3),
> +	DISEED_SGE_IC			= (1 << 4),
> +	DISEED_SGE_ICS			= (1 << 5),
> +	DISEED_SGE_ATRT			= (1 << 6),
> +	DISEED_SGE_AT			= (1 << 7),
> +	DISEED_SGE_FAT			= (1 << 8),
> +	DISEED_SGE_NA			= (1 << 9),
> +	DISEED_SGE_HI			= (1 << 10),

I noticed that BIT() is also used in some places. Wouldn't it make
sense for the whole driver to use one or the other style of bit
definitions?
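E.g.

	DISEED_SGE_HS			= BIT(2),
	DISEED_SGE_WS			= BIT(3),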

> +#define SLI4_QUEUE_DEFAULT_CQ	U16_MAX /** Use the default CQ */
> +
> +#define SLI4_QUEUE_RQ_BATCH	8
> +
> +#define CFG_RQST_CMDSZ(stype)    sizeof(struct sli4_rqst_##stype##_s)

The alignment of sizeof is off. I suppose it should be a tab there instead of spaces.

> +
> +#define CFG_RQST_PYLD_LEN(stype)	\
> +		cpu_to_le32(sizeof(struct sli4_rqst_##stype##_s) -	\
> +			sizeof(struct sli4_rqst_hdr_s))
> +
> +#define CFG_RQST_PYLD_LEN_VAR(stype, varpyld)	\
> +		cpu_to_le32((sizeof(struct sli4_rqst_##stype##_s) +	\
> +			varpyld) - sizeof(struct sli4_rqst_hdr_s))
> +
> +#define SZ_DMAADDR              sizeof(struct sli4_dmaaddr_s)
> +
> +/* Payload length must accommodate both request and response */
> +#define SLI_CONFIG_PYLD_LENGTH(stype)	\
> +	max(sizeof(struct sli4_rqst_##stype##_s),		\
> +		sizeof(struct sli4_rsp_##stype##_s))

Here the '\' have more indentation compared to patch #1.

> +#define CQ_CNT_VAL(type) (CQ_CNT_##type << CQ_CNT_SHIFT)
> +
> +#define SLI4_CQE_BYTES			(4 * sizeof(u32))
> +
> +#define SLI4_COMMON_CREATE_CQ_V2_MAX_PAGES 8

Maybe use the same indentation for the first two defines?

> +
> +/**
> + * @brief Generic Common Create EQ/CQ/MQ/WQ/RQ Queue completion
> + */
> +struct sli4_rsp_cmn_create_queue_s {
> +	struct sli4_rsp_hdr_s	hdr;
> +	__le16	q_id;
> +	u8	rsvd18;
> +	u8	ulp;
> +	__le32	db_offset;
> +	__le16	db_rs;
> +	__le16	db_fmt;
> +};

Just wondering about all these definitions here: these structs
describe the wire format, no? Shouldn't they be marked with __packed?
I keep forgetting the rules.

> +/**
> + * EQ count.
> + */
> +enum {
> +	EQ_CNT_SHIFT	= 26,
> +
> +	EQ_CNT_256	= 0,
> +	EQ_CNT_512	= 1,
> +	EQ_CNT_1024	= 2,
> +	EQ_CNT_2048	= 3,
> +	EQ_CNT_4096	= 3,
> +};
> +#define EQ_CNT_VAL(type) (EQ_CNT_##type << EQ_CNT_SHIFT)
> +
> +#define SLI4_EQE_SIZE_4			0
> +#define SLI4_EQE_SIZE_16		1

Picking up my question from patch #1: what's the idea behind the enums
vs. defines? Why are the last two not an enum?

> +/**
> + * @brief WQ_CREATE
> + *
> + * Create a Work Queue for FC.
> + */
> +#define SLI4_WQ_CREATE_V0_MAX_PAGES	4
> +struct sli4_rqst_wq_create_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	u8		num_pages;
> +	u8		dua_byte;
> +	__le16		cq_id;
> +	struct sli4_dmaaddr_s page_phys_addr[SLI4_WQ_CREATE_V0_MAX_PAGES];
> +	u8		bqu_byte;
> +	u8		ulp;
> +	__le16		rsvd;
> +};
> +
> +struct sli4_rsp_wq_create_s {
> +	struct sli4_rsp_cmn_create_queue_s q_rsp;
> +};
> +
> +/**
> + * @brief WQ_CREATE_V1
> + *
> + * Create a version 1 Work Queue for FC use.
> + */

Why does the workqueue code encode a version? Isn't this pure driver
code?

> +#define SLI4_WQ_CREATE_V1_MAX_PAGES	8
> +struct sli4_rqst_wq_create_v1_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	__le16		num_pages;
> +	__le16		cq_id;
> +	u8		page_size;
> +	u8		wqe_size_byte;
> +	__le16		wqe_count;
> +	__le32		rsvd;
> +	struct	sli4_dmaaddr_s page_phys_addr[SLI4_WQ_CREATE_V1_MAX_PAGES];
> +};
> +
> +struct sli4_rsp_wq_create_v1_s {
> +	struct sli4_rsp_cmn_create_queue_s rsp;
> +};
> +/**

Empty line missing.

> + * @brief WQ_DESTROY
> + *
> + * Destroy an FC Work Queue.
> + */


> +enum {
> +	LINK_ATTN_TYPE_LINK_UP		= 0x01,
> +	LINK_ATTN_TYPE_LINK_DOWN	= 0x02,
> +	LINK_ATTN_TYPE_NO_HARD_ALPA	= 0x03,
> +
> +	LINK_ATTN_P2P			= 0x01,
> +	LINK_ATTN_FC_AL			= 0x02,
> +	LINK_ATTN_INTERNAL_LOOPBACK	= 0x03,
> +	LINK_ATTN_SERDES_LOOPBACK	= 0x04,
> +
> +	LINK_ATTN_1G			= 0x01,
> +	LINK_ATTN_2G			= 0x02,
> +	LINK_ATTN_4G			= 0x04,
> +	LINK_ATTN_8G			= 0x08,
> +	LINK_ATTN_10G			= 0x0a,
> +	LINK_ATTN_16G			= 0x10,
> +

One empty line too many.

> +};
> +

> +/**
> + * @brief FC Completion Status Codes.
> + */
> +#define SLI4_FC_WCQE_STATUS_SUCCESS		0x00
> +#define SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE	0x01
> +#define SLI4_FC_WCQE_STATUS_REMOTE_STOP		0x02
> +#define SLI4_FC_WCQE_STATUS_LOCAL_REJECT	0x03
> +#define SLI4_FC_WCQE_STATUS_NPORT_RJT		0x04
> +#define SLI4_FC_WCQE_STATUS_FABRIC_RJT		0x05
> +#define SLI4_FC_WCQE_STATUS_NPORT_BSY		0x06
> +#define SLI4_FC_WCQE_STATUS_FABRIC_BSY		0x07
> +#define SLI4_FC_WCQE_STATUS_LS_RJT		0x09
> +#define SLI4_FC_WCQE_STATUS_CMD_REJECT		0x0b
> +#define SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK	0x0c
> +#define SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED	0x11
> +#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED 0x12
> +#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC	0x13
> +#define SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE	0x14
> +#define SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE	0x15
> +#define SLI4_FC_WCQE_STATUS_DI_ERROR		0x16
> +#define SLI4_FC_WCQE_STATUS_BA_RJT		0x17
> +#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED 0x18
> +#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC	0x19
> +#define SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT	0x1a
> +#define SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST	0x1b

Here there are defines and no enums.

> +/**
> + * @brief WQE used to create an FCP initiator write.
> + */
> +enum {
> +	SLI4_IWR_WQE_DBDE	= 0x40,
> +	SLI4_IWR_WQE_XBL	= 0x8,
> +	SLI4_IWR_WQE_XC		= 0x20,
> +	SLI4_IWR_WQE_IOD	= 0x20,
> +	SLI4_IWR_WQE_HLM	= 0x10,
> +	SLI4_IWR_WQE_DNRX	= 0x10,
> +	SLI4_IWR_WQE_CCPE	= 0x80,
> +	SLI4_IWR_WQE_EAT	= 0x10,
> +	SLI4_IWR_WQE_APPID	= 0x10,
> +	SLI4_IWR_WQE_WQES	= 0x80,
> +	SLI4_IWR_WQE_PU_SHFT	= 4,
> +	SLI4_IWR_WQE_CT_SHFT	= 2,
> +	SLI4_IWR_WQE_BS_SHFT	= 4,
> +	SLI4_IWR_WQE_LEN_LOC_BIT1 = 0x80,
> +	SLI4_IWR_WQE_LEN_LOC_BIT2 = 0x1,
> +};

There are a couple of the same patterns above. There is still enough
space to add another level of indentation so that all '=' are aligned.

> +/**
> + * @brief Local Reject Reason Codes.
> + */
> +#define SLI4_FC_LOCAL_REJECT_MISSING_CONTINUE	0x01
> +#define SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT	0x02
> +#define SLI4_FC_LOCAL_REJECT_INTERNAL_ERROR	0x03
> +#define SLI4_FC_LOCAL_REJECT_INVALID_RPI	0x04
> +#define SLI4_FC_LOCAL_REJECT_NO_XRI		0x05
> +#define SLI4_FC_LOCAL_REJECT_ILLEGAL_COMMAND	0x06
> +#define SLI4_FC_LOCAL_REJECT_XCHG_DROPPED	0x07
> +#define SLI4_FC_LOCAL_REJECT_ILLEGAL_FIELD	0x08
> +#define SLI4_FC_LOCAL_REJECT_NO_ABORT_MATCH	0x0c
> +#define SLI4_FC_LOCAL_REJECT_TX_DMA_FAILED	0x0d
> +#define SLI4_FC_LOCAL_REJECT_RX_DMA_FAILED	0x0e
> +#define SLI4_FC_LOCAL_REJECT_ILLEGAL_FRAME	0x0f
> +#define SLI4_FC_LOCAL_REJECT_NO_RESOURCES	0x11
> +#define SLI4_FC_LOCAL_REJECT_FCP_CONF_FAILURE	0x12
> +#define SLI4_FC_LOCAL_REJECT_ILLEGAL_LENGTH	0x13
> +#define SLI4_FC_LOCAL_REJECT_UNSUPPORTED_FEATURE 0x14
> +#define SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS	0x15
> +#define SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED	0x16
> +#define SLI4_FC_LOCAL_REJECT_RCV_BUFFER_TIMEOUT	0x17
> +#define SLI4_FC_LOCAL_REJECT_LOOP_OPEN_FAILURE	0x18
> +#define SLI4_FC_LOCAL_REJECT_LINK_DOWN		0x1a
> +#define SLI4_FC_LOCAL_REJECT_CORRUPTED_DATA	0x1b
> +#define SLI4_FC_LOCAL_REJECT_CORRUPTED_RPI	0x1c
> +#define SLI4_FC_LOCAL_REJECT_OUTOFORDER_DATA	0x1d
> +#define SLI4_FC_LOCAL_REJECT_OUTOFORDER_ACK	0x1e
> +#define SLI4_FC_LOCAL_REJECT_DUP_FRAME		0x1f
> +#define SLI4_FC_LOCAL_REJECT_LINK_CONTROL_FRAME	0x20
> +#define SLI4_FC_LOCAL_REJECT_BAD_HOST_ADDRESS	0x21
> +#define SLI4_FC_LOCAL_REJECT_MISSING_HDR_BUFFER	0x23
> +#define SLI4_FC_LOCAL_REJECT_MSEQ_CHAIN_CORRUPTED 0x24
> +#define SLI4_FC_LOCAL_REJECT_ABORTMULT_REQUESTED 0x25
> +#define SLI4_FC_LOCAL_REJECT_BUFFER_SHORTAGE	0x28
> +#define SLI4_FC_LOCAL_REJECT_RCV_XRIBUF_WAITING	0x29
> +#define SLI4_FC_LOCAL_REJECT_INVALID_VPI	0x2e
> +#define SLI4_FC_LOCAL_REJECT_MISSING_XRIBUF	0x30
> +#define SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET	0x40
> +#define SLI4_FC_LOCAL_REJECT_MISSING_RELOFFSET	0x41
> +#define SLI4_FC_LOCAL_REJECT_INSUFF_BUFFERSPACE	0x42
> +#define SLI4_FC_LOCAL_REJECT_MISSING_SI		0x43
> +#define SLI4_FC_LOCAL_REJECT_MISSING_ES		0x44
> +#define SLI4_FC_LOCAL_REJECT_INCOMPLETE_XFER	0x45
> +#define SLI4_FC_LOCAL_REJECT_SLER_FAILURE	0x46
> +#define SLI4_FC_LOCAL_REJECT_SLER_CMD_RCV_FAILURE 0x47
> +#define SLI4_FC_LOCAL_REJECT_SLER_REC_RJT_ERR	0x48
> +#define SLI4_FC_LOCAL_REJECT_SLER_REC_SRR_RETRY_ERR 0x49
> +#define SLI4_FC_LOCAL_REJECT_SLER_SRR_RJT_ERR	0x4a
> +#define SLI4_FC_LOCAL_REJECT_SLER_RRQ_RJT_ERR	0x4c
> +#define SLI4_FC_LOCAL_REJECT_SLER_RRQ_RETRY_ERR	0x4d
> +#define SLI4_FC_LOCAL_REJECT_SLER_ABTS_ERR	0x4e

There are a bunch of values that are not aligned. Having another tab wouldn't hurt.

> +
> +enum {
> +	SLI4_RACQE_RQ_EL_INDX = 0xfff,
> +	SLI4_RACQE_FCFI = 0x3f,
> +	SLI4_RACQE_HDPL = 0x3f,
> +	SLI4_RACQE_RQ_ID = 0xffc0,
> +};

And here the values are not aligned.

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2019-10-23 21:55 ` [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
@ 2019-10-25 11:19   ` Daniel Wagner
  2019-10-25 12:20     ` Steffen Maier
  2019-10-25 22:42     ` James Smart
  0 siblings, 2 replies; 54+ messages in thread
From: Daniel Wagner @ 2019-10-25 11:19 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, Ram Vegesna

Hi James,

On Wed, Oct 23, 2019 at 02:55:28PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds definitions for SLI-4 mailbox commands
> and responses.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc_sli/sli4.h | 1996 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 1996 insertions(+)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index ebc6a67e9c8c..b36d67abf219 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -2264,4 +2264,2000 @@ struct sli4_fc_xri_aborted_cqe_s {
>  #define SLI4_ELS_REQUEST64_CMD_NON_FABRIC	0x0c
>  #define SLI4_ELS_REQUEST64_CMD_FABRIC		0x0d
>  
> +#define SLI_PAGE_SIZE		(1 << 12)	/* 4096 */

So SLI_PAGE_SIZE is fixed and can't be changed...

> +#define SLI_SUB_PAGE_MASK	(SLI_PAGE_SIZE - 1)
> +#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)
> +
> +#define SLI4_BMBX_TIMEOUT_MSEC		30000
> +#define SLI4_FW_READY_TIMEOUT_MSEC	30000
> +
> +#define SLI4_BMBX_DELAY_US 1000 /* 1 ms */
> +#define SLI4_INIT_PORT_DELAY_US 10000 /* 10 ms */
> +
> +static inline u32
> +sli_page_count(size_t bytes, u32 page_size)

... and callers of this function pass in SLI_PAGE_SIZE.
> +{
> +	u32	mask = page_size - 1;
> +	u32	shift = 0;
> +
> +	switch (page_size) {
> +	case 4096:
> +		shift = 12;
> +		break;
> +	case 8192:
> +		shift = 13;
> +		break;
> +	case 16384:
> +		shift = 14;
> +		break;
> +	case 32768:
> +		shift = 15;
> +		break;
> +	case 65536:
> +		shift = 16;
> +		break;
> +	default:
> +		return 0;
> +	}

What about using __ffs(page_size)? But...

> +
> +	return (bytes + mask) >> shift;

... mask and shift could just be defined like SLI_PAGE_SIZE and we
save a few instructions. Unless SLI_PAGE_SIZE will become dynamic in
the future.
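
E.g. (ignoring the validation of page_size):

	return (bytes + page_size - 1) >> __ffs(page_size);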

> +}
> +
> +/*************************************************************************
> + * SLI-4 mailbox command formats and definitions
> + */
> +
> +struct sli4_mbox_command_header_s {
> +	u8	resvd0;
> +	u8	command;
> +	__le16	status;	/* Port writes to indicate success/fail */
> +};
> +
> +enum {
> +	MBX_CMD_CONFIG_LINK	= 0x07,
> +	MBX_CMD_DUMP		= 0x17,
> +	MBX_CMD_DOWN_LINK	= 0x06,
> +	MBX_CMD_INIT_LINK	= 0x05,
> +	MBX_CMD_INIT_VFI	= 0xa3,
> +	MBX_CMD_INIT_VPI	= 0xa4,
> +	MBX_CMD_POST_XRI	= 0xa7,
> +	MBX_CMD_RELEASE_XRI	= 0xac,
> +	MBX_CMD_READ_CONFIG	= 0x0b,
> +	MBX_CMD_READ_STATUS	= 0x0e,
> +	MBX_CMD_READ_NVPARMS	= 0x02,
> +	MBX_CMD_READ_REV	= 0x11,
> +	MBX_CMD_READ_LNK_STAT	= 0x12,
> +	MBX_CMD_READ_SPARM64	= 0x8d,
> +	MBX_CMD_READ_TOPOLOGY	= 0x95,
> +	MBX_CMD_REG_FCFI	= 0xa0,
> +	MBX_CMD_REG_FCFI_MRQ	= 0xaf,
> +	MBX_CMD_REG_RPI		= 0x93,
> +	MBX_CMD_REG_RX_RQ	= 0xa6,
> +	MBX_CMD_REG_VFI		= 0x9f,
> +	MBX_CMD_REG_VPI		= 0x96,
> +	MBX_CMD_RQST_FEATURES	= 0x9d,
> +	MBX_CMD_SLI_CONFIG	= 0x9b,
> +	MBX_CMD_UNREG_FCFI	= 0xa2,
> +	MBX_CMD_UNREG_RPI	= 0x14,
> +	MBX_CMD_UNREG_VFI	= 0xa1,
> +	MBX_CMD_UNREG_VPI	= 0x97,
> +	MBX_CMD_WRITE_NVPARMS	= 0x03,
> +	MBX_CMD_CFG_AUTO_XFER_RDY = 0xAD,
> +
> +	MBX_STATUS_SUCCESS	= 0x0000,
> +	MBX_STATUS_FAILURE	= 0x0001,
> +	MBX_STATUS_RPI_NOT_REG	= 0x1400,
> +};
> +
> +/**
> + * @brief CONFIG_LINK
> + */
> +enum {
> +	SLI4_CFG_LINK_BBSCN = 0xf00,
> +	SLI4_CFG_LINK_CSCN  = 0x1000,
> +};
> +
> +struct sli4_cmd_config_link_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	u8		maxbbc;		/* Max buffer-to-buffer credit */

Why stop documenting the members here?

> +	u8		rsvd5;
> +	u8		rsvd6;
> +	u8		rsvd7;
> +	u8		alpa;
> +	__le16		n_port_id;
> +	u8		rsvd11;
> +	__le32		rsvd12;
> +	__le32		e_d_tov;
> +	__le32		lp_tov;
> +	__le32		r_a_tov;
> +	__le32		r_t_tov;
> +	__le32		al_tov;
> +	__le32		rsvd36;
> +	/*
> +	 * Buffer-to-buffer state change number
> +	 * Configure BBSCN
> +	 */
> +	__le32		bbscn_dword;
> +};
> +
> +/**
> + * @brief DUMP Type 4
> + */
> +enum {
> +	SLI4_DUMP4_TYPE = 0xf,
> +};
> +
> +#define SLI4_WKI_TAG_SAT_TEM 0x1040
> +
> +struct sli4_cmd_dump4_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le32		type_dword;
> +	__le16		wki_selection;
> +	__le16		rsvd10;
> +	__le32		rsvd12;
> +	__le32		returned_byte_cnt;
> +	__le32		resp_data[59];
> +};
> +
> +/* INIT_LINK - initialize the link for a FC port */
> +#define FC_TOPOLOGY_FCAL	0
> +#define FC_TOPOLOGY_P2P		1
> +
> +#define SLI4_INIT_LINK_F_LOOP_BACK	(1 << 0)
> +#define SLI4_INIT_LINK_F_UNFAIR		(1 << 6)
> +#define SLI4_INIT_LINK_F_NO_LIRP	(1 << 7)
> +#define SLI4_INIT_LINK_F_LOOP_VALID_CHK	(1 << 8)
> +#define SLI4_INIT_LINK_F_NO_LISA	(1 << 9)
> +#define SLI4_INIT_LINK_F_FAIL_OVER	(1 << 10)
> +#define SLI4_INIT_LINK_F_NO_AUTOSPEED	(1 << 11)
> +#define SLI4_INIT_LINK_F_PICK_HI_ALPA	(1 << 15)
> +
> +#define SLI4_INIT_LINK_F_P2P_ONLY	1
> +#define SLI4_INIT_LINK_F_FCAL_ONLY	2
> +
> +#define SLI4_INIT_LINK_F_FCAL_FAIL_OVER	0
> +#define SLI4_INIT_LINK_F_P2P_FAIL_OVER	1
> +
> +enum {
> +	SLI4_INIT_LINK_SEL_RESET_AL_PA = 0xff,
> +	SLI4_INIT_LINK_FLAG_LOOPBACK = 0x1,
> +	SLI4_INIT_LINK_FLAG_TOPOLOGY = 0x6,
> +	SLI4_INIT_LINK_FLAG_UNFAIR   = 0x40,
> +	SLI4_INIT_LINK_FLAG_SKIP_LIRP_LILP = 0x80,
> +	SLI4_INIT_LINK_FLAG_LOOP_VALIDITY = 0x100,
> +	SLI4_INIT_LINK_FLAG_SKIP_LISA = 0x200,
> +	SLI4_INIT_LINK_FLAG_EN_TOPO_FAILOVER = 0x400,
> +	SLI4_INIT_LINK_FLAG_FIXED_SPEED = 0x800,
> +	SLI4_INIT_LINK_FLAG_SEL_HIGHTEST_AL_PA = 0x8000,
> +};
> +
> +struct sli4_cmd_init_link_s {
> +	struct sli4_mbox_command_header_s       hdr;
> +	__le32	sel_reset_al_pa_dword;
> +	__le32	flags0;
> +	__le32	link_speed_sel_code;
> +#define FC_LINK_SPEED_1G		1
> +#define FC_LINK_SPEED_2G		2
> +#define FC_LINK_SPEED_AUTO_1_2		3
> +#define FC_LINK_SPEED_4G		4
> +#define FC_LINK_SPEED_AUTO_4_1		5
> +#define FC_LINK_SPEED_AUTO_4_2		6
> +#define FC_LINK_SPEED_AUTO_4_2_1	7
> +#define FC_LINK_SPEED_8G		8
> +#define FC_LINK_SPEED_AUTO_8_1		9
> +#define FC_LINK_SPEED_AUTO_8_2		10
> +#define FC_LINK_SPEED_AUTO_8_2_1	11
> +#define FC_LINK_SPEED_AUTO_8_4		12
> +#define FC_LINK_SPEED_AUTO_8_4_1	13
> +#define FC_LINK_SPEED_AUTO_8_4_2	14
> +#define FC_LINK_SPEED_10G		16
> +#define FC_LINK_SPEED_16G		17
> +#define FC_LINK_SPEED_AUTO_16_8_4	18
> +#define FC_LINK_SPEED_AUTO_16_8		19
> +#define FC_LINK_SPEED_32G		20
> +#define FC_LINK_SPEED_AUTO_32_16_8	21
> +#define FC_LINK_SPEED_AUTO_32_16	22
> +};

I would move the defines in front of the struct.

> +
> +/**
> + * @brief INIT_VFI - initialize the VFI resource
> + */
> +enum {
> +	SLI4_INIT_VFI_FLAG_VP = 0x1000,		/* DW1W1 */
> +	SLI4_INIT_VFI_FLAG_VF = 0x2000,
> +	SLI4_INIT_VFI_FLAG_VT = 0x4000,
> +	SLI4_INIT_VFI_FLAG_VR = 0x8000,
> +
> +	SLI4_INIT_VFI_VFID	 = 0x1fff,	/* DW3W0 */
> +	SLI4_INIT_VFI_PRI	 = 0xe000,
> +
> +	SLI4_INIT_VFI_HOP_COUNT = 0xff000000,	/* DW4 */
> +};

I would align all the '='.

> +
> +struct sli4_cmd_init_vfi_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		vfi;
> +	__le16		flags0_word;
> +	__le16		fcfi;
> +	__le16		vpi;
> +	__le32		vf_id_pri_dword;
> +	__le32		hop_cnt_dword;
> +};
> +
> +/**
> + * @brief INIT_VPI - initialize the VPI resource
> + */
> +struct sli4_cmd_init_vpi_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		vpi;
> +	__le16		vfi;
> +};
> +
> +/**
> + * @brief POST_XRI - post XRI resources to the SLI Port
> + */
> +enum {
> +	SLI4_POST_XRI_COUNT	= 0xfff,	/* DW1W1 */
> +	SLI4_POST_XRI_FLAG_ENX	= 0x1000,
> +	SLI4_POST_XRI_FLAG_DL	= 0x2000,
> +	SLI4_POST_XRI_FLAG_DI	= 0x4000,
> +	SLI4_POST_XRI_FLAG_VAL	= 0x8000,
> +};
> +
> +struct sli4_cmd_post_xri_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		xri_base;
> +	__le16		xri_count_flags;
> +};
> +
> +/**
> + * @brief RELEASE_XRI - Release XRI resources from the SLI Port
> + */
> +enum {
> +	SLI4_RELEASE_XRI_REL_XRI_CNT	= 0x1f,	/* DW1W0 */
> +	SLI4_RELEASE_XRI_COUNT		= 0x1f,	/* DW1W1 */
> +};
> +
> +struct sli4_cmd_release_xri_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		rel_xri_count_word;
> +	__le16		xri_count_word;
> +
> +	struct {
> +		__le16	xri_tag0;
> +		__le16	xri_tag1;
> +	} xri_tbl[62];
> +};
> +
> +/**
> + * @brief READ_CONFIG - read SLI port configuration parameters
> + */
> +struct sli4_cmd_read_config_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +};
> +
> +enum {
> +	SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000,	/* DW1 */
> +	SLI4_READ_CFG_RESP_TOPOLOGY = 0xff000000,	/* DW2 */
> +};
> +
> +struct sli4_rsp_read_config_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le32		ext_dword;
> +	__le32		topology_dword;
> +	__le32		resvd8;
> +	__le16		e_d_tov;
> +	__le16		resvd14;
> +	__le32		resvd16;
> +	__le16		r_a_tov;
> +	__le16		resvd22;
> +	__le32		resvd24;
> +	__le32		resvd28;
> +	__le16		lmt;
> +	__le16		resvd34;
> +	__le32		resvd36;
> +	__le32		resvd40;
> +	__le16		xri_base;
> +	__le16		xri_count;
> +	__le16		rpi_base;
> +	__le16		rpi_count;
> +	__le16		vpi_base;
> +	__le16		vpi_count;
> +	__le16		vfi_base;
> +	__le16		vfi_count;
> +	__le16		resvd60;
> +	__le16		fcfi_count;
> +	__le16		rq_count;
> +	__le16		eq_count;
> +	__le16		wq_count;
> +	__le16		cq_count;
> +	__le32		pad[45];
> +};
> +
> +#define SLI4_READ_CFG_TOPO_FC		0x1	/** FC topology unknown */
> +#define SLI4_READ_CFG_TOPO_FC_DA	0x2 /* FC Direct Attach (non FC-AL) */
> +#define SLI4_READ_CFG_TOPO_FC_AL	0x3	/** FC-AL topology */

The comments are not aligned.

> + * @brief READ_NVPARMS - read SLI port configuration parameters
> + */
> +
> +enum {
> +	SLI4_READ_NVPARAMS_HARD_ALPA	  = 0xff,
> +	SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00,
> +};
> +
> +struct sli4_cmd_read_nvparms_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le32		resvd0;
> +	__le32		resvd4;
> +	__le32		resvd8;
> +	__le32		resvd12;
> +	u8		wwpn[8];
> +	u8		wwnn[8];
> +	__le32		hard_alpa_d_id;
> +};
> +
> +/**
> + * @brief WRITE_NVPARMS - write SLI port configuration parameters
> + */
> +struct sli4_cmd_write_nvparms_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le32		resvd0;
> +	__le32		resvd4;
> +	__le32		resvd8;
> +	__le32		resvd12;
> +	u8		wwpn[8];
> +	u8		wwnn[8];
> +	__le32		hard_alpa_d_id;
> +};
> +
> +/**
> + * @brief READ_REV - read the Port revision levels
> + */
> +enum {
> +	SLI4_READ_REV_FLAG_SLI_LEVEL = 0xf,
> +	SLI4_READ_REV_FLAG_FCOEM	= 0x10,
> +	SLI4_READ_REV_FLAG_CEEV	= 0x60,
> +	SLI4_READ_REV_FLAG_VPD	= 0x2000,
> +
> +	SLI4_READ_REV_AVAILABLE_LENGTH = 0xffffff,
> +};

Also here I would align the '='.

> +
> +struct sli4_cmd_read_rev_s {
> +	struct sli4_mbox_command_header_s hdr;
> +	__le16		resvd0;
> +	__le16		flags0_word;
> +	__le32		first_hw_rev;
> +	__le32		second_hw_rev;
> +	__le32		resvd12;
> +	__le32		third_hw_rev;
> +	u8		fc_ph_low;
> +	u8		fc_ph_high;
> +	u8		feature_level_low;
> +	u8		feature_level_high;
> +	__le32		resvd24;
> +	__le32		first_fw_id;
> +	u8		first_fw_name[16];
> +	__le32		second_fw_id;
> +	u8		second_fw_name[16];
> +	__le32		rsvd18[30];
> +	__le32		available_length_dword;
> +	struct sli4_dmaaddr_s hostbuf;
> +	__le32		returned_vpd_length;
> +	__le32		actual_vpd_length;
> +};
> +
> +/**
> + * @brief READ_SPARM64 - read the Port service parameters
> + */
> +struct sli4_cmd_read_sparm64_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le32		resvd0;
> +	__le32		resvd4;
> +	struct sli4_bde_s	bde_64;
> +	__le16		vpi;
> +	__le16		resvd22;
> +	__le16		port_name_start;
> +	__le16		port_name_len;
> +	__le16		node_name_start;
> +	__le16		node_name_len;
> +};

I would also indent all members (except hdr, I guess) to the same level.

> +
> +#define SLI4_READ_SPARM64_VPI_DEFAULT	0
> +#define SLI4_READ_SPARM64_VPI_SPECIAL	U16_MAX
> +
> +#define SLI4_READ_SPARM64_WWPN_OFFSET	(4 * sizeof(u32))
> +#define SLI4_READ_SPARM64_WWNN_OFFSET	(SLI4_READ_SPARM64_WWPN_OFFSET \
> +					+ sizeof(uint64_t))
> +/**
> + * @brief READ_TOPOLOGY - read the link event information
> + */
> +enum {
> +	SLI4_READTOPO_ATTEN_TYPE	= 0xff,		/* DW2 */
> +	SLI4_READTOPO_FLAG_IL		= 0x100,
> +	SLI4_READTOPO_FLAG_PB_RECVD	= 0x200,
> +
> +	SLI4_READTOPO_LINKSTATE_RECV	= 0x3,
> +	SLI4_READTOPO_LINKSTATE_TRANS	= 0xc,
> +	SLI4_READTOPO_LINKSTATE_MACHINE	= 0xf0,
> +	SLI4_READTOPO_LINKSTATE_SPEED	= 0xff00,
> +	SLI4_READTOPO_LINKSTATE_TF	= 0x40000000,
> +	SLI4_READTOPO_LINKSTATE_LU	= 0x80000000,
> +
> +	SLI4_READTOPO_SCN_BBSCN		= 0xf,		/* DW9W1B0 */
> +	SLI4_READTOPO_SCN_CBBSCN	= 0xf0,
> +
> +	SLI4_READTOPO_R_T_TOV		= 0x1ff,	/* DW10WO */
> +	SLI4_READTOPO_AL_TOV		= 0xf000,
> +
> +	SLI4_READTOPO_PB_FLAG		= 0x80,
> +
> +	SLI4_READTOPO_INIT_N_PORTID	= 0xffffff,
> +};
> +
> +struct sli4_cmd_read_topology_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le32		event_tag;
> +	__le32		dw2_attentype;
> +	u8		topology;
> +	u8		lip_type;
> +	u8		lip_al_ps;
> +	u8		al_pa_granted;
> +	struct sli4_bde_s	bde_loop_map;
> +	__le32		linkdown_state;
> +	__le32		currlink_state;
> +	u8		max_bbc;
> +	u8		init_bbc;
> +	u8		scn_flags;
> +	u8		rsvd39;
> +	__le16		dw10w0_al_rt_tov;
> +	__le16		lp_tov;
> +	u8		acquired_al_pa;
> +	u8		pb_flags;
> +	__le16		specified_al_pa;
> +	__le32		dw12_init_n_port_id;
> +};

Also here, same indent level for all (except hdr, I suppose).

> +
> +#define SLI4_MIN_LOOP_MAP_BYTES	128
> +
> +#define SLI4_READ_TOPOLOGY_LINK_UP	0x1
> +#define SLI4_READ_TOPOLOGY_LINK_DOWN	0x2
> +#define SLI4_READ_TOPOLOGY_LINK_NO_ALPA	0x3
> +
> +#define SLI4_READ_TOPOLOGY_UNKNOWN	0x0
> +#define SLI4_READ_TOPOLOGY_NPORT	0x1
> +#define SLI4_READ_TOPOLOGY_FC_AL	0x2
> +
> +#define SLI4_READ_TOPOLOGY_SPEED_NONE	0x00
> +#define SLI4_READ_TOPOLOGY_SPEED_1G	0x04
> +#define SLI4_READ_TOPOLOGY_SPEED_2G	0x08
> +#define SLI4_READ_TOPOLOGY_SPEED_4G	0x10
> +#define SLI4_READ_TOPOLOGY_SPEED_8G	0x20
> +#define SLI4_READ_TOPOLOGY_SPEED_10G	0x40
> +#define SLI4_READ_TOPOLOGY_SPEED_16G	0x80
> +#define SLI4_READ_TOPOLOGY_SPEED_32G	0x90
> +
> +/**
> + * @brief REG_FCFI - activate a FC Forwarder
> + */
> +struct sli4_cmd_reg_fcfi_rq_cfg {
> +	u8	r_ctl_mask;
> +	u8	r_ctl_match;
> +	u8	type_mask;
> +	u8	type_match;
> +};
> +
> +enum {
> +	SLI4_REGFCFI_VLAN_TAG		= 0xfff,
> +	SLI4_REGFCFI_VLANTAG_VALID	= 0x1000,
> +};
> +
> +#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG	4
> +struct sli4_cmd_reg_fcfi_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		fcf_index;
> +	__le16		fcfi;
> +	__le16		rqid1;
> +	__le16		rqid0;
> +	__le16		rqid3;
> +	__le16		rqid2;
> +	struct sli4_cmd_reg_fcfi_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];

Below, in struct sli4_cmd_reg_fcfi_mrq_s, the member is on a new line;
maybe do it here too? Consistency :)

> +	__le32		dw8_vlan;
> +};
> +
> +#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG	4
> +#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ	32
> +#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE		0
> +#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE		1
> +
> +enum {
> +	SLI4_REGFCFI_MRQ_VLAN_TAG	= 0xfff,
> +	SLI4_REGFCFI_MRQ_VLANTAG_VALID	= 0x1000,
> +	SLI4_REGFCFI_MRQ_MODE		= 0x2000,
> +
> +	SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS	= 0xff,
> +	SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00,
> +	SLI4_REGFCFI_MRQ_RQ_SEL_POLICY	= 0xf000,
> +};
> +
> +struct sli4_cmd_reg_fcfi_mrq_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		fcf_index;
> +	__le16		fcfi;
> +	__le16		rqid1;
> +	__le16		rqid0;
> +	__le16		rqid3;
> +	__le16		rqid2;
> +	struct sli4_cmd_reg_fcfi_rq_cfg
> +				rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
> +	__le32		dw8_vlan;
> +	__le32		dw9_mrqflags;
> +};
> +
> +/**
> + * @brief REG_RPI - register a Remote Port Indicator
> + */
> +enum {
> +	SLI4_REGRPI_REMOTE_N_PORTID	= 0xffffff,	/* DW2 */
> +	SLI4_REGRPI_UPD			= 0x1000000,
> +	SLI4_REGRPI_ETOW		= 0x8000000,
> +	SLI4_REGRPI_TERP		= 0x20000000,
> +	SLI4_REGRPI_CI			= 0x80000000,
> +};
> +
> +struct sli4_cmd_reg_rpi_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		rpi;
> +	__le16		rsvd2;
> +	__le32		dw2_rportid_flags;
> +	struct sli4_bde_s	bde_64;
> +	__le16		vpi;
> +	__le16		rsvd26;
> +};

Again, all members at the same indent level?

> +
> +#define SLI4_REG_RPI_BUF_LEN			0x70
> +
> +/**
> + * @brief REG_VFI - register a Virtual Fabric Indicator
> + */
> +enum {
> +	SLI4_REGVFI_VP		= 0x1000,	/* DW1 */
> +	SLI4_REGVFI_UPD		= 0x2000,
> +
> +	SLI4_REGVFI_LOCAL_N_PORTID = 0xffffff,	/* DW10 */
> +};

ditto.

> +
> +struct sli4_cmd_reg_vfi_s {
> +	struct sli4_mbox_command_header_s	hdr;
> +	__le16		vfi;
> +	__le16		dw0w1_flags;
> +	__le16		fcfi;
> +	__le16		vpi;			/* vp=TRUE */
> +	u8		wwpn[8];
> +	struct sli4_bde_s sparm;
> +	__le32		e_d_tov;
> +	__le32		r_a_tov;
> +	__le32		dw10_lportid_flags;
> +};

and here.

> +/**
> + * @brief COMMON_GET_SLI4_PARAMETERS
> + */
> +
> +#define GET_Q_CNT_METHOD(val)\
> +	((val & RSP_GET_PARAM_Q_CNT_MTHD_MASK)\
> +	>> RSP_GET_PARAM_Q_CNT_MTHD_SHFT)
> +#define GET_Q_CREATE_VERSION(val)\
> +	((val & RSP_GET_PARAM_QV_MASK)\
> +	>> RSP_GET_PARAM_QV_SHIFT)

This time there is no space in front of '\'. Does the expression not
fit on one line? It would be easier to read:

#define GET_Q_CREATE_VERSION(val) \
	((val & RSP_GET_PARAM_QV_MASK) >> RSP_GET_PARAM_QV_SHIFT)


> +#define SLI4_FUNCTION_MODE_INI_MODE 0x40
> +#define SLI4_FUNCTION_MODE_TGT_MODE 0x80
> +#define SLI4_FUNCTION_MODE_DUA_MODE      0x800

Just one space between _MODE and 0x800

> +struct sli4_rqst_cmn_read_object_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	__le32		desired_read_length_dword;
> +	__le32		read_offset;
> +	u8		object_name[104];
> +	__le32		host_buffer_descriptor_count;
> +	struct sli4_bde_s	host_buffer_descriptor[0];
> +};

Same indent for all members?

> +
> +enum {
> +	RSP_COM_READ_OBJ_EOF = 0x80000000
> +
> +};
> +
> +struct sli4_rsp_cmn_read_object_s {
> +	struct sli4_rsp_hdr_s	hdr;
> +	__le32		actual_read_length;
> +	__le32		eof_dword;
> +};

Also here?

> +/**
> + * @brief COMMON_WRITE_OBJECT
> + */
> +
> +enum {
> +	SLI4_RQ_DES_WRITE_LEN = 0xFFFFFF,
> +	SLI4_RQ_DES_WRITE_LEN_NOC = 0x40000000,
> +	SLI4_RQ_DES_WRITE_LEN_EOF = 0x80000000
> +
> +};

One newline too many, and I would also add the ',' to the last member as
was done above. Furthermore, alignment of the '='?

> +
> +struct sli4_rqst_cmn_write_object_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	__le32		desired_write_len_dword;
> +	__le32		write_offset;
> +	u8		object_name[104];
> +	__le32		host_buffer_descriptor_count;
> +	struct sli4_bde_s	host_buffer_descriptor[0];
> +};

Alignment of the members?

> +enum {
> +	RSP_CHANGE_STATUS = 0xFF
> +
> +};

One newline too many and a ',' on the member?

> +
> +struct sli4_rsp_cmn_write_object_s {
> +	struct sli4_rsp_hdr_s	hdr;
> +	__le32		actual_write_length;
> +	__le32		change_status_dword;
> +};

One more alignment thingy

> +
> +/**
> + * @brief COMMON_DELETE_OBJECT
> + */
> +struct sli4_rqst_cmn_delete_object_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	__le32		rsvd4;
> +	__le32		rsvd5;
> +	u8		object_name[104];
> +};
> +
> +/**
> + * @brief COMMON_READ_OBJECT_LIST
> + */
> +
> +enum {
> +	SLI4_RQ_OBJ_LIST_READ_LEN = 0xFFFFFF
> +
> +};

One newline too many and a missing ','

> +
> +struct sli4_rqst_cmn_read_object_list_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	__le32		desired_read_length_dword;
> +	__le32		read_offset;
> +	u8		object_name[104];
> +	__le32		host_buffer_descriptor_count;
> +	struct sli4_bde_s	host_buffer_descriptor[0];
> +};

Alignment of the members?

> +
> +/**
> + * @brief COMMON_SET_DUMP_LOCATION
> + */
> +
> +enum {
> +	SLI4_RQ_COM_SET_DUMP_BUFFER_LEN = 0xFFFFFF,
> +	SLI4_RQ_COM_SET_DUMP_FDB = 0x20000000,
> +	SLI4_RQ_COM_SET_DUMP_BLP = 0x40000000,
> +	SLI4_RQ_COM_SET_DUMP_QRY = 0x80000000,
> +
> +};

One newline too many

> +
> +struct sli4_rqst_cmn_set_dump_location_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	__le32		buffer_length_dword;
> +	__le32		buf_addr_low;
> +	__le32		buf_addr_high;
> +};

same comment as above

> +
> +enum {
> +	RSP_SET_DUMP_BUFFER_LEN = 0xFFFFFF
> +
> +};

same comment as above

> +
> +struct sli4_rsp_cmn_set_dump_location_s {
> +	struct sli4_rsp_hdr_s	hdr;
> +	__le32		buffer_length_dword;
> +};

same comment as above

> +
> +/**
> + * @brief COMMON_SET_SET_FEATURES
> + */
> +#define SLI4_SET_FEATURES_DIF_SEED			0x01
> +#define SLI4_SET_FEATURES_XRI_TIMER			0x03
> +#define SLI4_SET_FEATURES_MAX_PCIE_SPEED		0x04
> +#define SLI4_SET_FEATURES_FCTL_CHECK			0x05
> +#define SLI4_SET_FEATURES_FEC				0x06
> +#define SLI4_SET_FEATURES_PCIE_RECV_DETECT		0x07
> +#define SLI4_SET_FEATURES_DIF_MEMORY_MODE		0x08
> +#define SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE	0x09
> +#define SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS		0x0A
> +#define SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI	0x0C
> +#define SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE	0x0D
> +#define SLI4_SET_FEATURES_SET_FTD_XFER_HINT		0x0F
> +#define SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK		0x11
> +
> +struct sli4_rqst_cmn_set_features_s {
> +	struct sli4_rqst_hdr_s	hdr;
> +	__le32		feature;
> +	__le32		param_len;
> +	__le32		params[8];
> +};

same comment as above

> +
> +struct sli4_rqst_cmn_set_features_dif_seed_s {
> +	__le16		seed;
> +	__le16		rsvd16;
> +};
> +
> +enum {
> +	SLI4_RQ_COM_SET_T10_PI_MEM_MODEL = 0x1
> +
> +};

same comment as above

> +
> +struct sli4_rqst_cmn_set_features_t10_pi_mem_model_s {
> +	__le32		tmm_dword;
> +};
> +
> +enum {
> +	SLI4_RQ_MULTIRQ_ISR = 0x1,
> +	SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2,
> +
> +	SLI4_RQ_MULTIRQ_NUM_RQS = 0xFF,
> +	SLI4_RQ_MULTIRQ_RQ_SELECT = 0xF00
> +};

Alignment of '='?

> +
> +struct sli4_rqst_cmn_set_features_multirq_s {
> +	__le32		auto_gen_xfer_dword; /* Include Sequence Reporting */
> +					/* Auto Generate XFER-RDY Enabled */
> +	__le32		num_rqs_dword;
> +};

Alignment of the comment?

> +
> +enum {
> +	SLI4_SETFEAT_XFERRDY_T10PI_RTC	= (1 << 0),	/* DW0 */
> +	SLI4_SETFEAT_XFERRDY_T10PI_ATV	= (1 << 1),
> +	SLI4_SETFEAT_XFERRDY_T10PI_TMM	= (1 << 2),
> +	SLI4_SETFEAT_XFERRDY_T10PI_PTYPE = (0x7 << 4),
> +	SLI4_SETFEAT_XFERRDY_T10PI_BLKSIZ = (0x7 << 7),
> +};

Alignment of the '='

> +
> +struct sli4_rqst_cmn_set_features_xfer_rdy_t10pi_s {
> +	__le32		dw0_flags;
> +	__le16		app_tag;
> +	__le16		rsvd6;
> +};
> +
> +enum {
> +	SLI4_RQ_HEALTH_CHECK_ENABLE = 0x1,
> +	SLI4_RQ_HEALTH_CHECK_QUERY = 0x2
> +
> +};

Extra newline and missing ',' on the last entry.

I'll stop pointing out the same issues for the rest of this
patch. There are a few more of those alignment issues, in my opinion.

> +struct sli4_s {
> +	void	*os;
> +	struct pci_dev	*pcidev;
> +#define	SLI_PCI_MAX_REGS		6

I would move the define in front of the struct.
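
I.e. (sketch only; rest of the struct stays as-is):

#define SLI_PCI_MAX_REGS	6

struct sli4_s {
	void		*os;
	struct pci_dev	*pcidev;
	void __iomem	*reg[SLI_PCI_MAX_REGS];
	/* ... remaining members unchanged ... */
};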

> +	void __iomem *reg[SLI_PCI_MAX_REGS];
> +

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2019-10-25 11:19   ` Daniel Wagner
@ 2019-10-25 12:20     ` Steffen Maier
  2019-10-25 22:10       ` James Smart
  2019-10-25 22:42     ` James Smart
  1 sibling, 1 reply; 54+ messages in thread
From: Steffen Maier @ 2019-10-25 12:20 UTC (permalink / raw)
  To: Daniel Wagner, James Smart; +Cc: linux-scsi, Ram Vegesna

On 10/25/19 1:19 PM, Daniel Wagner wrote:
> On Wed, Oct 23, 2019 at 02:55:28PM -0700, James Smart wrote:
>> This patch continues the libefc_sli SLI-4 library population.
>>
>> This patch adds definitions for SLI-4 mailbox commands
>> and responses.
>>
>> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
>> Signed-off-by: James Smart <jsmart2021@gmail.com>
>> ---
>>   drivers/scsi/elx/libefc_sli/sli4.h | 1996 ++++++++++++++++++++++++++++++++++++
>>   1 file changed, 1996 insertions(+)

>> +#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)


>> +/**
>> + * @brief COMMON_GET_SLI4_PARAMETERS
>> + */
>> +
>> +#define GET_Q_CNT_METHOD(val)\
>> +	((val & RSP_GET_PARAM_Q_CNT_MTHD_MASK)\
>> +	>> RSP_GET_PARAM_Q_CNT_MTHD_SHFT)
>> +#define GET_Q_CREATE_VERSION(val)\
>> +	((val & RSP_GET_PARAM_QV_MASK)\
>> +	>> RSP_GET_PARAM_QV_SHIFT)
> 
> This time there is no space in front of '\'. Does the expression not
> fit on one line? It would be easier to read:
> 
> #define GET_Q_CREATE_VERSION(val) \
> 	((val & RSP_GET_PARAM_QV_MASK) >> RSP_GET_PARAM_QV_SHIFT)

Protect the macro argument in the expansion with parentheses to prevent 
unintended operator precedence during evaluation?
As with (b) of SLI_ROUND_PAGE(b) above.

(((val) &  ... ))
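
I.e., combining the one-line form with the parenthesized argument,
something like:

#define GET_Q_CREATE_VERSION(val) \
	(((val) & RSP_GET_PARAM_QV_MASK) >> RSP_GET_PARAM_QV_SHIFT)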



-- 
Mit freundlichen Gruessen / Kind regards
Steffen Maier

Linux on IBM Z Development

https://www.ibm.com/privacy/us/en/
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Matthias Hartmann
Geschaeftsfuehrung: Dirk Wittkopp
Sitz der Gesellschaft: Boeblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 04/32] elx: libefc_sli: queue create/destroy/parse routines
  2019-10-23 21:55 ` [PATCH 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
@ 2019-10-25 15:35   ` Daniel Wagner
  2019-10-25 22:24     ` James Smart
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel Wagner @ 2019-10-25 15:35 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, Ram Vegesna

Hi James,

On Wed, Oct 23, 2019 at 02:55:29PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds service routines to create mailbox commands
> and adds APIs to create/destroy/parse SLI-4 EQ, CQ, RQ and MQ queues.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/include/efc_common.h |   18 +
>  drivers/scsi/elx/libefc_sli/sli4.c    | 2155 +++++++++++++++++++++++++++++++++
>  2 files changed, 2173 insertions(+)
> 
> diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
> index dbabc4f6ee5e..62d0f3b3f936 100644
> --- a/drivers/scsi/elx/include/efc_common.h
> +++ b/drivers/scsi/elx/include/efc_common.h
> @@ -23,4 +23,22 @@ struct efc_dma_s {
>  	struct pci_dev	*pdev;
>  };
>  
> +#define efc_log_crit(efc, fmt, args...) \
> +		dev_crit(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_err(efc, fmt, args...) \
> +		dev_err(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_warn(efc, fmt, args...) \
> +		dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_info(efc, fmt, args...) \
> +		dev_info(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_test(efc, fmt, args...) \
> +		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_debug(efc, fmt, args...) \
> +		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
> +
>  #endif /* __EFC_COMMON_H__ */
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 68ccd3ad8ac8..6b62b7d8b5a4 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -24,3 +24,2158 @@ static struct sli4_asic_entry_t sli4_asic_table[] = {
>  	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
>  	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
>  };
> +
> +/*
> + * @brief Convert queue type enum (SLI_QTYPE_*) into a string.
> + */
> +static char *SLI_QNAME[] = {
> +	"Event Queue",
> +	"Completion Queue",
> +	"Mailbox Queue",
> +	"Work Queue",
> +	"Receive Queue",
> +	"Undefined"
> +};
> +
> +/**
> + * @ingroup sli
> + * @brief Write a SLI_CONFIG command to the provided buffer.
> + *

Could add some additional comments about the size and dma parameter...

> + * @param sli4 SLI context pointer.
> + * @param buf Virtual pointer to the destination buffer.
> + * @param size Buffer size, in bytes.
> + * @param length Length in bytes of attached command.
> + * @param dma DMA buffer for non-embedded commands.
> + *
> + * @return Returns the number of bytes written.
> + */
> +static void *
> +sli_config_cmd_init(struct sli4_s *sli4, void *buf,
> +		    size_t size, u32 length,
> +		    struct efc_dma_s *dma)
> +{
> +	struct sli4_cmd_sli_config_s *sli_config = NULL;
> +	u32 flags = 0;
> +
> +	if (length > sizeof(sli_config->payload.embed) && !dma) {
> +		efc_log_info(sli4, "length(%d) > payload(%ld)\n",
> +			length, sizeof(sli_config->payload.embed));
> +		return NULL;
> +	}

...this logs something but what does it tell? I suppose it has
something to do with whether the data is embedded or not.

> +	sli_config = buf;
> +
> +	memset(buf, 0, size);
> +
> +	sli_config->hdr.command = MBX_CMD_SLI_CONFIG;
> +	if (!dma) {
> +		flags |= SLI4_SLICONF_EMB;
> +		sli_config->dw1_flags = cpu_to_le32(flags);
> +		sli_config->payload_len = cpu_to_le32(length);

Could you move the last return here ...

> +	} else {

... and get rid of this else here? Easier to read in my opinion
because the control flow is not split across the else part.
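
I.e., something like this (untested, just shuffling the existing lines):

	if (!dma) {
		flags |= SLI4_SLICONF_EMB;
		sli_config->dw1_flags = cpu_to_le32(flags);
		sli_config->payload_len = cpu_to_le32(length);
		return buf + offsetof(struct sli4_cmd_sli_config_s,
				      payload.embed);
	}

	/* non-embedded path continues below unchanged, ending in
	 * return dma->virt;
	 */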

> +		flags = SLI4_SLICONF_PMDCMD_VAL_1;	/* pmd_count = 1 */
> +		flags &= ~SLI4_SLICONF_EMB;
> +		sli_config->dw1_flags = cpu_to_le32(flags);
> +
> +		sli_config->payload.mem.addr.low =
> +			cpu_to_le32(lower_32_bits(dma->phys));
> +		sli_config->payload.mem.addr.high =
> +			cpu_to_le32(upper_32_bits(dma->phys));
> +		sli_config->payload.mem.length =
> +			cpu_to_le32(dma->size & SLI4_SLICONFIG_PMD_LEN);
> +		sli_config->payload_len = cpu_to_le32(dma->size);
> +		/* save pointer to DMA for BMBX dumping purposes */
> +		sli4->bmbx_non_emb_pmd = dma;
> +		return dma->virt;
> +	}
> +
> +	return buf + offsetof(struct sli4_cmd_sli_config_s, payload.embed);
> +}
> +
> +/**
> + * @brief Write a COMMON_CREATE_CQ command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param qmem DMA memory for the queue.
> + * @param eq_id Associated EQ_ID
> + * @param ignored This parameter carries the ULP
> + * which is only used for WQ and RQs
> + *
> + * @note This creates a Version 0 message.

I wonder if this should be encoded into the function name?

> + *
> + * @return Returns 0 on success, or non-zero otherwise.
> + */
> +static int
> +sli_cmd_common_create_cq(struct sli4_s *sli4, void *buf, size_t size,
> +			 struct efc_dma_s *qmem,
> +			 u16 eq_id)
> +{
> +	struct sli4_rqst_cmn_create_cq_v2_s	*cqv2 = NULL;

too many spaces between the type and the variable name.

> +	u32 p;
> +	uintptr_t addr;
> +	u32 page_bytes = 0;
> +	u32 num_pages = 0;
> +	size_t cmd_size = 0;
> +	u32 page_size = 0;
> +	u32 n_cqe = 0;
> +	u32 dw5_flags = 0;
> +	u16 dw6w1_arm = 0;
> +
> +	/* First calculate number of pages and the mailbox cmd length */
> +	n_cqe = qmem->size / SLI4_CQE_BYTES;
> +	switch (n_cqe) {
> +	case 256:
> +	case 512:
> +	case 1024:
> +	case 2048:
> +		page_size = 1;
> +		break;
> +	case 4096:
> +		page_size = 2;
> +		break;
> +	default:
> +		return EFC_FAIL;

If it's an error code it should probably be a negative value.

I would also go with something like

	if (n_cqe <= 2048)
		page_size = 1;
	else if (n_cqe == 4096)
		page_size = 2;
	else
		return -EFC_FAIL;

since this results in smaller code and I find it simpler to read. It
is not completely equivalent since your version limits the values to
256, 512, 1024, 2048 and 4096, while my version tolerates any value up to
2048. Since the same switch pattern is repeated below (see
sli_cmd_wq_create_v1), I am not entirely sure my idea is good.

> +	}
> +	page_bytes = page_size * SLI_PAGE_SIZE;
> +	num_pages = sli_page_count(qmem->size, page_bytes);
> +
> +	cmd_size = CFG_RQST_CMDSZ(cmn_create_cq_v2) + SZ_DMAADDR * num_pages;
> +
> +	cqv2 = sli_config_cmd_init(sli4, buf, size, cmd_size, NULL);
> +	if (!cqv2)
> +		return EFC_FAIL;
> +
> +	cqv2->hdr.opcode = CMN_CREATE_CQ;
> +	cqv2->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
> +	cqv2->hdr.dw3_version = cpu_to_le32(CMD_V2);

Is this now the command version? Shouldn't it be V0 as the
documentation says?

> +	cmd_size = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_v2,
> +					 SZ_DMAADDR * num_pages);
> +	cqv2->hdr.request_length = cmd_size;
> +	cqv2->page_size = page_size;
> +
> +	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.3) */
> +	cqv2->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages ||
> +	    num_pages > SLI4_COMMON_CREATE_CQ_V2_MAX_PAGES) {

I would write the condition on one line. There is still enough space left on the line.

> +		return EFC_FAIL;
> +	}
> +
> +	switch (num_pages) {
> +	case 1:
> +		dw5_flags |= CQ_CNT_VAL(256);
> +		break;
> +	case 2:
> +		dw5_flags |= CQ_CNT_VAL(512);
> +		break;
> +	case 4:
> +		dw5_flags |= CQ_CNT_VAL(1024);
> +		break;
> +	case 8:
> +		dw5_flags |= CQ_CNT_VAL(LARGE);
> +		cqv2->cqe_count = cpu_to_le16(n_cqe);
> +		break;
> +	default:
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return -1;

return -EFC_FAIL; ?

> +	}
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		dw5_flags |= CREATE_CQV2_AUTOVALID;
> +
> +	dw5_flags |= CREATE_CQV2_EVT;
> +	dw5_flags |= CREATE_CQV2_VALID;
> +
> +	cqv2->dw5_flags = cpu_to_le32(dw5_flags);
> +	cqv2->dw6w1_arm = cpu_to_le16(dw6w1_arm);
> +	cqv2->eq_id = cpu_to_le16(eq_id);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += page_bytes) {

This fits on one line, exactly 80 chars :)

> +		cqv2->page_phys_addr[p].low =
> +			cpu_to_le32(lower_32_bits(addr));
> +		cqv2->page_phys_addr[p].high =
> +			cpu_to_le32(upper_32_bits(addr));
> +	}

Also these two assignments fit into 80 chars.

> +
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * @brief Write a COMMON_DESTROY_CQ command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param cq_id CQ ID
> + *
> + * @note This creates a Version 0 message.
> + *
> + * @return Returns 0 on success, or non-zero otherwise.
> + */
> +static int
> +sli_cmd_common_destroy_cq(struct sli4_s *sli4, void *buf,
> +			  size_t size, u16 cq_id)
> +{
> +	struct sli4_rqst_cmn_destroy_cq_s *cq = NULL;
> +
> +	/* Payload length must accommodate both request and response */

Is this common? Is this true for all commands? If so, having this
kind of information at the beginning of the file, explaining some of
the inner workings of the code, would certainly help.

> +	cq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_destroy_cq), NULL);
> +	if (!cq)
> +		return EFC_FAIL;

return -EFC_FAIL ?

(I'll stop reporting this one now)

> +
> +	cq->hdr.opcode = CMN_DESTROY_CQ;
> +	cq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
> +	cq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_destroy_cq);
> +	cq->cq_id = cpu_to_le16(cq_id);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * @brief Write a COMMON_CREATE_EQ command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param qmem DMA memory for the queue.
> + * @param ignored1 Ignored
> + * (used for consistency among queue creation functions).
> + * @param ignored2 Ignored
> + * (used for consistency among queue creation functions).

There is no ignored2 in the function declarations.

> + *
> + * @note Other queue creation routines use the last parameter to pass in
> + * the associated Q_ID and ULP. EQ doesn't have an associated queue or ULP,
> + * so these parameters are ignored
> + *
> + * @note This creates a Version 0 message
> + *
> + * @return Returns 0 on success, or non-zero otherwise.
> + */
> +static int
> +sli_cmd_common_create_eq(struct sli4_s *sli4, void *buf, size_t size,
> +			 struct efc_dma_s *qmem,
> +			 u16 ignored1)

If you want to be consistent with the other function declarations,
ignored1 should go on the same line as qmem.

> +{
> +	struct sli4_rqst_cmn_create_eq_s *eq = NULL;

No need to set eq to NULL because it will be set as the first thing in the function

> +	u32 p;
> +	uintptr_t addr;
> +	u16 num_pages;
> +	u32 dw5_flags = 0;
> +	u32 dw6_flags = 0;
> +
> +	eq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_create_eq), NULL);

You should test for NULL since sli_config_cmd_init() can return NULL.
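
E.g., mirroring what the other create/destroy helpers already do:

	eq = sli_config_cmd_init(sli4, buf, size,
				 SLI_CONFIG_PYLD_LENGTH(cmn_create_eq), NULL);
	if (!eq)
		return EFC_FAIL;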

> +
> +	eq->hdr.opcode = CMN_CREATE_EQ;
> +	eq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		eq->hdr.dw3_version = cpu_to_le32(CMD_V2);

Same question on the command version number as above.

> +
> +	eq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_create_eq);
> +
> +	/* valid values for number of pages: 1, 2, 4 (sec 4.4.3) */
> +	num_pages = qmem->size / SLI_PAGE_SIZE;
> +	eq->num_pages = cpu_to_le16(num_pages);
> +
> +	switch (num_pages) {
> +	case 1:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(1024);
> +		break;
> +	case 2:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(2048);
> +		break;
> +	case 4:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(4096);
> +		break;
> +	default:
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		dw5_flags |= CREATE_EQ_AUTOVALID;
> +
> +	dw5_flags |= CREATE_EQ_VALID;
> +	dw6_flags &= (~CREATE_EQ_ARM);
> +	eq->dw5_flags = cpu_to_le32(dw5_flags);
> +	eq->dw6_flags = cpu_to_le32(dw6_flags);
> +	eq->dw7_delaymulti = cpu_to_le32(CREATE_EQ_DELAYMULTI);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += SLI_PAGE_SIZE) {

One line?

> +		eq->page_address[p].low = cpu_to_le32(lower_32_bits(addr));
> +		eq->page_address[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * @brief Write a COMMON_DESTROY_EQ command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param eq_id Queue ID to destroy.
> + *
> + * @note Other queue creation routines use the last parameter to pass in
> + * the associated Q_ID. EQ doesn't have an associated queue so this
> + * parameter is ignored.

Too much copy paste?

> + *
> + * @note This creates a Version 0 message.
> + *
> + * @return Returns zero for success and non-zero for failure.
> + */
> +static int
> +sli_md_common_destroy_eq(struct sli4_s *sli4, void *buf, size_t size,
> +			  u16 eq_id)
> +{
> +	struct sli4_rqst_cmn_destroy_eq_s *eq = NULL;

No need to initialize eq

> +
> +	eq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_destroy_eq), NULL);
> +	if (!eq)
> +		return EFC_FAIL;
> +
> +	eq->hdr.opcode = CMN_DESTROY_EQ;
> +	eq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
> +	eq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_destroy_eq);
> +
> +	eq->eq_id = cpu_to_le16(eq_id);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * @brief Write a COMMON_CREATE_MQ_EXT command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param qmem DMA memory for the queue.
> + * @param cq_id Associated CQ_ID.
> + * @param ignored This parameter carries the ULP
> + * which is only used for WQ and RQs
> + *
> + * @note This creates a Version 0 message.
> + *
> + * @return Returns zero for success and non-zero for failure.
> + */
> +static int
> +sli_cmd_common_create_mq_ext(struct sli4_s *sli4, void *buf, size_t size,
> +			     struct efc_dma_s *qmem,
> +			     u16 cq_id)
> +{
> +	struct sli4_rqst_cmn_create_mq_ext_s	*mq = NULL;

Too many spaces between type and variable.

> +	u32 p;
> +	uintptr_t addr;
> +	u32 num_pages;
> +	u16 dw6w1_flags = 0;
> +
> +	mq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_create_mq_ext),
> +				 NULL);
> +	if (!mq)
> +		return EFC_FAIL;
> +
> +	mq->hdr.opcode = CMN_CREATE_MQ_EXT;
> +	mq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
> +	mq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_create_mq_ext);
> +
> +	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.12) */
> +	num_pages = qmem->size / SLI_PAGE_SIZE;
> +	mq->num_pages = cpu_to_le16(num_pages);
> +	switch (num_pages) {
> +	case 1:
> +		dw6w1_flags |= SLI4_MQE_SIZE_16;
> +		break;
> +	case 2:
> +		dw6w1_flags |= SLI4_MQE_SIZE_32;
> +		break;
> +	case 4:
> +		dw6w1_flags |= SLI4_MQE_SIZE_64;
> +		break;
> +	case 8:
> +		dw6w1_flags |= SLI4_MQE_SIZE_128;
> +		break;
> +	default:
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	mq->async_event_bitmap = cpu_to_le32(SLI4_ASYNC_EVT_FC_ALL);
> +
> +	if (sli4->mq_create_version) {
> +		mq->cq_id_v1 = cpu_to_le16(cq_id);
> +		mq->hdr.dw3_version = cpu_to_le32(CMD_V1);
> +	} else {
> +		dw6w1_flags |= (cq_id << CREATE_MQEXT_CQID_SHIFT);
> +	}
> +	mq->dw7_val = cpu_to_le32(CREATE_MQEXT_VAL);
> +
> +	mq->dw6w1_flags = cpu_to_le16(dw6w1_flags);
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += SLI_PAGE_SIZE) {

On one line?

> +		mq->page_phys_addr[p].low =
> +			cpu_to_le32(lower_32_bits(addr));
> +		mq->page_phys_addr[p].high =
> +			cpu_to_le32(upper_32_bits(addr));

And these as well?

> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * @brief Write a COMMON_DESTROY_MQ command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param mq_id MQ ID
> + *
> + * @note This creates a Version 0 message.
> + *
> + * @return Returns zero for success and non-zero for failure.
> + */
> +static int
> +sli_cmd_common_destroy_mq(struct sli4_s *sli4, void *buf, size_t size,
> +			  u16 mq_id)
> +{
> +	struct sli4_rqst_cmn_destroy_mq_s *mq = NULL;
> +
> +	mq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_destroy_mq), NULL);
> +	if (!mq)
> +		return EFC_FAIL;
> +
> +	mq->hdr.opcode = CMN_DESTROY_MQ;
> +	mq->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
> +	mq->hdr.request_length = CFG_RQST_PYLD_LEN(cmn_destroy_mq);
> +
> +	mq->mq_id = cpu_to_le16(mq_id);
> +
> +	return EFC_SUCCESS;
> +}

sli_cmd_common_destroy_eq(), sli_cmd_common_destroy_cq() and
sli_cmd_common_destroy_mq() look almost identical. Could those
functions be unified?
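
Maybe something like this (completely untested sketch; it assumes the
three destroy request layouts are identical apart from the opcode and
the name of the queue-id field):

static int
sli_cmd_common_destroy_q(struct sli4_s *sli4, void *buf, size_t size,
			 u16 q_id, u8 opcode, u32 pyld_len, u32 rqst_len)
{
	/* reuse the CQ request layout; EQ/MQ are assumed to match */
	struct sli4_rqst_cmn_destroy_cq_s *req;

	req = sli_config_cmd_init(sli4, buf, size, pyld_len, NULL);
	if (!req)
		return EFC_FAIL;

	req->hdr.opcode = opcode;
	req->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
	req->hdr.request_length = rqst_len;
	req->cq_id = cpu_to_le16(q_id);

	return EFC_SUCCESS;
}

Then each of the three wrappers collapses to a one-line call passing
its own opcode and length macros.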

> +
> +/**
> + * @ingroup sli_fc
> + * @brief Write an WQ_CREATE command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param qmem DMA memory for the queue.
> + * @param cq_id Associated CQ_ID.
> + *
> + * @note This creates a Version 0 message.
> + *
> + * @return Returns zero for success and non-zero for failure.
> + */
> +int
> +sli_cmd_wq_create(struct sli4_s *sli4, void *buf, size_t size,
> +		  struct efc_dma_s *qmem, u16 cq_id)
> +{
> +	struct sli4_rqst_wq_create_s	*wq = NULL;

Too many spaces between type and variable.

> +	u32 p;
> +	uintptr_t addr;
> +
> +	wq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(wq_create), NULL);
> +	if (!wq)
> +		return EFC_FAIL;
> +
> +	wq->hdr.opcode = SLI4_OPC_WQ_CREATE;
> +	wq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
> +	wq->hdr.request_length = CFG_RQST_PYLD_LEN(wq_create);
> +
> +	/* valid values for number of pages: 1-4 (sec 4.5.1) */
> +	wq->num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
> +	if (!wq->num_pages ||
> +	    wq->num_pages > SLI4_WQ_CREATE_V0_MAX_PAGES)

On one line

> +		return EFC_FAIL;
> +
> +	wq->cq_id = cpu_to_le16(cq_id);
> +
> +	for (p = 0, addr = qmem->phys;
> +			p < wq->num_pages;
> +			p++, addr += SLI_PAGE_SIZE) {
> +		wq->page_phys_addr[p].low  =
> +				cpu_to_le32(lower_32_bits(addr));
> +		wq->page_phys_addr[p].high =
> +				cpu_to_le32(upper_32_bits(addr));

At least the last two assignments fit on one line.

> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * @ingroup sli_fc
> + * @brief Write an WQ_CREATE_V1 command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param qmem DMA memory for the queue.
> + * @param cq_id Associated CQ_ID.
> + *
> + * @return Returns zero for success and non-zero for failure.
> + */
> +int
> +sli_cmd_wq_create_v1(struct sli4_s *sli4, void *buf, size_t size,
> +		     struct efc_dma_s *qmem,
> +		     u16 cq_id)

cq_id on the same line as qmem?

> +{
> +	struct sli4_rqst_wq_create_v1_s *wq = NULL;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 page_size = 0;
> +	u32 page_bytes = 0;
> +	u32 n_wqe = 0;
> +	u16 num_pages;
> +
> +	wq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(wq_create_v1), NULL);
> +	if (!wq)
> +		return EFC_FAIL;
> +
> +	wq->hdr.opcode = SLI4_OPC_WQ_CREATE;
> +	wq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
> +	wq->hdr.request_length = CFG_RQST_PYLD_LEN(wq_create_v1);
> +	wq->hdr.dw3_version = cpu_to_le32(CMD_V1);
> +
> +	n_wqe = qmem->size / sli4->wqe_size;
> +
> +	/*
> +	 * This heuristic to determine the page size is simplistic but could
> +	 * be made more sophisticated

Okay, but what about it is too simplistic?

> +	 */
> +	switch (qmem->size) {
> +	case 4096:
> +	case 8192:
> +	case 16384:
> +	case 32768:
> +		page_size = 1;
> +		break;
> +	case 65536:
> +		page_size = 2;
> +		break;
> +	case 131072:
> +		page_size = 4;
> +		break;
> +	case 262144:
> +		page_size = 8;
> +		break;
> +	case 524288:
> +		page_size = 10;
> +		break;
> +	default:
> +		return 0;

Isn't this an error?

> +	}
> +	page_bytes = page_size * SLI_PAGE_SIZE;
> +
> +	/* valid values for number of pages: 1-8 */

This comment is for num_pages, right? If so, maybe it helps to add the
variable name to the comment: "... number of pages (num_pages): 1-8". I
got slightly distracted by page_size :)

> +	num_pages = sli_page_count(qmem->size, page_bytes);
> +	wq->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages ||
> +	    num_pages > SLI4_WQ_CREATE_V1_MAX_PAGES)

On one line?

> +		return EFC_FAIL;
> +
> +	wq->cq_id = cpu_to_le16(cq_id);
> +
> +	wq->page_size = page_size;
> +
> +	if (sli4->wqe_size == SLI4_WQE_EXT_BYTES)
> +		wq->wqe_size_byte |= SLI4_WQE_EXT_SIZE;
> +	else
> +		wq->wqe_size_byte |= SLI4_WQE_SIZE;
> +
> +	wq->wqe_count = cpu_to_le16(n_wqe);
> +
> +	for (p = 0, addr = qmem->phys;
> +			p < num_pages;
> +			p++, addr += page_bytes) {
> +		wq->page_phys_addr[p].low  =
> +					cpu_to_le32(lower_32_bits(addr));
> +		wq->page_phys_addr[p].high =
> +					cpu_to_le32(upper_32_bits(addr));

These assignments fit on one line

> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * @ingroup sli_fc
> + * @brief Write an WQ_DESTROY command.
> + *
> + * @param sli4 SLI context.
> + * @param buf Destination buffer for the command.
> + * @param size Buffer size, in bytes.
> + * @param wq_id WQ_ID.
> + *
> + * @return Returns zero for success and non-zero for failure.
> + */
> +int
> +sli_cmd_wq_destroy(struct sli4_s *sli4, void *buf, size_t size,
> +		   u16 wq_id)
> +{
> +	struct sli4_rqst_wq_destroy_s *wq = NULL;
> +
> +	wq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(wq_destroy), NULL);
> +	if (!wq)
> +		return EFC_FAIL;
> +
> +	wq->hdr.opcode = SLI4_OPC_WQ_DESTROY;
> +	wq->hdr.subsystem = SLI4_SUBSYSTEM_FC;
> +	wq->hdr.request_length = CFG_RQST_PYLD_LEN(wq_destroy);
> +
> +	wq->wq_id = cpu_to_le16(wq_id);
> +
> +	return EFC_SUCCESS;
> +}

So many functions look almost identical. Is there no better way to
create the commands? Or would something like a generic command creation
function be worse to maintain? There is so much copy-paste... I'll stop
pointing out the same issues now.

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 31/32] elx: efct: Add Makefile and Kconfig for efct driver
  2019-10-23 21:55 ` [PATCH 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
@ 2019-10-25 15:55   ` Daniel Wagner
  2019-10-25 22:47     ` James Smart
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel Wagner @ 2019-10-25 15:55 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, Ram Vegesna

Hi,

On Wed, Oct 23, 2019 at 02:55:56PM -0700, James Smart wrote:
> This patch completes the efct driver population.
> 
> This patch adds driver definitions for:
> Adds the efct driver Kconfig and Makefiles
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/Kconfig  |  8 ++++++++
>  drivers/scsi/elx/Makefile | 30 ++++++++++++++++++++++++++++++
>  2 files changed, 38 insertions(+)
>  create mode 100644 drivers/scsi/elx/Kconfig
>  create mode 100644 drivers/scsi/elx/Makefile
> 
> diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
> new file mode 100644
> index 000000000000..3d25d8463c48
> --- /dev/null
> +++ b/drivers/scsi/elx/Kconfig
> @@ -0,0 +1,8 @@
> +config SCSI_EFCT
> +	tristate "Emulex Fibre Channel Target"
> +	depends on PCI && SCSI
> +	depends on SCSI_FC_ATTRS

Is TARGET_ISCSI missing?

: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_npiv_drop_tpg':
efct_lio.c:(.text+0xa35): undefined reference to `core_tpg_deregister'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_drop_tpg':
efct_lio.c:(.text+0xa6e): undefined reference to `core_tpg_deregister'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_tmf_done':
efct_lio.c:(.text+0xceb): undefined reference to `transport_generic_free_cmd'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_async_worker':
efct_lio.c:(.text+0x1136): undefined reference to `target_submit_tmr'
ld: efct_lio.c:(.text+0x12b5): undefined reference to `target_setup_session'
ld: efct_lio.c:(.text+0x133c): undefined reference to `target_sess_cmd_list_set_waiting'
ld: efct_lio.c:(.text+0x1344): undefined reference to `target_wait_for_sess_cmds'
ld: efct_lio.c:(.text+0x134c): undefined reference to `target_remove_session'
ld: efct_lio.c:(.text+0x1464): undefined reference to `target_submit_cmd'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_status_done':
efct_lio.c:(.text+0x1579): undefined reference to `transport_generic_free_cmd'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_datamove_done':
efct_lio.c:(.text+0x1a01): undefined reference to `transport_generic_request_failure'
ld: efct_lio.c:(.text+0x1a6c): undefined reference to `transport_generic_free_cmd'
ld: efct_lio.c:(.text+0x1a87): undefined reference to `target_execute_cmd'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_make_tpg':
efct_lio.c:(.text+0x1ba4): undefined reference to `core_tpg_register'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_npiv_make_tpg':
efct_lio.c:(.text+0x2078): undefined reference to `core_tpg_register'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_scsi_tgt_driver_init':
efct_lio.c:(.text+0x2eba): undefined reference to `target_register_template'
ld: efct_lio.c:(.text+0x2ece): undefined reference to `target_register_template'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_scsi_tgt_driver_exit':
efct_lio.c:(.text+0x2ef8): undefined reference to `target_unregister_template'
ld: efct_lio.c:(.text+0x2f04): undefined reference to `target_unregister_template'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_lio_check_stop_free':
efct_lio.c:(.text+0xdf8): undefined reference to `target_put_sess_cmd'
ld: drivers/scsi/elx/efct/efct_lio.o: in function `efct_scsi_tgt_driver_init.cold':
efct_lio.c:(.text.unlikely+0x323): undefined reference to `target_unregister_template'
make[1]: *** [/home/wagi/work/linux/Makefile:1094: vmlinux] Error 1
make[1]: Leaving directory '/home/wagi/work/build/efct'

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver
  2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (31 preceding siblings ...)
  2019-10-23 21:55 ` [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
@ 2019-10-25 15:56 ` Daniel Wagner
  2019-10-25 22:31   ` James Smart
  32 siblings, 1 reply; 54+ messages in thread
From: Daniel Wagner @ 2019-10-25 15:56 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi

Hi James,

> Review comments welcome!

My compiler is complaining:

home/wagi/work/linux/drivers/scsi/elx/efct/efct_driver.c: In function ‘efct_request_firmware_update’:
/home/wagi/work/linux/drivers/scsi/elx/efct/efct_driver.c:350:3: warning: ‘fw_change_status’ may be used uninitialized in this function [-Wmaybe-uninitialized]
  350 |   switch (fw_change_status) {
      |   ^~~~~~
  CC      drivers/scsi/elx/libefc/efc_device.o
  CC      drivers/scsi/elx/libefc/efc_lib.o
  CC      drivers/scsi/elx/libefc/efc_sm.o
  CC      drivers/scsi/elx/libefc_sli/sli4.o
/home/wagi/work/linux/drivers/scsi/elx/libefc_sli/sli4.c: In function ‘sli_fc_rq_set_alloc’:
/home/wagi/work/linux/drivers/scsi/elx/libefc_sli/sli4.c:818:12: warning: ‘offset’ may be used uninitialized in this function [-Wmaybe-uninitialized]
  818 |  u32 i, p, offset;
      |            ^~~~~~

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2019-10-25 12:20     ` Steffen Maier
@ 2019-10-25 22:10       ` James Smart
  0 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-25 22:10 UTC (permalink / raw)
  To: Steffen Maier, Daniel Wagner; +Cc: linux-scsi, Ram Vegesna

On 10/25/2019 5:20 AM, Steffen Maier wrote:
> Protect the macro argument in the expansion with parentheses to prevent 
> unintended operator precedence during evaluation?
> As with (b) of SLI_ROUND_PAGE(b) above.
> 
> (((val) &  ... ))

yep, agree.  Thanks

-- james


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 04/32] elx: libefc_sli: queue create/destroy/parse routines
  2019-10-25 15:35   ` Daniel Wagner
@ 2019-10-25 22:24     ` James Smart
  0 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-25 22:24 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-scsi, Ram Vegesna

Thanks. We mostly agree with the comments as written and will work on the
changes.

Exceptions or answers to questions are inline below.

-- james



On 10/25/2019 8:35 AM, Daniel Wagner wrote:
>> +static void *
>> +sli_config_cmd_init(struct sli4_s *sli4, void *buf,
>> +		    size_t size, u32 length,
>> +		    struct efc_dma_s *dma)
>> +{
>> +	struct sli4_cmd_sli_config_s *sli_config = NULL;
>> +	u32 flags = 0;
>> +
>> +	if (length > sizeof(sli_config->payload.embed) && !dma) {
>> +		efc_log_info(sli4, "length(%d) > payload(%ld)\n",
>> +			length, sizeof(sli_config->payload.embed));
>> +		return NULL;
>> +	}
> 
> ...this logs something but what does it tell? I suppose it has
> something to do with whether the data is embedded or not.

Yep - if it's too big to be embedded and there isn't a dma address to
use for the non-embedded format, it's an error. We will make the log
message reflect that.


>> +	cqv2->hdr.opcode = CMN_CREATE_CQ;
>> +	cqv2->hdr.subsystem = SLI4_SUBSYSTEM_COMMON;
>> +	cqv2->hdr.dw3_version = cpu_to_le32(CMD_V2);
> 
> Is this now the command version? Shouldn't it be V0 as the
> documentation says?

Nope, the comment was wrong. We'll remove it. We won't bother with
routine names reflecting the cmd version # unless the driver has to use
more than one version.


>> +static int
>> +sli_cmd_common_destroy_cq(struct sli4_s *sli4, void *buf,
>> +			  size_t size, u16 cq_id)
>> +{
>> +	struct sli4_rqst_cmn_destroy_cq_s *cq = NULL;
>> +
>> +	/* Payload length must accommodate both request and response */
> 
> Is this common? Is this true for all commands? If so, having this
> kind of information at the beginning of the file, explaining some of
> the inner workings of the code, would certainly help.

For the SLI_CONFIG mailbox command, which is a wrapper that issues a 
bunch of other mailbox commands specified by subsystem and 
subsystem-specific opcode - yes, it's true.

We'll clean this up. Likely remove the indicated comment and say 
something up in sli_config_cmd_init().


> sli_cmd_common_destroy_eq(), sli_cmd_common_destroy_cq() and
> sli_cmd_common_destroy_mq() look almost identical. Could those
> functions be unified?

We'll look at better commonizing through small service routines and/or 
macros. We'll see if unification falls out.


> So many functions look almost identical. Is there no better way to
> create the commands? Or would something like a generic command creation
> function be worse to maintain? There is so much copy-paste... I'll stop
> pointing out the same issues now.

Same as last comment. A few helper macros should distill it to the items 
that are specific to the individual commands.



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver
  2019-10-25 15:56 ` [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver Daniel Wagner
@ 2019-10-25 22:31   ` James Smart
  0 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-25 22:31 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-scsi

On 10/25/2019 8:56 AM, Daniel Wagner wrote:
> Hi James,
> 
>> Review comments welcome!
> 
> My compiler is complaining:
> 
> home/wagi/work/linux/drivers/scsi/elx/efct/efct_driver.c: In function ‘efct_request_firmware_update’:
> /home/wagi/work/linux/drivers/scsi/elx/efct/efct_driver.c:350:3: warning: ‘fw_change_status’ may be used uninitialized in this function [-Wmaybe-uninitialized]
>    350 |   switch (fw_change_status) {
>        |   ^~~~~~
>    CC      drivers/scsi/elx/libefc/efc_device.o
>    CC      drivers/scsi/elx/libefc/efc_lib.o
>    CC      drivers/scsi/elx/libefc/efc_sm.o
>    CC      drivers/scsi/elx/libefc_sli/sli4.o

Well, functionally it isn't an error: given the way the code is written
and the order the writes occur in, it will always have a valid value by
the time it is used. But I can see how this is hard for the compiler to
figure out. We'll patch something to make the compiler happy.

> /home/wagi/work/linux/drivers/scsi/elx/libefc_sli/sli4.c: In function ‘sli_fc_rq_set_alloc’:
> /home/wagi/work/linux/drivers/scsi/elx/libefc_sli/sli4.c:818:12: warning: ‘offset’ may be used uninitialized in this function [-Wmaybe-uninitialized]
>    818 |  u32 i, p, offset;
>        |            ^~~~~~

Yep - we will fix.
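
(Presumably just initializing it, e.g.

	u32 i, p, offset = 0;

since offset is only used as a running index into page_phys_addr[] in
the nested page loop.)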

-- james


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2019-10-25 11:19   ` Daniel Wagner
  2019-10-25 12:20     ` Steffen Maier
@ 2019-10-25 22:42     ` James Smart
  1 sibling, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-25 22:42 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-scsi, Ram Vegesna

Thanks. We mostly agree with the comments as written and will work on the
changes. Agreed that consistency and uniformity help.

Exceptions or answers to questions are inline below.

-- james


On 10/25/2019 4:19 AM, Daniel Wagner wrote:

>> +#define SLI_PAGE_SIZE		(1 << 12)	/* 4096 */
> 
> So SLI_PAGE_SIZE is fixed and can't be changed...

For how the driver uses the SLI interface in this current
implementation, yes. The interface's page size is independent of the
OS's page size.


> ... and callers of this function pass in SLI_PAGE_SIZE.

>> +{
>> +	u32	mask = page_size - 1;
>> +	u32	shift = 0;
>> +
>> +	switch (page_size) {
>> +	case 4096:
>> +		shift = 12;
>> +		break;
>> +	case 8192:
>> +		shift = 13;
>> +		break;
>> +	case 16384:
>> +		shift = 14;
>> +		break;
>> +	case 32768:
>> +		shift = 15;
>> +		break;
>> +	case 65536:
>> +		shift = 16;
>> +		break;
>> +	default:
>> +		return 0;
>> +	}
> 
> What about using __ffs(page_size)? But...

will look into it...
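
If page_size is always a power of two (as the values above are),
something along these lines might work (rough sketch only, assuming
this is sli_page_count and using __ffs() from <linux/bitops.h>):

static inline u32
sli_page_count(size_t bytes, u32 page_size)
{
	u32 mask = page_size - 1;

	/* reject zero and non-power-of-two page sizes */
	if (!page_size || (page_size & mask))
		return 0;

	return (bytes + mask) >> __ffs(page_size);
}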

> 
>> +
>> +	return (bytes + mask) >> shift;
> 
> ... mask and shift could just be defined like SLI_PAGE_SIZE and we
> safe a few instructions. Unless SLI_PAGE_SIZE will be dynamic in future.

The desire is to keep it easily adaptable. There are actually some
conditions that could have us use different page sizes for different
structures.



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 31/32] elx: efct: Add Makefile and Kconfig for efct driver
  2019-10-25 15:55   ` Daniel Wagner
@ 2019-10-25 22:47     ` James Smart
  0 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-25 22:47 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-scsi, Ram Vegesna

On 10/25/2019 8:55 AM, Daniel Wagner wrote:
>> diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
>> new file mode 100644
>> index 000000000000..3d25d8463c48
>> --- /dev/null
>> +++ b/drivers/scsi/elx/Kconfig
>> @@ -0,0 +1,8 @@
>> +config SCSI_EFCT
>> +	tristate "Emulex Fibre Channel Target"
>> +	depends on PCI && SCSI
>> +	depends on SCSI_FC_ATTRS
> 
> Is TARGET_ISCSI missing?


I think we missed 'TARGET_CORE', will fix that in the next version.
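
I.e. something like (sketch of the next version):

config SCSI_EFCT
	tristate "Emulex Fibre Channel Target"
	depends on PCI && SCSI
	depends on TARGET_CORE
	depends on SCSI_FC_ATTRS

which should also take care of the undefined target_*/core_tpg_*
references above.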

-- James and Ram

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 02/32] elx: libefc_sli: SLI Descriptors and Queue entries
  2019-10-25  9:59   ` Daniel Wagner
@ 2019-10-25 23:00     ` James Smart
  0 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-25 23:00 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-scsi, Ram Vegesna

Thanks. We mostly agree with the comments as written and will work on the
changes.

Exceptions or answers to questions are inline below.

-- james


On 10/25/2019 2:59 AM, Daniel Wagner wrote:

> I noticed sometimes there are also BIT() used. Wouldn't it make sense
> to the whole driver to use one or the other version of bit
> definitions?

We don't want to have BIT() used. Any references will be removed.


>> +
>> +/**
>> + * @brief Generic Common Create EQ/CQ/MQ/WQ/RQ Queue completion
>> + */
>> +struct sli4_rsp_cmn_create_queue_s {
>> +	struct sli4_rsp_hdr_s	hdr;
>> +	__le16	q_id;
>> +	u8	rsvd18;
>> +	u8	ulp;
>> +	__le32	db_offset;
>> +	__le16	db_rs;
>> +	__le16	db_fmt;
>> +};
> 
> Just wondering about all these definitions here: These structs
> describes the wire format, no? Shouldn't this marked with __packed? I
> keep forgetting the rules.

not wire format, but rather the endianness of the adapter interface.

Yes, it's probably good practice to use __packed. The existing
definitions should have been OK, as the layouts should never have created
a condition where padding would have been added. But... better safe than
sorry.



> Picking up my question from patch #1, what's the idea about the enums
> and defines? Why are the last two ones not an enum?

Well, it's a code volume issue. We migrated old code which was mostly
defines. Where close attention was paid to coding properly for endianness
in register definitions and things at the lower interfaces, we used
enums. Some things changed while others didn't. In the end, we had a
large amount of both, and it is a lot of work for no technical gain and
limited readability gain to make them all one way or the other.

I asked around as to whether it must all be one style or the other, and
there's no mandate either way, or even specific guidance on when to use
what. So we've stuck with what we have.

> 
>> +/**
>> + * @brief WQ_CREATE
>> + *
>> + * Create a Work Queue for FC.
>> + */
>> +#define SLI4_WQ_CREATE_V0_MAX_PAGES	4
>> +struct sli4_rqst_wq_create_s {
>> +	struct sli4_rqst_hdr_s	hdr;
>> +	u8		num_pages;
>> +	u8		dua_byte;
>> +	__le16		cq_id;
>> +	struct sli4_dmaaddr_s page_phys_addr[SLI4_WQ_CREATE_V0_MAX_PAGES];
>> +	u8		bqu_byte;
>> +	u8		ulp;
>> +	__le16		rsvd;
>> +};
>> +
>> +struct sli4_rsp_wq_create_s {
>> +	struct sli4_rsp_cmn_create_queue_s q_rsp;
>> +};
>> +
>> +/**
>> + * @brief WQ_CREATE_V1
>> + *
>> + * Create a version 1 Work Queue for FC use.
>> + */
> 
> Why does the workqueue code encode a version? Isn't this pure driver
> code?

The same command can have multiple forms. Yes, there's no need to be
calling out the version if they are the same between versions. We'll fix
this.




^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions
  2019-10-24 16:22   ` Daniel Wagner
@ 2019-10-25 23:04     ` James Smart
  0 siblings, 0 replies; 54+ messages in thread
From: James Smart @ 2019-10-25 23:04 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-scsi, Ram Vegesna

Thanks. We mostly agree with the comments as written and will work on the
changes.

Exceptions or answers to questions are inline below.

-- james


On 10/24/2019 9:22 AM, Daniel Wagner wrote:

>> +	SLI4_INTF_VALID_SHIFT = 29,
>> +	SLI4_INTF_VALID_MASK = 0x0F << SLI4_INTF_VALID_SHIFT,
> 
> Should this a 32 bit value? This overflows to 34 bits.

agreed

> 
>> +
>> +	SLI4_INTF_VALID_VALUE = 6 << SLI4_INTF_VALID_SHIFT,
>> +};
> 
> Just style question: what is the benefit using anonymous enums?  The
> only reason I came up was that gdb could show the name of the
> value. Though a quick test didn't work if the value is passed into a
> function. Maybe I did something wrong.
> 
> I am asking because register number is a define and then the shift and
> mask are enums.

In newer code I've seen the preference being anonymous enums. But in
looking for why or when to use what, there wasn't much guidance or
reasoning. As in my other email, we had older defines and in newer code we
used enums, so we have a mix. At this point, there's so much volume that
it's not worth making it all one way or the other.


>> +/**
>> + * @brief MQ_DOORBELL - MQ Doorbell Register
>> + */
>> +#define SLI4_MQ_DB_REG		0x0140	/* register offset */
> 
> Are the other registers defines also all offsets? Just wondering if
> the comment is pointing out that these values are special or not.

yes.  We'll clarify them as well.




^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-10-23 21:55 ` [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
@ 2019-10-26  0:34   ` kbuild test robot
  2019-10-26  0:39     ` Randy Dunlap
  2019-10-26 14:13   ` kbuild test robot
  2019-10-26 14:13   ` [RFC PATCH] elx: efct: efct_libefc_templ can be static kbuild test robot
  2 siblings, 1 reply; 54+ messages in thread
From: kbuild test robot @ 2019-10-26  0:34 UTC (permalink / raw)
  To: James Smart; +Cc: kbuild-all, linux-scsi, James Smart, Ram Vegesna

[-- Attachment #1: Type: text/plain, Size: 7937 bytes --]

Hi James,

I love your patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[cannot apply to v5.4-rc4 next-20191025]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20191026-050814
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
config: ia64-allmodconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=ia64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

Note: it may well be a FALSE warning. FWIW you are at least aware of it now.
http://gcc.gnu.org/wiki/Better_Uninitialized_Warnings

All warnings (new ones prefixed by >>):

   drivers/scsi/elx/libefc_sli/sli4.c: In function 'sli_fc_rq_set_alloc':
>> drivers/scsi/elx/libefc_sli/sli4.c:818:12: warning: 'offset' may be used uninitialized in this function [-Wmaybe-uninitialized]
     u32 i, p, offset;
               ^~~~~~

vim +/offset +818 drivers/scsi/elx/libefc_sli/sli4.c

8994bfd36daa33 James Smart 2019-10-23  795  
8994bfd36daa33 James Smart 2019-10-23  796  /**
8994bfd36daa33 James Smart 2019-10-23  797   * @ingroup sli_fc
8994bfd36daa33 James Smart 2019-10-23  798   * @brief Write an RQ_CREATE_V2 command.
8994bfd36daa33 James Smart 2019-10-23  799   *
8994bfd36daa33 James Smart 2019-10-23  800   * @param sli4 SLI context.
8994bfd36daa33 James Smart 2019-10-23  801   * @param buf Destination buffer for the command.
8994bfd36daa33 James Smart 2019-10-23  802   * @param size Buffer size, in bytes.
8994bfd36daa33 James Smart 2019-10-23  803   * @param qmem DMA memory for the queue.
8994bfd36daa33 James Smart 2019-10-23  804   * @param cq_id Associated CQ_ID.
8994bfd36daa33 James Smart 2019-10-23  805   * @param buffer_size Buffer size pointed to by each RQE.
8994bfd36daa33 James Smart 2019-10-23  806   *
8994bfd36daa33 James Smart 2019-10-23  807   * @note This creates a Version 0 message
8994bfd36daa33 James Smart 2019-10-23  808   *
8994bfd36daa33 James Smart 2019-10-23  809   * @return Returns zero for success and non-zero for failure.
8994bfd36daa33 James Smart 2019-10-23  810   */
8994bfd36daa33 James Smart 2019-10-23  811  static int
8994bfd36daa33 James Smart 2019-10-23  812  sli_cmd_rq_create_v2(struct sli4_s *sli4, u32 num_rqs,
8994bfd36daa33 James Smart 2019-10-23  813  		     struct sli4_queue_s *qs[], u32 base_cq_id,
8994bfd36daa33 James Smart 2019-10-23  814  		     u32 header_buffer_size,
8994bfd36daa33 James Smart 2019-10-23  815  		     u32 payload_buffer_size, struct efc_dma_s *dma)
8994bfd36daa33 James Smart 2019-10-23  816  {
8994bfd36daa33 James Smart 2019-10-23  817  	struct sli4_rqst_rq_create_v2_s *req = NULL;
8994bfd36daa33 James Smart 2019-10-23 @818  	u32 i, p, offset;
8994bfd36daa33 James Smart 2019-10-23  819  	u32 payload_size, page_count;
8994bfd36daa33 James Smart 2019-10-23  820  	uintptr_t addr;
8994bfd36daa33 James Smart 2019-10-23  821  	u32 num_pages;
8994bfd36daa33 James Smart 2019-10-23  822  
8994bfd36daa33 James Smart 2019-10-23  823  	page_count =  sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE) * num_rqs;
8994bfd36daa33 James Smart 2019-10-23  824  
8994bfd36daa33 James Smart 2019-10-23  825  	/* Payload length must accommodate both request and response */
8994bfd36daa33 James Smart 2019-10-23  826  	payload_size = max(CFG_RQST_CMDSZ(rq_create_v2) +
8994bfd36daa33 James Smart 2019-10-23  827  			   SZ_DMAADDR * page_count,
8994bfd36daa33 James Smart 2019-10-23  828  			   sizeof(struct sli4_rsp_cmn_create_queue_set_s));
8994bfd36daa33 James Smart 2019-10-23  829  
8994bfd36daa33 James Smart 2019-10-23  830  	dma->size = payload_size;
8994bfd36daa33 James Smart 2019-10-23  831  	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
8994bfd36daa33 James Smart 2019-10-23  832  				      &dma->phys, GFP_DMA);
8994bfd36daa33 James Smart 2019-10-23  833  	if (!dma->virt)
8994bfd36daa33 James Smart 2019-10-23  834  		return EFC_FAIL;
8994bfd36daa33 James Smart 2019-10-23  835  
8994bfd36daa33 James Smart 2019-10-23  836  	memset(dma->virt, 0, payload_size);
8994bfd36daa33 James Smart 2019-10-23  837  
8994bfd36daa33 James Smart 2019-10-23  838  	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
8994bfd36daa33 James Smart 2019-10-23  839  			       payload_size, dma);
8994bfd36daa33 James Smart 2019-10-23  840  	if (!req)
8994bfd36daa33 James Smart 2019-10-23  841  		return EFC_FAIL;
8994bfd36daa33 James Smart 2019-10-23  842  
8994bfd36daa33 James Smart 2019-10-23  843  	/* Fill Header fields */
8994bfd36daa33 James Smart 2019-10-23  844  	req->hdr.opcode    = SLI4_OPC_RQ_CREATE;
8994bfd36daa33 James Smart 2019-10-23  845  	req->hdr.subsystem = SLI4_SUBSYSTEM_FC;
8994bfd36daa33 James Smart 2019-10-23  846  	req->hdr.dw3_version   = cpu_to_le32(CMD_V2);
8994bfd36daa33 James Smart 2019-10-23  847  	req->hdr.request_length = CFG_RQST_PYLD_LEN_VAR(rq_create_v2,
8994bfd36daa33 James Smart 2019-10-23  848  						SZ_DMAADDR * page_count);
8994bfd36daa33 James Smart 2019-10-23  849  
8994bfd36daa33 James Smart 2019-10-23  850  	/* Fill Payload fields */
8994bfd36daa33 James Smart 2019-10-23  851  	req->dim_dfd_dnb  |= SLI4_RQCREATEV2_DNB;
8994bfd36daa33 James Smart 2019-10-23  852  	num_pages = sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE);
8994bfd36daa33 James Smart 2019-10-23  853  	req->num_pages	   = cpu_to_le16(num_pages);
8994bfd36daa33 James Smart 2019-10-23  854  	req->rqe_count     = cpu_to_le16(qs[0]->dma.size / SLI4_RQE_SIZE);
8994bfd36daa33 James Smart 2019-10-23  855  	req->rqe_size_byte |= SLI4_RQE_SIZE_8;
8994bfd36daa33 James Smart 2019-10-23  856  	req->page_size     = SLI4_RQ_PAGE_SIZE_4096;
8994bfd36daa33 James Smart 2019-10-23  857  	req->rq_count      = num_rqs;
8994bfd36daa33 James Smart 2019-10-23  858  	req->base_cq_id    = cpu_to_le16(base_cq_id);
8994bfd36daa33 James Smart 2019-10-23  859  	req->hdr_buffer_size     = cpu_to_le16(header_buffer_size);
8994bfd36daa33 James Smart 2019-10-23  860  	req->payload_buffer_size = cpu_to_le16(payload_buffer_size);
8994bfd36daa33 James Smart 2019-10-23  861  
8994bfd36daa33 James Smart 2019-10-23  862  	for (i = 0; i < num_rqs; i++) {
8994bfd36daa33 James Smart 2019-10-23  863  		for (p = 0, addr = qs[i]->dma.phys; p < num_pages;
8994bfd36daa33 James Smart 2019-10-23  864  		     p++, addr += SLI_PAGE_SIZE) {
8994bfd36daa33 James Smart 2019-10-23  865  			req->page_phys_addr[offset].low =
8994bfd36daa33 James Smart 2019-10-23  866  					cpu_to_le32(lower_32_bits(addr));
8994bfd36daa33 James Smart 2019-10-23  867  			req->page_phys_addr[offset].high =
8994bfd36daa33 James Smart 2019-10-23  868  					cpu_to_le32(upper_32_bits(addr));
8994bfd36daa33 James Smart 2019-10-23  869  			offset++;
8994bfd36daa33 James Smart 2019-10-23  870  		}
8994bfd36daa33 James Smart 2019-10-23  871  	}
8994bfd36daa33 James Smart 2019-10-23  872  
8994bfd36daa33 James Smart 2019-10-23  873  	return EFC_SUCCESS;
8994bfd36daa33 James Smart 2019-10-23  874  }
8994bfd36daa33 James Smart 2019-10-23  875  

:::::: The code at line 818 was first introduced by commit
:::::: 8994bfd36daa331dd81afd4af5a1d567fb75b6ac elx: libefc_sli: queue create/destroy/parse routines

:::::: TO: James Smart <jsmart2021@gmail.com>
:::::: CC: 0day robot <lkp@intel.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 54944 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-10-26  0:34   ` kbuild test robot
@ 2019-10-26  0:39     ` Randy Dunlap
  0 siblings, 0 replies; 54+ messages in thread
From: Randy Dunlap @ 2019-10-26  0:39 UTC (permalink / raw)
  To: kbuild test robot, James Smart; +Cc: kbuild-all, linux-scsi, Ram Vegesna

Hi James,


On 10/25/19 5:34 PM, kbuild test robot wrote:
> vim +/offset +818 drivers/scsi/elx/libefc_sli/sli4.c
> 
> 8994bfd36daa33 James Smart 2019-10-23  795  
> 8994bfd36daa33 James Smart 2019-10-23  796  /**
> 8994bfd36daa33 James Smart 2019-10-23  797   * @ingroup sli_fc
> 8994bfd36daa33 James Smart 2019-10-23  798   * @brief Write an RQ_CREATE_V2 command.
> 8994bfd36daa33 James Smart 2019-10-23  799   *
> 8994bfd36daa33 James Smart 2019-10-23  800   * @param sli4 SLI context.
> 8994bfd36daa33 James Smart 2019-10-23  801   * @param buf Destination buffer for the command.
> 8994bfd36daa33 James Smart 2019-10-23  802   * @param size Buffer size, in bytes.
> 8994bfd36daa33 James Smart 2019-10-23  803   * @param qmem DMA memory for the queue.
> 8994bfd36daa33 James Smart 2019-10-23  804   * @param cq_id Associated CQ_ID.
> 8994bfd36daa33 James Smart 2019-10-23  805   * @param buffer_size Buffer size pointed to by each RQE.
> 8994bfd36daa33 James Smart 2019-10-23  806   *
> 8994bfd36daa33 James Smart 2019-10-23  807   * @note This creates a Version 0 message
> 8994bfd36daa33 James Smart 2019-10-23  808   *
> 8994bfd36daa33 James Smart 2019-10-23  809   * @return Returns zero for success and non-zero for failure.
> 8994bfd36daa33 James Smart 2019-10-23  810   */

BTW, what is that notation?  It's not kernel-doc. Please use kernel-doc notation
for documentation in the kernel.
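
For reference, the same header in kernel-doc form would look roughly like
this (only a sketch; the parameter descriptions are illustrative, based on
the doxygen text above and the function signature below):

/**
 * sli_cmd_rq_create_v2() - Write an RQ_CREATE_V2 command.
 * @sli4: SLI context.
 * @num_rqs: Number of receive queues to create.
 * @qs: Array of queue descriptors, one per RQ.
 * @base_cq_id: Associated base CQ_ID.
 * @header_buffer_size: Size of the header buffer pointed to by each RQE.
 * @payload_buffer_size: Size of the payload buffer pointed to by each RQE.
 * @dma: DMA memory for the command payload.
 *
 * Return: zero for success, non-zero for failure.
 */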

> 8994bfd36daa33 James Smart 2019-10-23  811  static int
> 8994bfd36daa33 James Smart 2019-10-23  812  sli_cmd_rq_create_v2(struct sli4_s *sli4, u32 num_rqs,
> 8994bfd36daa33 James Smart 2019-10-23  813  		     struct sli4_queue_s *qs[], u32 base_cq_id,
> 8994bfd36daa33 James Smart 2019-10-23  814  		     u32 header_buffer_size,
> 8994bfd36daa33 James Smart 2019-10-23  815  		     u32 payload_buffer_size, struct efc_dma_s *dma)
> 8994bfd36daa33 James Smart 2019-10-23  816  {


thanks.
-- 
~Randy


* Re: [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-10-23 21:55 ` [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
  2019-10-26  0:34   ` kbuild test robot
@ 2019-10-26 14:13   ` kbuild test robot
  2019-10-26 14:13   ` [RFC PATCH] elx: efct: efct_libefc_templ can be static kbuild test robot
  2 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2019-10-26 14:13 UTC (permalink / raw)
  To: James Smart; +Cc: kbuild-all, linux-scsi, James Smart, Ram Vegesna

Hi James,

I love your patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[cannot apply to v5.4-rc4 next-20191025]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify the
base tree in git format-patch; please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20191026-050814
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.1-dirty
        make ARCH=x86_64 allmodconfig
        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add the following tag
Reported-by: kbuild test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

>> drivers/scsi/elx/efct/efct_driver.c:49:33: sparse: sparse: symbol 'efct_libefc_templ' was not declared. Should it be static?
--
>> drivers/scsi/elx/efct/efct_scsi.c:1839:30: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [usertype] ox_id @@    got icted __be16 [usertype] ox_id @@
>> drivers/scsi/elx/efct/efct_scsi.c:1839:30: sparse:    expected restricted __be16 [usertype] ox_id
>> drivers/scsi/elx/efct/efct_scsi.c:1839:30: sparse:    got unsigned int [usertype] init_task_tag
>> drivers/scsi/elx/efct/efct_scsi.c:1840:30: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [usertype] rx_id @@    got tricted __be16 [usertype] rx_id @@
>> drivers/scsi/elx/efct/efct_scsi.c:1840:30: sparse:    expected restricted __be16 [usertype] rx_id
>> drivers/scsi/elx/efct/efct_scsi.c:1840:30: sparse:    got unsigned short [usertype] abort_rx_id
>> drivers/scsi/elx/efct/efct_scsi.c:1848:30: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [usertype] ba_high_seq_cnt @@    got tricted __be16 [usertype] ba_high_seq_cnt @@
>> drivers/scsi/elx/efct/efct_scsi.c:1848:30: sparse:    expected restricted __be16 [usertype] ba_high_seq_cnt
>> drivers/scsi/elx/efct/efct_scsi.c:1848:30: sparse:    got unsigned short [usertype]
--
>> drivers/scsi/elx/efct/efct_els.c:2263:23: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [usertype] ba_ox_id @@    got tricted __be16 [usertype] ba_ox_id @@
>> drivers/scsi/elx/efct/efct_els.c:2263:23: sparse:    expected restricted __be16 [usertype] ba_ox_id
>> drivers/scsi/elx/efct/efct_els.c:2263:23: sparse:    got unsigned short [usertype] ox_id
>> drivers/scsi/elx/efct/efct_els.c:2264:23: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [usertype] ba_rx_id @@    got tricted __be16 [usertype] ba_rx_id @@
>> drivers/scsi/elx/efct/efct_els.c:2264:23: sparse:    expected restricted __be16 [usertype] ba_rx_id
>> drivers/scsi/elx/efct/efct_els.c:2264:23: sparse:    got unsigned short [usertype] rx_id
>> drivers/scsi/elx/efct/efct_els.c:2265:30: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [usertype] ba_high_seq_cnt @@    got tricted __be16 [usertype] ba_high_seq_cnt @@
>> drivers/scsi/elx/efct/efct_els.c:2265:30: sparse:    expected restricted __be16 [usertype] ba_high_seq_cnt
>> drivers/scsi/elx/efct/efct_els.c:2265:30: sparse:    got unsigned short [usertype]
>> drivers/scsi/elx/efct/efct_els.c:1443:43: sparse: sparse: invalid assignment: |=
>> drivers/scsi/elx/efct/efct_els.c:1443:43: sparse:    left side has type restricted __be16
>> drivers/scsi/elx/efct/efct_els.c:1443:43: sparse:    right side has type restricted __be32
>> drivers/scsi/elx/efct/efct_els.c:1549:46: sparse: sparse: cast to restricted __be32
>> drivers/scsi/elx/efct/efct_els.c:1549:46: sparse: sparse: cast to restricted __be32
>> drivers/scsi/elx/efct/efct_els.c:1549:46: sparse: sparse: cast to restricted __be32
>> drivers/scsi/elx/efct/efct_els.c:1549:46: sparse: sparse: cast to restricted __be32
>> drivers/scsi/elx/efct/efct_els.c:1549:46: sparse: sparse: cast to restricted __be32
>> drivers/scsi/elx/efct/efct_els.c:1549:46: sparse: sparse: cast to restricted __be32
   drivers/scsi/elx/efct/efct_els.c:1549:42: sparse: sparse: invalid assignment: |=
>> drivers/scsi/elx/efct/efct_els.c:1549:42: sparse:    left side has type restricted __be32
>> drivers/scsi/elx/efct/efct_els.c:1549:42: sparse:    right side has type unsigned int
>> drivers/scsi/elx/efct/efct_els.c:2602:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] init_task_tag @@    got restrunsigned int [usertype] init_task_tag @@
>> drivers/scsi/elx/efct/efct_els.c:2602:27: sparse:    expected unsigned int [usertype] init_task_tag
>> drivers/scsi/elx/efct/efct_els.c:2602:27: sparse:    got restricted __be16 [usertype] ox_id
>> drivers/scsi/elx/efct/efct_els.c:2609:38: sparse: sparse: cast from restricted __be16
>> drivers/scsi/elx/efct/efct_els.c:2609:38: sparse: sparse: incorrect type in argument 1 (different base types) @@    expected unsigned short [usertype] val @@    got resunsigned short [usertype] val @@
>> drivers/scsi/elx/efct/efct_els.c:2609:38: sparse:    expected unsigned short [usertype] val
   drivers/scsi/elx/efct/efct_els.c:2609:38: sparse:    got restricted __be16 [usertype] ox_id
>> drivers/scsi/elx/efct/efct_els.c:2609:38: sparse: sparse: cast from restricted __be16
>> drivers/scsi/elx/efct/efct_els.c:2609:38: sparse: sparse: cast from restricted __be16
>> drivers/scsi/elx/efct/efct_els.c:2609:36: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned short [usertype] ox_id @@    got resunsigned short [usertype] ox_id @@
>> drivers/scsi/elx/efct/efct_els.c:2609:36: sparse:    expected unsigned short [usertype] ox_id
>> drivers/scsi/elx/efct/efct_els.c:2609:36: sparse:    got restricted __be16 [usertype]
--
>> drivers/scsi/elx/efct/efct_hw.c:4748:59: sparse: sparse: incorrect type in argument 3 (different base types) @@    expected unsigned int [usertype] *data @@    got ed int [usertype] *data @@
>> drivers/scsi/elx/efct/efct_hw.c:4748:59: sparse:    expected unsigned int [usertype] *data
>> drivers/scsi/elx/efct/efct_hw.c:4748:59: sparse:    got restricted __le32 *
>> drivers/scsi/elx/efct/efct_hw.c:921:36: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] rq_id @@    got e] rq_id @@
>> drivers/scsi/elx/efct/efct_hw.c:921:36: sparse:    expected restricted __le16 [usertype] rq_id
>> drivers/scsi/elx/efct/efct_hw.c:921:36: sparse:    got int
>> drivers/scsi/elx/efct/efct_hw.c:937:49: sparse: sparse: restricted __le16 degrades to integer
   drivers/scsi/elx/efct/efct_hw.c:943:57: sparse: sparse: restricted __le16 degrades to integer
>> drivers/scsi/elx/efct/efct_hw.c:953:60: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] rq_id @@    got icted __le16 [usertype] rq_id @@
   drivers/scsi/elx/efct/efct_hw.c:953:60: sparse:    expected restricted __le16 [usertype] rq_id
>> drivers/scsi/elx/efct/efct_hw.c:953:60: sparse:    got unsigned int [usertype] base_mrq_id
>> drivers/scsi/elx/efct/efct_hw.c:956:60: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] rq_id @@    got tricted __le16 [usertype] rq_id @@
   drivers/scsi/elx/efct/efct_hw.c:956:60: sparse:    expected restricted __le16 [usertype] rq_id
>> drivers/scsi/elx/efct/efct_hw.c:956:60: sparse:    got unsigned short [usertype] id
   drivers/scsi/elx/efct/efct_hw.c:733:41: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] rq_id @@    got e] rq_id @@
   drivers/scsi/elx/efct/efct_hw.c:733:41: sparse:    expected restricted __le16 [usertype] rq_id
   drivers/scsi/elx/efct/efct_hw.c:733:41: sparse:    got int
   drivers/scsi/elx/efct/efct_hw.c:766:57: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] rq_id @@    got tricted __le16 [usertype] rq_id @@
   drivers/scsi/elx/efct/efct_hw.c:766:57: sparse:    expected restricted __le16 [usertype] rq_id
   drivers/scsi/elx/efct/efct_hw.c:766:57: sparse:    got unsigned short [usertype] id
>> drivers/scsi/elx/efct/efct_hw.c:2496:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge_flags @@    got restrunsigned int [usertype] sge_flags @@
>> drivers/scsi/elx/efct/efct_hw.c:2496:27: sparse:    expected unsigned int [usertype] sge_flags
>> drivers/scsi/elx/efct/efct_hw.c:2496:27: sparse:    got restricted __le32 [usertype] dw2_flags
>> drivers/scsi/elx/efct/efct_hw.c:2532:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [assigned] [usertype] sge_flags @@    got ed int [assigned] [usertype] sge_flags @@
>> drivers/scsi/elx/efct/efct_hw.c:2532:27: sparse:    expected unsigned int [assigned] [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2532:27: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/efct/efct_hw.c:2544:19: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [assigned] [usertype] sge_flags @@    got ed int [assigned] [usertype] sge_flags @@
   drivers/scsi/elx/efct/efct_hw.c:2544:19: sparse:    expected unsigned int [assigned] [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2544:19: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/efct/efct_hw.c:2676:19: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge_flags @@    got restrunsigned int [usertype] sge_flags @@
   drivers/scsi/elx/efct/efct_hw.c:2676:19: sparse:    expected unsigned int [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2676:19: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/efct/efct_hw.c:2680:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [assigned] [usertype] sge_flags @@    got ed int [assigned] [usertype] sge_flags @@
   drivers/scsi/elx/efct/efct_hw.c:2680:27: sparse:    expected unsigned int [assigned] [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2680:27: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/efct/efct_hw.c:2778:19: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge_flags @@    got restrunsigned int [usertype] sge_flags @@
   drivers/scsi/elx/efct/efct_hw.c:2778:19: sparse:    expected unsigned int [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2778:19: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/efct/efct_hw.c:2797:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [assigned] [usertype] sge_flags @@    got ed int [assigned] [usertype] sge_flags @@
   drivers/scsi/elx/efct/efct_hw.c:2797:27: sparse:    expected unsigned int [assigned] [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2797:27: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/efct/efct_hw.c:2854:19: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge_flags @@    got restrunsigned int [usertype] sge_flags @@
   drivers/scsi/elx/efct/efct_hw.c:2854:19: sparse:    expected unsigned int [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2854:19: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/efct/efct_hw.c:2875:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [assigned] [usertype] sge_flags @@    got ed int [assigned] [usertype] sge_flags @@
   drivers/scsi/elx/efct/efct_hw.c:2875:27: sparse:    expected unsigned int [assigned] [usertype] sge_flags
   drivers/scsi/elx/efct/efct_hw.c:2875:27: sparse:    got restricted __le32 [usertype] dw2_flags
>> drivers/scsi/elx/efct/efct_hw.c:4213:20: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge0_flags @@    got restrunsigned int [usertype] sge0_flags @@
>> drivers/scsi/elx/efct/efct_hw.c:4213:20: sparse:    expected unsigned int [usertype] sge0_flags
   drivers/scsi/elx/efct/efct_hw.c:4213:20: sparse:    got restricted __le32 [usertype] dw2_flags
>> drivers/scsi/elx/efct/efct_hw.c:4214:20: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge1_flags @@    got restrunsigned int [usertype] sge1_flags @@
>> drivers/scsi/elx/efct/efct_hw.c:4214:20: sparse:    expected unsigned int [usertype] sge1_flags
   drivers/scsi/elx/efct/efct_hw.c:4214:20: sparse:    got restricted __le32 [usertype] dw2_flags
>> drivers/scsi/elx/efct/efct_hw.c:4338:29: sparse: sparse: cast from restricted __be16
   drivers/scsi/elx/efct/efct_hw.c:4339:29: sparse: sparse: cast from restricted __be16
--
>> drivers/scsi/elx/efct/efct_unsol.c:910:28: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [assigned] [usertype] ba_low_seq_cnt @@    got e16 [assigned] [usertype] ba_low_seq_cnt @@
>> drivers/scsi/elx/efct/efct_unsol.c:910:28: sparse:    expected restricted __be16 [assigned] [usertype] ba_low_seq_cnt
>> drivers/scsi/elx/efct/efct_unsol.c:910:28: sparse:    got unsigned short [usertype]
>> drivers/scsi/elx/efct/efct_unsol.c:911:29: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __be16 [assigned] [usertype] ba_high_seq_cnt @@    got e16 [assigned] [usertype] ba_high_seq_cnt @@
>> drivers/scsi/elx/efct/efct_unsol.c:911:29: sparse:    expected restricted __be16 [assigned] [usertype] ba_high_seq_cnt
   drivers/scsi/elx/efct/efct_unsol.c:911:29: sparse:    got unsigned short [usertype]
--
>> drivers/scsi/elx/libefc_sli/sli4.c:152:31: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] dw3_version @@    got restrunsigned int [usertype] dw3_version @@
>> drivers/scsi/elx/libefc_sli/sli4.c:152:31: sparse:    expected unsigned int [usertype] dw3_version
>> drivers/scsi/elx/libefc_sli/sli4.c:152:31: sparse:    got restricted __le32 [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:153:18: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned long [assigned] [usertype] cmd_size @@    got d long [assigned] [usertype] cmd_size @@
>> drivers/scsi/elx/libefc_sli/sli4.c:153:18: sparse:    expected unsigned long [assigned] [usertype] cmd_size
   drivers/scsi/elx/libefc_sli/sli4.c:153:18: sparse:    got restricted __le32 [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:155:34: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le32 [usertype] request_length @@    got unsignerestricted __le32 [usertype] request_length @@
>> drivers/scsi/elx/libefc_sli/sli4.c:155:34: sparse:    expected restricted __le32 [usertype] request_length
>> drivers/scsi/elx/libefc_sli/sli4.c:155:34: sparse:    got unsigned long [assigned] [usertype] cmd_size
   drivers/scsi/elx/libefc_sli/sli4.c:275:37: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] dw3_version @@    got restrunsigned int [usertype] dw3_version @@
   drivers/scsi/elx/libefc_sli/sli4.c:275:37: sparse:    expected unsigned int [usertype] dw3_version
   drivers/scsi/elx/libefc_sli/sli4.c:275:37: sparse:    got restricted __le32 [usertype]
   drivers/scsi/elx/libefc_sli/sli4.c:416:37: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] dw3_version @@    got restrunsigned int [usertype] dw3_version @@
   drivers/scsi/elx/libefc_sli/sli4.c:416:37: sparse:    expected unsigned int [usertype] dw3_version
   drivers/scsi/elx/libefc_sli/sli4.c:416:37: sparse:    got restricted __le32 [usertype]
   drivers/scsi/elx/libefc_sli/sli4.c:550:29: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] dw3_version @@    got restrunsigned int [usertype] dw3_version @@
   drivers/scsi/elx/libefc_sli/sli4.c:550:29: sparse:    expected unsigned int [usertype] dw3_version
   drivers/scsi/elx/libefc_sli/sli4.c:550:29: sparse:    got restricted __le32 [usertype]
   drivers/scsi/elx/libefc_sli/sli4.c:748:29: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] dw3_version @@    got restrunsigned int [usertype] dw3_version @@
   drivers/scsi/elx/libefc_sli/sli4.c:748:29: sparse:    expected unsigned int [usertype] dw3_version
   drivers/scsi/elx/libefc_sli/sli4.c:748:29: sparse:    got restricted __le32 [usertype]
   drivers/scsi/elx/libefc_sli/sli4.c:846:32: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] dw3_version @@    got restrunsigned int [usertype] dw3_version @@
   drivers/scsi/elx/libefc_sli/sli4.c:846:32: sparse:    expected unsigned int [usertype] dw3_version
   drivers/scsi/elx/libefc_sli/sli4.c:846:32: sparse:    got restricted __le32 [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:1078:33: sparse: sparse: Using plain integer as NULL pointer
   drivers/scsi/elx/libefc_sli/sli4.c:1526:33: sparse: sparse: Using plain integer as NULL pointer
>> drivers/scsi/elx/libefc_sli/sli4.h:3029:17: sparse: sparse: invalid assignment: &=
>> drivers/scsi/elx/libefc_sli/sli4.h:3029:17: sparse:    left side has type unsigned int
>> drivers/scsi/elx/libefc_sli/sli4.h:3029:17: sparse:    right side has type restricted __le32
   drivers/scsi/elx/libefc_sli/sli4.h:3030:17: sparse: sparse: invalid assignment: |=
   drivers/scsi/elx/libefc_sli/sli4.h:3030:17: sparse:    left side has type unsigned int
>> drivers/scsi/elx/libefc_sli/sli4.h:3030:17: sparse:    right side has type restricted __le16
>> drivers/scsi/elx/libefc_sli/sli4.c:2486:47: sparse: sparse: restricted __le32 degrades to integer
   drivers/scsi/elx/libefc_sli/sli4.c:2487:48: sparse: sparse: restricted __le32 degrades to integer
>> drivers/scsi/elx/libefc_sli/sli4.c:2486:38: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] payload_offset_length @@    got  [usertype] payload_offset_length @@
>> drivers/scsi/elx/libefc_sli/sli4.c:2486:38: sparse:    expected restricted __le16 [usertype] payload_offset_length
>> drivers/scsi/elx/libefc_sli/sli4.c:2486:38: sparse:    got unsigned int
>> drivers/scsi/elx/libefc_sli/sli4.c:2589:41: sparse: sparse: cast from restricted __le32
>> drivers/scsi/elx/libefc_sli/sli4.c:2591:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge_flags @@    got restrunsigned int [usertype] sge_flags @@
>> drivers/scsi/elx/libefc_sli/sli4.c:2591:27: sparse:    expected unsigned int [usertype] sge_flags
>> drivers/scsi/elx/libefc_sli/sli4.c:2591:27: sparse:    got restricted __le32 [usertype] dw2_flags
>> drivers/scsi/elx/libefc_sli/sli4.c:2594:34: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le32 [usertype] dw2_flags @@    got unsignrestricted __le32 [usertype] dw2_flags @@
>> drivers/scsi/elx/libefc_sli/sli4.c:2594:34: sparse:    expected restricted __le32 [usertype] dw2_flags
>> drivers/scsi/elx/libefc_sli/sli4.c:2594:34: sparse:    got unsigned int [assigned] [usertype] sge_flags
   drivers/scsi/elx/libefc_sli/sli4.c:2597:47: sparse: sparse: restricted __le32 degrades to integer
   drivers/scsi/elx/libefc_sli/sli4.c:2598:48: sparse: sparse: restricted __le32 degrades to integer
   drivers/scsi/elx/libefc_sli/sli4.c:2597:38: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] payload_offset_length @@    got  [usertype] payload_offset_length @@
   drivers/scsi/elx/libefc_sli/sli4.c:2597:38: sparse:    expected restricted __le16 [usertype] payload_offset_length
   drivers/scsi/elx/libefc_sli/sli4.c:2597:38: sparse:    got unsigned int
   drivers/scsi/elx/libefc_sli/sli4.c:2719:41: sparse: sparse: cast from restricted __le32
   drivers/scsi/elx/libefc_sli/sli4.c:2720:27: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] sge_flags @@    got restrunsigned int [usertype] sge_flags @@
   drivers/scsi/elx/libefc_sli/sli4.c:2720:27: sparse:    expected unsigned int [usertype] sge_flags
   drivers/scsi/elx/libefc_sli/sli4.c:2720:27: sparse:    got restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/libefc_sli/sli4.c:2723:34: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le32 [usertype] dw2_flags @@    got unsignrestricted __le32 [usertype] dw2_flags @@
   drivers/scsi/elx/libefc_sli/sli4.c:2723:34: sparse:    expected restricted __le32 [usertype] dw2_flags
   drivers/scsi/elx/libefc_sli/sli4.c:2723:34: sparse:    got unsigned int [assigned] [usertype] sge_flags
   drivers/scsi/elx/libefc_sli/sli4.c:2726:48: sparse: sparse: restricted __le32 degrades to integer
   drivers/scsi/elx/libefc_sli/sli4.c:2727:48: sparse: sparse: restricted __le32 degrades to integer
   drivers/scsi/elx/libefc_sli/sli4.c:2726:39: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] payload_offset_length @@    got  [usertype] payload_offset_length @@
   drivers/scsi/elx/libefc_sli/sli4.c:2726:39: sparse:    expected restricted __le16 [usertype] payload_offset_length
   drivers/scsi/elx/libefc_sli/sli4.c:2726:39: sparse:    got unsigned int
>> drivers/scsi/elx/libefc_sli/sli4.c:2930:35: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int @@    got restricted __le32unsigned int @@
>> drivers/scsi/elx/libefc_sli/sli4.c:2930:35: sparse:    expected unsigned int
   drivers/scsi/elx/libefc_sli/sli4.c:2930:35: sparse:    got restricted __le32 [usertype]
   drivers/scsi/elx/libefc_sli/sli4.c:3083:34: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int @@    got restricted __le32unsigned int @@
   drivers/scsi/elx/libefc_sli/sli4.c:3083:34: sparse:    expected unsigned int
   drivers/scsi/elx/libefc_sli/sli4.c:3083:34: sparse:    got restricted __le32 [usertype]
   drivers/scsi/elx/libefc_sli/sli4.c:3159:47: sparse: sparse: restricted __le32 degrades to integer
   drivers/scsi/elx/libefc_sli/sli4.c:3245:35: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int @@    got restricted __le32unsigned int @@
   drivers/scsi/elx/libefc_sli/sli4.c:3245:35: sparse:    expected unsigned int
   drivers/scsi/elx/libefc_sli/sli4.c:3245:35: sparse:    got restricted __le32 [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:3423:19: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le16 [usertype] cq_id @@    got e] cq_id @@
>> drivers/scsi/elx/libefc_sli/sli4.c:3423:19: sparse:    expected restricted __le16 [usertype] cq_id
>> drivers/scsi/elx/libefc_sli/sli4.c:3423:19: sparse:    got int
>> drivers/scsi/elx/libefc_sli/sli4.c:3471:37: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:3472:36: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:3482:22: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:3483:22: sparse: sparse: cast from restricted __le16
>> drivers/scsi/elx/libefc_sli/sli4.c:3577:42: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le32 [usertype] els_response_payload_length @@    got icted __le32 [usertype] els_response_payload_length @@
>> drivers/scsi/elx/libefc_sli/sli4.c:3577:42: sparse:    expected restricted __le32 [usertype] els_response_payload_length
>> drivers/scsi/elx/libefc_sli/sli4.c:3577:42: sparse:    got unsigned int [usertype] rsp_len
>> drivers/scsi/elx/libefc_sli/sli4.c:3681:38: sparse: sparse: invalid assignment: |=
>> drivers/scsi/elx/libefc_sli/sli4.c:3681:38: sparse:    left side has type restricted __le32
>> drivers/scsi/elx/libefc_sli/sli4.c:3681:38: sparse:    right side has type unsigned int
   drivers/scsi/elx/libefc_sli/sli4.c:3718:46: sparse: sparse: invalid assignment: |=
   drivers/scsi/elx/libefc_sli/sli4.c:3718:46: sparse:    left side has type restricted __le32
   drivers/scsi/elx/libefc_sli/sli4.c:3718:46: sparse:    right side has type unsigned int
   drivers/scsi/elx/libefc_sli/sli4.c:3754:25: sparse: sparse: invalid assignment: |=
>> drivers/scsi/elx/libefc_sli/sli4.c:3754:25: sparse:    left side has type restricted __le16
>> drivers/scsi/elx/libefc_sli/sli4.c:3754:25: sparse:    right side has type int
   drivers/scsi/elx/libefc_sli/sli4.c:3755:25: sparse: sparse: invalid assignment: |=
   drivers/scsi/elx/libefc_sli/sli4.c:3755:25: sparse:    left side has type restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:3755:25: sparse:    right side has type int
>> drivers/scsi/elx/libefc_sli/sli4.c:4001:23: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned short [usertype] @@    got resunsigned short [usertype] @@
>> drivers/scsi/elx/libefc_sli/sli4.c:4001:23: sparse:    expected unsigned short [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:4001:23: sparse:    got restricted __le16 [usertype] rq_id
>> drivers/scsi/elx/libefc_sli/sli4.c:4863:50: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le32 [usertype] available_length_dword @@    got restricted __le32 [usertype] available_length_dword @@
>> drivers/scsi/elx/libefc_sli/sli4.c:4863:50: sparse:    expected restricted __le32 [usertype] available_length_dword
>> drivers/scsi/elx/libefc_sli/sli4.c:4863:50: sparse:    got restricted __le16 [usertype]
   drivers/scsi/elx/libefc_sli/sli4.c:4993:43: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:4996:43: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:4999:43: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:5002:43: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:5056:47: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:5059:47: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:5062:47: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:5065:47: sparse: sparse: cast from restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:5176:30: sparse: sparse: invalid assignment: |=
   drivers/scsi/elx/libefc_sli/sli4.c:5176:30: sparse:    left side has type restricted __le16
   drivers/scsi/elx/libefc_sli/sli4.c:5176:30: sparse:    right side has type int
   drivers/scsi/elx/libefc_sli/sli4.c:5663:33: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] dw3_version @@    got restrunsigned int [usertype] dw3_version @@
   drivers/scsi/elx/libefc_sli/sli4.c:5663:33: sparse:    expected unsigned int [usertype] dw3_version
   drivers/scsi/elx/libefc_sli/sli4.c:5663:33: sparse:    got restricted __le32 [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:7366:53: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le32 [usertype] page1_low @@    got icted __le32 [usertype] page1_low @@
>> drivers/scsi/elx/libefc_sli/sli4.c:7366:53: sparse:    expected restricted __le32 [usertype] page1_low
>> drivers/scsi/elx/libefc_sli/sli4.c:7366:53: sparse:    got unsigned int [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:7368:54: sparse: sparse: incorrect type in assignment (different base types) @@    expected restricted __le32 [usertype] page1_high @@    got icted __le32 [usertype] page1_high @@
>> drivers/scsi/elx/libefc_sli/sli4.c:7368:54: sparse:    expected restricted __le32 [usertype] page1_high
   drivers/scsi/elx/libefc_sli/sli4.c:7368:54: sparse:    got unsigned int [usertype]
>> drivers/scsi/elx/libefc_sli/sli4.c:7404:22: sparse: sparse: incorrect type in assignment (different base types) @@    expected unsigned int [usertype] payload_size @@    got restrunsigned int [usertype] payload_size @@
>> drivers/scsi/elx/libefc_sli/sli4.c:7404:22: sparse:    expected unsigned int [usertype] payload_size
   drivers/scsi/elx/libefc_sli/sli4.c:7404:22: sparse:    got restricted __le32 [usertype]
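
Most of the __be16/__be32/__le16/__le32 warnings above are the same class
of issue: a CPU-endian value assigned to (or read from) an endian-annotated
field without an explicit conversion. A minimal sketch of the usual fix
pattern follows; the struct and helper here are illustrative only, not the
driver's actual layout:

	#include <linux/types.h>
	#include <asm/byteorder.h>

	struct example_hdr {
		__be16 ox_id;	/* wire format: big endian */
		__le16 rq_id;	/* wire format: little endian */
	};

	static void fill_hdr(struct example_hdr *hdr, u16 tag, u16 id)
	{
		/* hdr->ox_id = tag; would trigger the sparse warning */
		hdr->ox_id = cpu_to_be16(tag);	/* explicit CPU to BE conversion */
		hdr->rq_id = cpu_to_le16(id);	/* explicit CPU to LE conversion */
	}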

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

* [RFC PATCH] elx: efct: efct_libefc_templ can be static
  2019-10-23 21:55 ` [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
  2019-10-26  0:34   ` kbuild test robot
  2019-10-26 14:13   ` kbuild test robot
@ 2019-10-26 14:13   ` kbuild test robot
  2 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2019-10-26 14:13 UTC (permalink / raw)
  To: James Smart; +Cc: kbuild-all, linux-scsi, James Smart, Ram Vegesna


Fixes: 24d4401b1dd0 ("elx: efct: Tie into kernel Kconfig and build process")
Signed-off-by: kbuild test robot <lkp@intel.com>
---
 efct_driver.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
index 4928e5753d88c..b00fc00a6eb02 100644
--- a/drivers/scsi/elx/efct/efct_driver.c
+++ b/drivers/scsi/elx/efct/efct_driver.c
@@ -46,7 +46,7 @@ struct efct_fw_write_result {
 	u32 change_status;
 };
 
-struct libefc_function_template efct_libefc_templ = {
+static struct libefc_function_template efct_libefc_templ = {
 	.hw_domain_alloc = efct_hw_domain_alloc,
 	.hw_domain_attach = efct_hw_domain_attach,
 	.hw_domain_free = efct_hw_domain_free,

* Re: [PATCH 24/32] elx: efct: LIO backend interface routines
  2019-10-24 22:27   ` Bart Van Assche
@ 2019-10-28 17:49     ` James Smart
  2019-10-28 18:31       ` Bart Van Assche
  0 siblings, 1 reply; 54+ messages in thread
From: James Smart @ 2019-10-28 17:49 UTC (permalink / raw)
  To: Bart Van Assche, linux-scsi; +Cc: Ram Vegesna

Thank you, Bart. We've gone through your comments and mostly agree with
them and will be making the corresponding changes.

For the few exceptions, and where you had a couple of questions, see below.

-- james



On 10/24/2019 3:27 PM, Bart Van Assche wrote:
> Additionally, what is a "virtual target"?

The code meant virtual port (NPIV port). The comments will be updated.


>> +static ssize_t
>> +efct_lio_wwn_version_show(struct config_item *item, char *page)
>> +{
>> +    return sprintf(page, "Emulex EFCT fabric module version %s\n",
>> +               __stringify(EFCT_LIO_VERSION));
>> +}
> 
> Version numbers are not useful in upstream code. Please remove this 
> attribute and also the EFCT_LIO_VERSION constant.

From my time with lpfc, I disagree. It's true that, looking only at the
upstream kernel, a version doesn't mean much. But when it comes to the
distros, which may cherry-pick a lot, it has certainly helped to have a
version string that gives a general understanding of what's there. Granted,
you must still look at the code for the actual content, but it's a good
indicator.



>> +static const struct file_operations efct_debugfs_session_fops = {
>> +    .owner        = THIS_MODULE,
>> +    .open        = efct_debugfs_session_open,
>> +    .release    = efct_debugfs_session_close,
>> +    .read        = efct_debugfs_session_read,
>> +    .write        = efct_debugfs_session_write,
>> +    .llseek        = default_llseek,
>> +};
>> +
>> +static const struct file_operations efct_npiv_debugfs_session_fops = {
>> +    .owner        = THIS_MODULE,
>> +    .open        = efct_npiv_debugfs_session_open,
>> +    .release    = efct_debugfs_session_close,
>> +    .read        = efct_debugfs_session_read,
>> +    .write        = efct_debugfs_session_write,
>> +    .llseek        = default_llseek,
>> +};
> 
> Since the information that is exported through debugfs (logged in 
> initiators) is information that is also useful for other target drivers, 
> I think this functionality should be implemented in the target core 
> instead of in this target driver.

Can you expand further on what you'd like to see and the format of the
data to be displayed?

I'll see if it makes sense.

-- james

* Re: [PATCH 24/32] elx: efct: LIO backend interface routines
  2019-10-28 17:49     ` James Smart
@ 2019-10-28 18:31       ` Bart Van Assche
  0 siblings, 0 replies; 54+ messages in thread
From: Bart Van Assche @ 2019-10-28 18:31 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Ram Vegesna, Mike Christie

On 10/28/19 10:49 AM, James Smart wrote:
>> On 10/24/2019 3:27 PM, Bart Van Assche wrote:
>>> +static const struct file_operations efct_debugfs_session_fops = {
>>> +    .owner        = THIS_MODULE,
>>> +    .open        = efct_debugfs_session_open,
>>> +    .release    = efct_debugfs_session_close,
>>> +    .read        = efct_debugfs_session_read,
>>> +    .write        = efct_debugfs_session_write,
>>> +    .llseek        = default_llseek,
>>> +};
>>> +
>>> +static const struct file_operations efct_npiv_debugfs_session_fops = {
>>> +    .owner        = THIS_MODULE,
>>> +    .open        = efct_npiv_debugfs_session_open,
>>> +    .release    = efct_debugfs_session_close,
>>> +    .read        = efct_debugfs_session_read,
>>> +    .write        = efct_debugfs_session_write,
>>> +    .llseek        = default_llseek,
>>> +};
>>
>> Since the information that is exported through debugfs (logged in 
>> initiators) is information that is also useful for other target 
>> drivers, I think this functionality should be implemented in the 
>> target core instead of in this target driver.
> 
> Can you expand further on what you'd like to see and the format of the
> data to be displayed?

(+Mike)

Mike, can you comment on the status of your patch "target: add session 
dir in configfs" (https://patchwork.kernel.org/patch/10525321/)?

Thanks,

Bart.

end of thread, other threads:[~2019-10-28 18:32 UTC | newest]

Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-23 21:55 [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
2019-10-23 21:55 ` [PATCH 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
2019-10-24 16:22   ` Daniel Wagner
2019-10-25 23:04     ` James Smart
2019-10-23 21:55 ` [PATCH 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
2019-10-25  9:59   ` Daniel Wagner
2019-10-25 23:00     ` James Smart
2019-10-23 21:55 ` [PATCH 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
2019-10-25 11:19   ` Daniel Wagner
2019-10-25 12:20     ` Steffen Maier
2019-10-25 22:10       ` James Smart
2019-10-25 22:42     ` James Smart
2019-10-23 21:55 ` [PATCH 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
2019-10-25 15:35   ` Daniel Wagner
2019-10-25 22:24     ` James Smart
2019-10-23 21:55 ` [PATCH 05/32] elx: libefc_sli: Populate and post different WQEs James Smart
2019-10-23 21:55 ` [PATCH 06/32] elx: libefc_sli: bmbx routines and SLI config commands James Smart
2019-10-23 21:55 ` [PATCH 07/32] elx: libefc_sli: APIs to setup SLI library James Smart
2019-10-23 21:55 ` [PATCH 08/32] elx: libefc: Generic state machine framework James Smart
2019-10-23 21:55 ` [PATCH 09/32] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
2019-10-23 21:55 ` [PATCH 10/32] elx: libefc: FC Domain state machine interfaces James Smart
2019-10-23 21:55 ` [PATCH 11/32] elx: libefc: SLI and FC PORT " James Smart
2019-10-23 21:55 ` [PATCH 12/32] elx: libefc: Remote node " James Smart
2019-10-23 21:55 ` [PATCH 13/32] elx: libefc: Fabric " James Smart
2019-10-23 21:55 ` [PATCH 14/32] elx: libefc: FC node ELS and state handling James Smart
2019-10-23 21:55 ` [PATCH 15/32] elx: efct: Data structures and defines for hw operations James Smart
2019-10-23 21:55 ` [PATCH 16/32] elx: efct: Driver initialization routines James Smart
2019-10-23 21:55 ` [PATCH 17/32] elx: efct: Hardware queues creation and deletion James Smart
2019-10-23 21:55 ` [PATCH 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
2019-10-23 21:55 ` [PATCH 19/32] elx: efct: Hardware IO and SGL initialization James Smart
2019-10-23 21:55 ` [PATCH 20/32] elx: efct: Hardware queues processing James Smart
2019-10-23 21:55 ` [PATCH 21/32] elx: efct: Unsolicited FC frame processing routines James Smart
2019-10-23 21:55 ` [PATCH 22/32] elx: efct: Extended link Service IO handling James Smart
2019-10-23 21:55 ` [PATCH 23/32] elx: efct: SCSI IO handling routines James Smart
2019-10-23 21:55 ` [PATCH 24/32] elx: efct: LIO backend interface routines James Smart
2019-10-24 22:27   ` Bart Van Assche
2019-10-28 17:49     ` James Smart
2019-10-28 18:31       ` Bart Van Assche
2019-10-23 21:55 ` [PATCH 25/32] elx: efct: Hardware IO submission routines James Smart
2019-10-23 21:55 ` [PATCH 26/32] elx: efct: link statistics and SFP data James Smart
2019-10-23 21:55 ` [PATCH 27/32] elx: efct: xport and hardware teardown routines James Smart
2019-10-23 21:55 ` [PATCH 28/32] elx: efct: IO timeout handling routines James Smart
2019-10-23 21:55 ` [PATCH 29/32] elx: efct: Firmware update, async link processing James Smart
2019-10-23 21:55 ` [PATCH 30/32] elx: efct: scsi_transport_fc host interface support James Smart
2019-10-23 21:55 ` [PATCH 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
2019-10-25 15:55   ` Daniel Wagner
2019-10-25 22:47     ` James Smart
2019-10-23 21:55 ` [PATCH 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
2019-10-26  0:34   ` kbuild test robot
2019-10-26  0:39     ` Randy Dunlap
2019-10-26 14:13   ` kbuild test robot
2019-10-26 14:13   ` [RFC PATCH] elx: efct: efct_libefc_templ can be static kbuild test robot
2019-10-25 15:56 ` [PATCH 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver Daniel Wagner
2019-10-25 22:31   ` James Smart
